Abstract
Network Virtualization (NV) techniques allow multiple virtual network requests to beneficially share resources of the same substrate network, such as node computational resources and link bandwidth. As the most prominent member of the NV family, virtual network embedding efficiently allocates the limited resources of a shared substrate network to its users. However, traditional heuristic virtual network embedding algorithms generally follow a static operating mechanism, which cannot adapt well to dynamic network structures and environments, resulting in inferior node ranking and embedding strategies. Some reinforcement learning aided embedding algorithms have been conceived to dynamically update the decision-making strategy, but they treat the node embedding of a single request as a set of discrete decisions and ignore its continuity. To address this problem, a Continuous-Decision virtual network embedding scheme relying on Reinforcement Learning (CDRL) is proposed in our paper, which regards the node embedding of the same request as a time-series problem formulated by the classic seq2seq model. Moreover, two traditional heuristic embedding algorithms as well as a classic reinforcement learning aided embedding algorithm are used for benchmarking our proposed CDRL algorithm. Finally, simulation results show that our proposed algorithm is superior to the other three algorithms in terms of long-term average revenue, revenue-to-cost ratio and acceptance ratio.
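To make the continuous-decision idea concrete, the sketch below illustrates (under our own assumptions, not the paper's released code) how a seq2seq-style policy could place the virtual nodes of one request over the substrate nodes one step at a time, feeding each choice back into the decoder so that later decisions depend on earlier ones, and be trained with a REINFORCE-style update. All module names, feature sizes and the reward signal are illustrative placeholders.

```python
# Illustrative sketch only: a seq2seq policy mapping a sequence of virtual-node
# features to a sequence of substrate-node choices, trainable with REINFORCE.
import torch
import torch.nn as nn

class Seq2SeqEmbedder(nn.Module):
    def __init__(self, vnode_feat=4, snode_feat=4, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(vnode_feat, hidden, batch_first=True)  # encode the whole request
        self.decoder = nn.GRUCell(snode_feat, hidden)                # one decision per step
        self.snode_proj = nn.Linear(snode_feat, hidden)              # keys for attention-style scoring

    def forward(self, vnodes, snodes):
        # vnodes: (1, n_v, vnode_feat) virtual-node features (e.g. CPU demand, degree)
        # snodes: (n_s, snode_feat)   substrate-node features (e.g. free CPU, degree)
        _, h = self.encoder(vnodes)
        h = h.squeeze(0)                              # (1, hidden) request summary
        keys = self.snode_proj(snodes)                # (n_s, hidden)
        log_probs, choices = [], []
        dec_in = snodes.mean(dim=0, keepdim=True)     # initial decoder input
        for _ in range(vnodes.size(1)):               # place each virtual node in turn
            h = self.decoder(dec_in, h)
            logits = keys @ h.squeeze(0)              # scores over candidate substrate nodes
            dist = torch.distributions.Categorical(logits=logits)
            a = dist.sample()
            log_probs.append(dist.log_prob(a))
            choices.append(a.item())
            dec_in = snodes[a].unsqueeze(0)           # feed the chosen node back (continuity)
        return choices, torch.stack(log_probs).sum()

# REINFORCE-style update against a scalar embedding reward (e.g. revenue-to-cost):
#   loss = -(reward - baseline) * total_log_prob; loss.backward(); optimizer.step()
```

In practice one would also mask substrate nodes whose residual capacity cannot satisfy the current virtual node, and reject the request if any step has no feasible candidate; those details are omitted here for brevity.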
| Original language | English |
|---|---|
| Article number | 8982091 |
| Pages (from-to) | 864-875 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Network and Service Management |
| Volume | 17 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - Jun 2020 |
Keywords
- Reinforcement learning
- continuous decision
- seq2seq
- time-series
- virtual network embedding
ASJC Scopus subject areas
- Computer Networks and Communications
- Electrical and Electronic Engineering