Abstract
As a key enabling technology for 5G, network function virtualization abstracts services into software-based service function chains (SFCs), facilitating mission-critical services with high-reliability requirements. However, it is challenging to cost-effectively provide reliable SFCs in dynamic environments due to delayed rewards caused by future SFC requests, limited infrastructure resources, and heterogeneity in hardware and software reliability. Although deep reinforcement learning (DRL) can effectively capture delayed rewards in dynamic environments, its trial-and-error exploration in a vast solution space with massive numbers of infeasible solutions may lead to frequent constraint violations and trap the agent in poor local optima. To address these challenges, we propose a RuleDRL algorithm that combines the capability of DRL to capture delayed rewards with the strength of rule-based schemes to explore high-quality solutions without violating constraints. Specifically, we first formulate the reliable SFC provisioning problem as an integer nonlinear programming problem, which is proven to be NP-hard. Then, we jointly design DRL and rule-based schemes that are coupled to make the final decision, and establish a bounded approximation ratio in general cases. Extensive trace-driven simulations show that RuleDRL can reduce the total cost by up to 65.67% and improve the SFC acceptance ratio by up to 82%, compared to the state-of-the-art solution.
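To illustrate the coupling idea described in the abstract, the following is a minimal, hypothetical Python sketch: a DRL policy proposes an SFC placement, a rule-based scheme produces a constraint-respecting alternative, and the final decision keeps the cheaper feasible candidate. All names (`SFCRequest`, `drl_placement`, `rule_based_placement`, the toy cost and feasibility checks) are illustrative assumptions, not the paper's actual model or interfaces.

```python
# Hypothetical sketch of "DRL proposal + rule-based fallback" coupling.
# Not the paper's RuleDRL implementation; interfaces and cost model are assumed.
import random
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class SFCRequest:
    vnf_cpu: List[float]        # CPU demand of each VNF in the chain
    reliability_target: float   # end-to-end reliability requirement


@dataclass
class Node:
    cpu_free: float
    reliability: float          # combined hardware/software reliability


def is_feasible(req: SFCRequest, placement: List[int], nodes: List[Node]) -> bool:
    """Check resource and reliability constraints for a candidate placement."""
    used = {}
    for demand, n in zip(req.vnf_cpu, placement):
        used[n] = used.get(n, 0.0) + demand
    if any(used[n] > nodes[n].cpu_free for n in used):
        return False
    reliability = 1.0
    for n in set(placement):
        reliability *= nodes[n].reliability
    return reliability >= req.reliability_target


def cost(placement: List[int], nodes: List[Node]) -> float:
    """Toy cost: number of distinct nodes used (proxy for resource cost)."""
    return float(len(set(placement)))


def drl_placement(req: SFCRequest, nodes: List[Node]) -> List[int]:
    """Stand-in for the DRL policy's proposed action (random here)."""
    return [random.randrange(len(nodes)) for _ in req.vnf_cpu]


def rule_based_placement(req: SFCRequest, nodes: List[Node]) -> Optional[List[int]]:
    """Greedy rule: place each VNF on the most reliable node with spare CPU."""
    order = sorted(range(len(nodes)), key=lambda i: -nodes[i].reliability)
    remaining = [n.cpu_free for n in nodes]
    placement = []
    for demand in req.vnf_cpu:
        for i in order:
            if remaining[i] >= demand:
                remaining[i] -= demand
                placement.append(i)
                break
        else:
            return None  # no node can host this VNF
    return placement


def decide(req: SFCRequest, nodes: List[Node]) -> Optional[List[int]]:
    """Couple both candidates: keep the cheaper feasible one, else reject."""
    candidates = [drl_placement(req, nodes), rule_based_placement(req, nodes)]
    feasible = [p for p in candidates if p and is_feasible(req, p, nodes)]
    return min(feasible, key=lambda p: cost(p, nodes)) if feasible else None


if __name__ == "__main__":
    nodes = [Node(cpu_free=4.0, reliability=0.999), Node(cpu_free=2.0, reliability=0.99)]
    req = SFCRequest(vnf_cpu=[1.0, 1.0, 1.0], reliability_target=0.995)
    print("chosen placement:", decide(req, nodes))
```

The design point mirrored here is that the rule-based candidate guarantees a feasible (if conservative) fallback, while the learned proposal can undercut it on cost when it happens to satisfy the constraints.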
| Original language | English |
| --- | --- |
| Pages (from-to) | 3651-3664 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Services Computing |
| Volume | 16 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - 1 Sept 2023 |
Keywords
- 5G
- Bandwidth
- Costs
- deep reinforcement learning
- edge computing
- Hardware
- Network function virtualization
- Reliability
- Reliability theory
- Software
- Software reliability
ASJC Scopus subject areas
- Hardware and Architecture
- Computer Science Applications
- Computer Networks and Communications
- Information Systems and Management