TY - GEN
T1 - Parallel Network Slicing for Multi-SP Services
AU - Han, Rongxin
AU - Chen, Dezhi
AU - Guo, Song
AU - Fu, Xiaoyuan
AU - Wang, Jingyu
AU - Qi, Qi
AU - Liao, Jianxin
N1 - Funding Information:
This work was supported in part by the National Key R&D Program of China under Grant 2020YFB1807800, in part by the National Natural Science Foundation of China under Grants 62171057, 62071067, 62101064, and 62001054, in part by the Beijing University of Posts and Telecommunications-China Mobile Research Institute Joint Innovation Center, and in part by the Key-Area Research and Development Program of Guangdong Province under Grant 2021B0101400003.
Publisher Copyright:
© 2022 ACM.
PY - 2022/8/29
Y1 - 2022/8/29
N2 - Network slicing is rapidly prevailing in the edge cloud, which provides computing, network, and storage resources for various services. When multiple service providers (SPs) respond to their tenants in parallel, individual decisions on the dynamic, shared edge cloud may lead to resource conflicts. The resource conflict problem can be formulated as a multi-objective constrained optimization model; however, it is challenging to solve due to the complexity of resource interactions caused by co-existing multi-SP policies. Therefore, we propose CommDRL, a scheme based on multi-agent deep reinforcement learning (MADRL) and multi-agent communication, to tackle this challenge. CommDRL can coordinate network resources between SPs with low overhead. Moreover, we design neurons hotplugging learning in CommDRL to deal with the dynamic edge cloud, which achieves scalability without the high cost of model retraining. Experiments demonstrate that CommDRL successfully obtains deployment policies and easily adapts to various network scales. It improves the number of accepted requests by 7.4%, reduces resource conflicts by 14.5%, and shortens model convergence time by 83.3%.
KW - MADRL
KW - Multi-agent communication
KW - Network slicing
KW - Neurons hotplugging learning
KW - Resource conflict
UR - http://www.scopus.com/inward/record.url?scp=85148584541&partnerID=8YFLogxK
U2 - 10.1145/3545008.3545070
DO - 10.1145/3545008.3545070
M3 - Conference article published in proceeding or book
AN - SCOPUS:85148584541
T3 - ACM International Conference Proceeding Series
SP - 1
EP - 11
BT - 51st International Conference on Parallel Processing, ICPP 2022 - Main Conference Proceedings
PB - Association for Computing Machinery
T2 - 51st International Conference on Parallel Processing, ICPP 2022
Y2 - 29 August 2022 through 1 September 2022
ER -