Abstract
In mobile edge networks (MENs), task offloading services are expected to free mobile users (MUs) from complex tasks, which helps reduce service latency and thus improves the quality of service (QoS). Although well-proven cooperative task offloading proposals effectively address such requirements by encouraging cooperation among MUs and edge servers, the quality of experience (QoE) from the MUs' perspective remains low. Exploring a QoE-based offloading mechanism that better serves MUs in MENs is therefore an urgent task. In this article, by reshaping the conventional offloading process, we explore a novel cooperative offloading mechanism to improve QoE. Our contributions are threefold: the task-hub construction optimizes traditional MENs, supporting efficient MU management and low-latency communication; the task preprocessing thoroughly refines task inputs according to their priority and redundancy, resulting in higher QoE; and the task scheduling with a Deep Reinforcement Learning (DRL) algorithm named Double Dueling Deterministic Policy Gradient (Double DDPG) makes rational offloading policies that minimize service latency. Experimental results show that our proposed approach achieves much higher QoE than existing schemes.
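The abstract names the Double DDPG scheduler but does not spell out its update rule. The sketch below illustrates only the double-critic ingredient of a deterministic policy gradient learner; the dueling decomposition and the authors' network architecture, exploration strategy, and hyperparameters are not described in the abstract, so every dimension, name, and setting here is an illustrative assumption rather than the published design.

```python
# A minimal sketch of a deterministic policy gradient learner with two
# ("double") critics, in the spirit of the Double DDPG scheduler named in
# the abstract. All sizes and hyperparameters below are assumptions.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 2   # assumed task-scheduling state/action sizes
GAMMA, TAU = 0.99, 0.005       # assumed discount and soft-update rate

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

actor = mlp(STATE_DIM, ACTION_DIM)                             # deterministic policy mu(s)
critics = [mlp(STATE_DIM + ACTION_DIM, 1) for _ in range(2)]   # double Q-networks
actor_tgt = mlp(STATE_DIM, ACTION_DIM)
critic_tgts = [mlp(STATE_DIM + ACTION_DIM, 1) for _ in range(2)]
actor_tgt.load_state_dict(actor.state_dict())
for c, ct in zip(critics, critic_tgts):
    ct.load_state_dict(c.state_dict())

actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opts = [torch.optim.Adam(c.parameters(), lr=1e-3) for c in critics]

def update(s, a, r, s2, done):
    """One gradient step on a batch of transitions (s, a, r, s2, done)."""
    with torch.no_grad():
        a2 = torch.tanh(actor_tgt(s2))  # target action for the next state
        # Double-critic trick: take the minimum of the two target Q-values
        # to curb the overestimation bias of a single critic.
        q2 = torch.min(*[ct(torch.cat([s2, a2], dim=1)) for ct in critic_tgts])
        target = r + GAMMA * (1.0 - done) * q2
    for c, opt in zip(critics, critic_opts):
        loss = ((c(torch.cat([s, a], dim=1)) - target) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    # Deterministic policy gradient: push actions toward higher Q under critic 0.
    pi_loss = -critics[0](torch.cat([s, torch.tanh(actor(s))], dim=1)).mean()
    actor_opt.zero_grad(); pi_loss.backward(); actor_opt.step()
    # Polyak-average the target networks toward the online networks.
    with torch.no_grad():
        for net, tgt in [(actor, actor_tgt)] + list(zip(critics, critic_tgts)):
            for p, pt in zip(net.parameters(), tgt.parameters()):
                pt.mul_(1 - TAU).add_(TAU * p)
```

In the scheduling setting of the article, the state would plausibly encode task-hub queue and channel conditions and the action an offloading decision, but the published scheme may differ in both representation and training details.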
| Original language | English |
| --- | --- |
| Article number | 9076115 |
| Pages (from-to) | 111-117 |
| Number of pages | 7 |
| Journal | IEEE Wireless Communications |
| Volume | 27 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - Jun 2020 |
ASJC Scopus subject areas
- Computer Science Applications
- Electrical and Electronic Engineering