QoE-Based Cooperative Task Offloading with Deep Reinforcement Learning in Mobile Edge Networks

Xiaoming He, Haodong Lu, Huawei Huang, Yingchi Mao, Kun Wang, Song Guo

Research output: Journal article › Academic research › peer-review

5 Citations (Scopus)

Abstract

In mobile edge networks (MENs), task offloading services are expected to free mobile users (MUs) from complex tasks, which helps reduce service latency and thus improves the Quality of Service (QoS). Although well-proven cooperative task offloading proposals address these requirements effectively by encouraging cooperation among MUs and edge nodes, the Quality of Experience (QoE) from the MUs' perspective remains low. Exploring a QoE-based offloading mechanism that better serves MUs in MENs is therefore an urgent task. In this article, by reshaping the conventional offloading process, we explore a novel cooperative offloading mechanism designed to improve QoE. Our contributions span three aspects: the task-hub construction optimizes traditional MENs, supporting efficient MU management and low-latency communication; the task preprocessing thoroughly refines the task inputs according to priority and redundancy, resulting in higher QoE; and the task scheduling with a Deep Reinforcement Learning (DRL) method named Double Dueling Deterministic Policy Gradient (Double DDPG) produces rational offloading policies that minimize service latency. Experimental results show that our proposed approach achieves much higher QoE than existing schemes.
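The abstract's scheduler builds on the deterministic policy gradient family. As a rough illustration of that family's core update (not the authors' Double DDPG, which additionally uses double critics and a dueling architecture), the sketch below performs one DDPG-style critic and actor step with toy linear models; all dimensions, learning rates, and the latency-as-negative-reward framing are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, action_dim = 4, 2

# Toy linear actor a = W_a @ s and linear critic Q(s, a) = w_q . [s; a].
W_a = rng.normal(scale=0.1, size=(action_dim, state_dim))
w_q = rng.normal(scale=0.1, size=state_dim + action_dim)

def actor(s):
    return W_a @ s

def critic(s, a):
    return w_q @ np.concatenate([s, a])

# One synthetic transition (s, a, r, s'); the reward is taken to be the
# negative service latency, so maximizing return minimizes latency.
s = rng.normal(size=state_dim)
a = actor(s)
r, s_next, gamma, lr = -1.0, rng.normal(size=state_dim), 0.99, 0.01

# Critic step: regress Q(s, a) toward the TD target r + gamma * Q(s', pi(s')).
target = r + gamma * critic(s_next, actor(s_next))
td_err = critic(s, a) - target
w_q = w_q - lr * td_err * np.concatenate([s, a])  # gradient of 0.5 * td_err**2
td_err_after = critic(s, a) - target              # error shrinks after the step

# Actor step: ascend the deterministic policy gradient dQ/da * da/dW_a.
dq_da = w_q[state_dim:]              # the critic is linear in the action
W_a = W_a + lr * np.outer(dq_da, s)  # chain rule through a = W_a @ s
```

In a full implementation these updates would run over minibatches from a replay buffer with slowly updated target networks; the sketch keeps only the two gradient steps that define the method.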

Original language: English
Article number: 9076115
Pages (from-to): 111-117
Number of pages: 7
Journal: IEEE Wireless Communications
Volume: 27
Issue number: 3
DOIs
Publication status: Published - Jun 2020

ASJC Scopus subject areas

  • Computer Science Applications
  • Electrical and Electronic Engineering
