Abstract
Recent breakthroughs in artificial intelligence (AI), especially deep neural networks (DNNs), have affected every branch of science and technology. In particular, edge AI has been envisioned as a major application scenario for providing DNN-based services at edge devices. This article presents effective methods for edge inference on resource-constrained devices. It focuses on device-edge co-inference, assisted by an edge computing server, and investigates a critical trade-off between the computational cost of the on-device model and the communication overhead of forwarding the intermediate feature to the edge server. A general three-step framework is proposed for effective inference: model split point selection to determine the on-device model, communication-aware model compression to simultaneously reduce the on-device computation and the resulting communication overhead, and task-oriented encoding of the intermediate feature to further reduce the communication overhead. Experiments demonstrate that the proposed framework achieves a better trade-off and significantly lower inference latency than baseline methods.
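The first step of the framework, split point selection, rests on the computation-communication trade-off described above. The sketch below is only an illustration of that trade-off, assuming PyTorch and a toy CNN rather than the article's models or code; the function name `profile_split_points` is hypothetical. It profiles each candidate split point by the parameters kept on the device (a rough proxy for on-device computation) against the size of the intermediate feature that would be transmitted to the edge server.

```python
# Illustrative sketch (not from the article): profiling candidate split points
# of a toy CNN to expose the on-device computation vs. communication trade-off.
import torch
import torch.nn as nn

# A toy backbone; the framework in the article targets general DNNs.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # candidate split 0-1
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # candidate split 2-3
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # candidate split 4-5
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 10),
)

def profile_split_points(model: nn.Sequential, x: torch.Tensor):
    """For each candidate split point, return the cumulative parameters kept
    on-device and the size in bytes of the intermediate feature that would be
    forwarded to the edge server."""
    results = []
    feature = x
    on_device_params = 0
    for idx, layer in enumerate(model):
        feature = layer(feature)
        on_device_params += sum(p.numel() for p in layer.parameters())
        feature_bytes = feature.numel() * feature.element_size()
        results.append((idx, on_device_params, feature_bytes))
    return results

if __name__ == "__main__":
    dummy = torch.randn(1, 3, 224, 224)  # one input image
    for idx, params, nbytes in profile_split_points(backbone, dummy):
        print(f"split after layer {idx}: {params:>7d} on-device params, "
              f"{nbytes / 1024:.1f} KiB intermediate feature")
```

In such a profile, earlier split points keep less computation on the device but emit larger features; the communication-aware compression and task-oriented encoding steps of the framework would then shrink both quantities further.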
| Original language | English |
| --- | --- |
| Article number | 9311935 |
| Pages (from-to) | 20-26 |
| Number of pages | 7 |
| Journal | IEEE Communications Magazine |
| Volume | 58 |
| Issue number | 12 |
| DOIs | |
| Publication status | Published - Dec 2020 |
ASJC Scopus subject areas
- Computer Science Applications
- Computer Networks and Communications
- Electrical and Electronic Engineering