Abstract
Machine learning models are now widely used in wireless devices to infer network status from tracing data, e.g., link capacity and channel fading. To cope with the increasing complexity of the network environment, deep neural models are leveraged to mine high-dimensional network tracing data for a variety of intelligent applications. However, given the limited resources allocated to the network stack process, training deep neural models on-device is infeasible: computing power is constrained and large-scale labeled data is absent. Moreover, such devices can barely support quick inference with large models and thus cannot respond promptly to changing network conditions. In this paper, we propose a practical fast model inference system that runs high-accuracy models on tiny wireless devices constrained in both memory and CPU power. Specifically, we design a knowledge-distillation based training method for a lightweight device-side model that migrates knowledge from a well-trained deep model. We show that our system supports fast model inference on tiny devices: by inferring transmission collisions from channel errors, it improves the accuracy of link adaptation and thereby greatly improves network throughput in a multi-user access system. We have conducted practical experiments to verify our system and discuss possible extensions.
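The abstract does not give implementation details of the distillation step. As a rough illustration only, the sketch below shows the standard knowledge-distillation recipe (Hinton-style soft targets at a temperature combined with hard labels) that the described training method could plausibly follow; the function names, the temperature `T`, and the mixing weight `alpha` are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Standard KD loss: weighted sum of a soft-target KL term and a
    hard-label cross-entropy term. T and alpha are illustrative
    hyperparameters, not values reported in the paper."""
    # Soften both distributions with temperature T; scale by T^2 so the
    # gradient magnitude stays comparable to the cross-entropy term.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

def train_step(student, teacher, optimizer, x, labels):
    """One training step: the large teacher stays frozen; only the
    lightweight student (the model deployed on the device) is updated."""
    teacher.eval()
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    loss = distillation_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After training, only the small student network would need to fit within the device's memory and CPU budget, which is what enables the fast on-device inference the paper targets.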
| Original language | English |
| --- | --- |
| Title of host publication | 2023 IEEE 34th Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC) |
| Pages | 1-5 |
| Publication status | Published - 31 Oct 2023 |