Abstract
Drive-thru Internet has been considered an effective Internet access method for the Internet of Vehicles (IoV). Through opportunistic vehicle-to-roadside WiFi connections, it can provide high throughput at low communication cost for IoV applications such as intelligent transportation systems and automotive infotainment. However, its usability is strongly affected by a fundamental issue called rate adaptation (RA), which adjusts the modulation and coding rate to track the dynamic wireless channel between the vehicle and the roadside access point (AP). Conventional WiFi RA schemes are designed for indoor or quasi-static scenarios and do not account for the channel variations in drive-thru Internet. In this article, we study the limitations of applying existing RA schemes to drive-thru Internet and propose a reinforcement learning (RL)-based RA scheme that captures the underlying channel variation patterns and efficiently selects the rate for every egress frame of a vehicle. Simulation results demonstrate that the proposed RA scheme outperforms existing schemes in network throughput and that the learning model generalizes across various conditions. The proposed RA method offers useful insights for designing robust and scalable link adaptation protocols in IoV.
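To make the idea of RL-based rate adaptation concrete, the sketch below shows a minimal tabular Q-learning agent that picks an 802.11 rate for each outgoing frame and updates its estimates from ACK feedback. This is an illustrative approximation only: the state space (a coarse channel bin), the reward (nominal PHY rate on a successful transmission), and the `MCS_RATES` set are assumptions for demonstration, not the state, action, or reward design used in the paper.

```python
import random
from collections import defaultdict

# Hypothetical 802.11a/g rate set in Mb/s (assumed for illustration).
MCS_RATES = [6, 9, 12, 18, 24, 36, 48, 54]


class QLearningRateAdapter:
    """Tabular Q-learning over coarse channel states (e.g., binned SNR).

    The state/reward definitions here are illustrative sketches, not the
    paper's exact formulation.
    """

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        # One row of Q-values per channel state, one entry per rate index.
        self.q = defaultdict(lambda: [0.0] * len(MCS_RATES))
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability

    def select_rate(self, state):
        """Epsilon-greedy selection of a rate index for the next frame."""
        if random.random() < self.epsilon:
            return random.randrange(len(MCS_RATES))
        q_row = self.q[state]
        return max(range(len(MCS_RATES)), key=lambda a: q_row[a])

    def update(self, state, action, acked, next_state):
        """Update after observing whether the frame was ACKed."""
        # Assumed reward shaping: goodput proxy = PHY rate if ACKed, else 0.
        reward = MCS_RATES[action] if acked else 0.0
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])


# Usage sketch: one transmit/feedback cycle.
agent = QLearningRateAdapter()
state = 3                          # e.g., current SNR bin reported by the driver
rate_idx = agent.select_rate(state)
# ... transmit the frame at MCS_RATES[rate_idx], observe ACK and new SNR bin ...
agent.update(state, rate_idx, acked=True, next_state=4)
```

The key design point this sketch illustrates is that the agent learns from per-frame ACK feedback rather than relying on indoor or quasi-static channel assumptions, which is what allows an RL-based scheme to follow the fast channel dynamics of a vehicle passing a roadside AP.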
| Original language | English |
| --- | --- |
| Article number | 8954653 |
| Pages (from-to) | 3114-3123 |
| Number of pages | 10 |
| Journal | IEEE Internet of Things Journal |
| Volume | 7 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - Apr 2020 |
Keywords
- Drive-thru Internet
- rate adaptation (RA)
- reinforcement learning (RL)
- vehicular networks
ASJC Scopus subject areas
- Signal Processing
- Information Systems
- Hardware and Architecture
- Computer Science Applications
- Computer Networks and Communications