Abstract
The escalating proliferation of user-generated content, such as videos and images, is coming to dominate network traffic. Caching content judiciously at edge stations within distributed autonomous networks mitigates backbone congestion and minimizes user request latency, since requests no longer need to be forwarded to the cloud. However, accurate caching in distributed autonomous networks requires close collaboration among edge servers, which remains a great challenge, especially when content is highly dynamic and the storage resources of edge stations are limited. To tackle this challenge, this paper proposes a contextual-bandit-based online caching algorithm that estimates the content hit-rate reward and adapts to the constantly changing stream of newly emerging content. We construct a content space, a base-station (BS) space, and a fine-grained space-search method to match content to the edge stations that should cache it. Furthermore, to enable collaborative caching and sharing among edges, we propose a federated autonomous multi-layer caching framework in which each server locally learns a caching model and a synchronous mechanism performs global updates, further improving hit rates. Finally, theoretical analysis and simulations demonstrate that the regret of our algorithm is sublinear and that it outperforms several state-of-the-art caching algorithms.
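The paper's own algorithm is not reproduced here; as a rough illustration of the two ideas the abstract names, contextual-bandit cache selection driven by a hit-rate reward and synchronous federated aggregation across edge servers, the sketch below uses a standard LinUCB-style bandit and simple parameter averaging. All names (`LinUCBCache`, `federated_sync`, `budget`, `alpha`), the feature model, and the aggregation rule are illustrative assumptions, not the authors' method.

```python
import numpy as np

class LinUCBCache:
    """Per-edge contextual bandit: each arm is a cacheable content item."""

    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = np.array([np.eye(dim) for _ in range(n_arms)])  # d x d statistics per arm
        self.b = np.zeros((n_arms, dim))

    def select(self, context, budget):
        """Pick the top-`budget` contents by UCB score for the given request context."""
        scores = []
        for a in range(len(self.A)):
            A_inv = np.linalg.inv(self.A[a])
            theta = A_inv @ self.b[a]                 # estimated reward model for arm a
            mean = theta @ context
            bonus = self.alpha * np.sqrt(context @ A_inv @ context)  # exploration bonus
            scores.append(mean + bonus)
        return np.argsort(scores)[-budget:]           # indices of contents to cache

    def update(self, arm, context, reward):
        """Reward is 1 if the cached content was hit, 0 otherwise."""
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context


def federated_sync(caches):
    """Synchronous global update: average the bandit statistics across edge servers."""
    A_avg = np.mean([c.A for c in caches], axis=0)
    b_avg = np.mean([c.b for c in caches], axis=0)
    for c in caches:
        c.A, c.b = A_avg.copy(), b_avg.copy()
```

In this reading, each edge runs its own bandit on locally observed request contexts, and the periodic `federated_sync` call plays the role of the abstract's synchronous global-updating mechanism; the actual framework's content/BS space search and multi-layer structure are more elaborate than this averaging step.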
Original language | English |
---|---|
Pages (from-to) | 8355-8369 |
Number of pages | 15 |
Journal | IEEE Transactions on Mobile Computing |
Volume | 23 |
Issue number | 8 |
DOIs | |
Publication status | Published - 4 Jan 2024 |
Keywords
- Content caching
- contextual bandit
- federated learning
- multi-layer caching
- online learning
ASJC Scopus subject areas
- Software
- Computer Networks and Communications
- Electrical and Electronic Engineering