Abstract
Recommender systems aim to accurately and proactively provide users with potentially interesting items (products, information, or services). Deep reinforcement learning has been successfully applied to recommender systems, but it still suffers heavily from data sparsity and cold-start problems in real-world tasks. In this work, we propose an effective way to address these issues by leveraging the pervasive social networks among users in the estimation of action-values (Q). Specifically, we develop a Social Attentive Deep Q-network (SADQN) that approximates the optimal action-value function based on the preferences of both individual users and their social neighbors, using a social attention layer to model the influence between them. Further, we propose an enhanced variant of SADQN, termed SADQN++, to model the complicated and diverse trade-offs between personal preferences and social influence across all involved users, making the agent more powerful and flexible in learning optimal policies. Experimental results on real-world datasets demonstrate that the proposed SADQNs remarkably outperform state-of-the-art deep reinforcement learning agents, at reasonable computational cost.
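The core idea in the abstract, attending over social neighbors' preference vectors and combining them with the user's own preferences before scoring actions, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual SADQN architecture: the function names, shapes, and the dot-product attention scoring are all assumptions made for illustration.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def social_attentive_q(user_pref, neighbor_prefs, w_out):
    """Hypothetical sketch of a social attention layer for Q-value
    estimation: weight each neighbor's preference vector by its
    attention score, fuse with the user's own preference, then map
    the fused representation to Q-values over candidate items.
    All names/shapes are illustrative, not the paper's formulation."""
    # Attention score per neighbor: similarity to the user's preference
    scores = neighbor_prefs @ user_pref            # shape: (n_neighbors,)
    alpha = softmax(scores)                        # attention weights, sum to 1
    social_pref = alpha @ neighbor_prefs           # attended social preference
    # Fuse personal and social preferences, then score items
    fused = np.concatenate([user_pref, social_pref])
    return fused @ w_out                           # Q-values, shape: (n_items,)

# Toy usage with random preference vectors (hypothetical dimensions)
rng = np.random.default_rng(0)
d, n_neighbors, n_items = 8, 5, 3
user = rng.normal(size=d)
neighbors = rng.normal(size=(n_neighbors, d))
w_out = rng.normal(size=(2 * d, n_items))
q_values = social_attentive_q(user, neighbors, w_out)
```

In a full DQN agent, `w_out` would be a learned layer and the whole pipeline would be trained end-to-end against a temporal-difference target; the sketch only shows how social attention can shape the Q-value input.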
Original language | English
---|---
Pages (from-to) | 2443-2457
Number of pages | 15
Journal | IEEE Transactions on Knowledge and Data Engineering
Volume | 34
Issue number | 5
DOIs |
Publication status | Published - 1 May 2022
Keywords
- DQN
- recommender systems
- reinforcement learning
- social networks
ASJC Scopus subject areas
- Information Systems
- Computer Science Applications
- Computational Theory and Mathematics