Social Attentive Deep Q-Networks for Recommender Systems

Yu Lei, Zhitao Wang, Wenjie Li, Hongbin Pei, Quanyu Dai

Research output: Journal article (academic research, peer-reviewed)

12 Citations (Scopus)


Recommender systems aim to accurately and proactively provide users with potentially interesting items (products, information or services). Deep reinforcement learning has been successfully applied to recommender systems, but such methods still suffer heavily from data sparsity and cold-start problems in real-world tasks. In this work, we propose an effective way to address these issues by leveraging the pervasive social networks among users in the estimation of action-values (Q). Specifically, we develop a Social Attentive Deep Q-network (SADQN) that approximates the optimal action-value function based on the preferences of both individual users and their social neighbors, using a social attention layer to model the influence between them. Further, we propose an enhanced variant of SADQN, termed SADQN++, to model the complicated and diverse trade-offs between personal preferences and social influence for all involved users, making the agent more powerful and flexible in learning optimal policies. Experimental results on real-world datasets demonstrate that the proposed SADQNs remarkably outperform state-of-the-art deep reinforcement learning agents, at reasonable computational cost.
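The core idea of the abstract — weighting social neighbors' preferences via attention and combining them with the user's own preference before estimating Q-values — can be sketched as follows. This is an illustrative sketch only, not the paper's actual architecture: the function and weight names (`social_attention_q`, `W_att`, `W_q`) and the bilinear attention score are assumptions for demonstration.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over attention scores.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def social_attention_q(user_emb, neighbor_embs, W_att, W_q, b_q):
    """Hypothetical social-attentive Q-value head.

    A bilinear attention score between the user and each social neighbor
    weights a "social preference" vector, which is concatenated with the
    user's own preference before a linear Q-value layer.
    """
    # Bilinear score for each neighbor i: n_i^T W_att u   -> shape (k,)
    scores = neighbor_embs @ W_att @ user_emb
    alpha = softmax(scores)                 # attention weights, sum to 1
    social_pref = alpha @ neighbor_embs     # (d,) attention-weighted neighbor mix
    h = np.concatenate([user_emb, social_pref])  # (2d,) joint preference
    return h @ W_q + b_q                    # (num_actions,) estimated Q-values

# Toy usage with random embeddings: 8-dim preferences, 5 neighbors, 3 candidate items.
rng = np.random.default_rng(0)
d, k, num_actions = 8, 5, 3
q = social_attention_q(
    rng.normal(size=d),                    # user preference embedding
    rng.normal(size=(k, d)),               # k social-neighbor embeddings
    rng.normal(size=(d, d)),               # attention bilinear weights
    rng.normal(size=(2 * d, num_actions)), # Q-value layer weights
    np.zeros(num_actions),                 # Q-value layer bias
)
print(q.shape)  # one Q-value per candidate action
```

In the paper's setting these Q-values would drive the recommendation policy, with the attention weights learned jointly with the Q-network rather than fixed as here.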

Original language: English
Pages (from-to): 2443-2457
Number of pages: 15
Journal: IEEE Transactions on Knowledge and Data Engineering
Issue number: 5
Publication status: Published - 1 May 2022


Keywords

  • DQN
  • recommender systems
  • reinforcement learning
  • social networks

ASJC Scopus subject areas

  • Information Systems
  • Computer Science Applications
  • Computational Theory and Mathematics


