A deep Bayesian tensor-based system for video recommendation

Wei Lu, Fu Lai Chung, Wenhao Jiang, Martin Ester, Wei Liu

Research output: Journal article (Academic research, peer-reviewed)

7 Citations (Scopus)

Abstract

With the abundance of multi-relational video information available online, recommender systems that can effectively exploit such data and suggest creatively interesting items will become increasingly important. Recent research shows that tensor models offer effective approaches to learning from complex multi-relational data and completing its missing elements. So far, most tensor-based user clustering models have focused on recommendation accuracy. Given the dynamic nature of online media, recommendation in this setting is more challenging: it is difficult to capture users' dynamic topic distributions from sparse data and to identify unseen items as recommendation candidates. Aiming to construct a recommender system that encourages more creativity, a deep Bayesian probabilistic tensor framework for tag and item recommendation is proposed. During score ranking, a metric called Bayesian surprise is incorporated to increase the creativity of the recommended candidates. The new algorithm, called Deep Canonical PARAFAC Factorization (DCPF), is evaluated on both synthetic and large-scale real-world problems. An empirical study of video recommendation demonstrates the superiority of the proposed model and indicates that it can better capture the latent patterns of interactions and generate interesting recommendations based on creative tag combinations.
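The canonical PARAFAC (CP) factorization underlying DCPF decomposes a multi-way interaction tensor (e.g. user × item × tag) into rank-one components. As a rough illustration only, the sketch below implements plain CP via alternating least squares in NumPy; it omits the deep Bayesian layers and the Bayesian-surprise ranking described in the abstract, and the function names (`cp_als`, `khatri_rao`, `unfold`) are this sketch's own, not the authors' implementation.

```python
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest (C order)."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product of A (I x R) and B (J x R)."""
    r = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, r)

def cp_als(X, rank, n_iter=200, seed=0):
    """Rank-R CP/PARAFAC factorization of a 3-way tensor by alternating least squares."""
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((dim, rank)) for dim in X.shape]
    for _ in range(n_iter):
        for mode in range(X.ndim):
            others = [f for m, f in enumerate(factors) if m != mode]
            # Khatri-Rao product of the remaining factors, in mode order,
            # matches the column ordering of the mode-n unfolding above.
            kr = others[0]
            for f in others[1:]:
                kr = khatri_rao(kr, f)
            # Gram matrix (kr^T kr) computed cheaply as an elementwise product.
            gram = np.ones((rank, rank))
            for f in others:
                gram *= f.T @ f
            # Least-squares update for this mode's factor matrix.
            factors[mode] = unfold(X, mode) @ kr @ np.linalg.pinv(gram)
    return factors

# Example: recover the factors of a synthetic rank-2 user x item x tag tensor.
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((d, 2)) for d in (6, 5, 4))
X = np.einsum('ir,jr,kr->ijk', A, B, C)
Ah, Bh, Ch = cp_als(X, rank=2)
X_hat = np.einsum('ir,jr,kr->ijk', Ah, Bh, Ch)
```

In a recommendation setting, the learned factor rows serve as latent embeddings of users, items, and tags, and missing tensor entries are scored from their inner products; DCPF additionally places deep Bayesian priors over these factors.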

Original language: English
Article number: 7
Journal: ACM Transactions on Information Systems
Volume: 37
Issue number: 1
DOIs
Publication status: Published - 1 Jan 2019

Keywords

  • Bayesian methods
  • Computational creativity
  • Tensor decomposition

ASJC Scopus subject areas

  • Information Systems
  • Business, Management and Accounting (all)
  • Computer Science Applications
