Lifelong topic modeling with knowledge-enhanced adversarial network

Xuewen Zhang, Yanghui Rao, Qing Li

Research output: Journal article › Academic research › peer-review

2 Citations (Scopus)

Abstract

Lifelong topic modeling has attracted much attention in natural language processing (NLP), since it can accumulate knowledge learned from past tasks and apply it to future ones. However, existing lifelong topic models often require complex derivations or utilize only part of the contextual information. In this study, we propose a knowledge-enhanced adversarial neural topic model (KATM) and extend it to LKATM for lifelong topic modeling. KATM employs a knowledge extractor to encourage the generator to learn interpretable document representations and to retrieve knowledge from the generated documents. LKATM incorporates knowledge from the previously trained KATM into the current model, enabling it to learn from prior models without catastrophic forgetting. Experiments on four benchmark text streams validate the effectiveness of KATM and LKATM in topic discovery and document classification.
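
The abstract describes an adversarial neural topic model trained alongside a knowledge component, with distillation from a previously trained model to avoid catastrophic forgetting. Below is a minimal, hypothetical PyTorch sketch of that general setup, not the authors' released code: the network sizes, the `alpha` weight, and the specific losses are illustrative assumptions.

```python
# Hypothetical sketch of an adversarial neural topic model with a
# knowledge-distillation term for lifelong training. Not the paper's
# implementation; all names, sizes, and weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, TOPICS = 2000, 50  # assumed vocabulary and topic counts

class Generator(nn.Module):
    """Encodes a bag-of-words vector into a topic mixture and reconstructs it."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(VOCAB, 256), nn.ReLU(),
                                     nn.Linear(256, TOPICS))
        self.topic_word = nn.Linear(TOPICS, VOCAB, bias=False)  # topic-word matrix

    def forward(self, bow):
        theta = F.softmax(self.encoder(bow), dim=-1)       # document-topic mixture
        recon = F.softmax(self.topic_word(theta), dim=-1)  # reconstructed word dist.
        return theta, recon

class Discriminator(nn.Module):
    """Scores whether a word distribution comes from real data or the generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(VOCAB, 256), nn.LeakyReLU(0.2),
                                 nn.Linear(256, 1))

    def forward(self, x):
        return self.net(x)

def distillation_loss(current_gen, previous_gen):
    """KL divergence between the previous and current topic-word distributions,
    used to retain knowledge from the earlier task (lifelong setting)."""
    cur = F.log_softmax(current_gen.topic_word.weight.t(), dim=-1)  # (TOPICS, VOCAB)
    with torch.no_grad():
        prev = F.softmax(previous_gen.topic_word.weight.t(), dim=-1)
    return F.kl_div(cur, prev, reduction="batchmean")

# One illustrative training step on a batch of normalized bag-of-words vectors.
gen, disc = Generator(), Discriminator()
prev_gen = Generator()            # stands in for the model trained on the previous task
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
alpha = 0.5                       # weight of the distillation term (assumed)

real_bow = F.normalize(torch.rand(32, VOCAB), p=1, dim=-1)

# Discriminator step: real documents vs. generator reconstructions.
_, fake = gen(real_bow)
d_loss = (F.binary_cross_entropy_with_logits(disc(real_bow), torch.ones(32, 1)) +
          F.binary_cross_entropy_with_logits(disc(fake.detach()), torch.zeros(32, 1)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to the previous model.
_, fake = gen(real_bow)
g_loss = (F.binary_cross_entropy_with_logits(disc(fake), torch.ones(32, 1)) +
          alpha * distillation_loss(gen, prev_gen))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In this sketch, the distillation term plays the role of carrying knowledge from the earlier model into the current one; the paper's actual knowledge extractor and loss formulation may differ.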

Original language: English
Pages (from-to): 219-238
Number of pages: 20
Journal: World Wide Web
Volume: 25
Issue number: 1
DOIs
Publication status: Published - Jan 2022

Keywords

  • Knowledge distillation
  • Lifelong learning
  • Neural topic modeling

ASJC Scopus subject areas

  • Software
  • Hardware and Architecture
  • Computer Networks and Communications
