Abstract
Lifelong topic modeling has attracted much attention in natural language processing (NLP), since it can accumulate knowledge learned from past tasks for use in future ones. However, existing lifelong topic models often require complex derivations or utilize only part of the context information. In this study, we propose a knowledge-enhanced adversarial neural topic model (KATM) and extend it to LKATM for lifelong topic modeling. KATM employs a knowledge extractor to encourage the generator to learn interpretable document representations and to retrieve knowledge from the generated documents. LKATM incorporates knowledge from the previously trained KATM into the current model, so it learns from prior models without catastrophic forgetting. Experiments on four benchmark text streams validate the effectiveness of KATM and LKATM in topic discovery and document classification.
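The abstract gives no implementation details, but the core lifelong idea, penalizing the current model for drifting away from topics learned by the previously trained model, can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the function names (`kl_divergence`, `lifelong_loss`), the topic-word matrix shapes, and the weighting `lam` are hypothetical and do not come from the paper.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions (with smoothing)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def lifelong_loss(adv_loss, student_topics, teacher_topics, lam=0.5):
    """Hypothetical combined objective: adversarial loss plus a
    distillation penalty that keeps the current (student) topic-word
    distributions close to those of the previously trained (teacher) model.

    student_topics / teacher_topics: (K, V) arrays, one row per topic.
    """
    distill = np.mean([kl_divergence(t, s)
                       for t, s in zip(teacher_topics, student_topics)])
    return adv_loss + lam * distill
```

When the student's topics match the teacher's, the distillation term vanishes and only the adversarial loss remains; diverging topics raise the total loss, which is one simple way to discourage catastrophic forgetting.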
| Original language | English |
| --- | --- |
| Pages (from-to) | 219-238 |
| Number of pages | 20 |
| Journal | World Wide Web |
| Volume | 25 |
| Issue number | 1 |
| Publication status | Published - Jan 2022 |
Keywords
- Knowledge distillation
- Lifelong learning
- Neural topic modeling
ASJC Scopus subject areas
- Software
- Hardware and Architecture
- Computer Networks and Communications