Understanding Memory Modules on Learning Simple Algorithms

Kexin Wang, Yu Zhou, Shaonan Wang, Jiajun Zhang, Chengqing Zong

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

Abstract

Recent work has shown that memory modules are crucial for the generalization ability of neural networks on learning simple algorithms. However, we still have little understanding of the working mechanism of memory modules. To alleviate this problem, we apply a two-step analysis pipeline: first inferring a hypothesis about what strategy the model has learned from visualizations, and then verifying it with a newly proposed qualitative analysis method based on dimension reduction.
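As an illustration only (not the authors' code), such a dimension-reduction analysis could look roughly like the sketch below. It assumes that the memory state has been recorded at every time step and uses PCA as the reduction method; all function and variable names here are hypothetical.

    import numpy as np
    from sklearn.decomposition import PCA

    def project_memory_states(memory_states, n_components=2):
        # Reduce a (time_steps x memory_dim) trace of memory states to a
        # low-dimensional space so that how the controller writes to and
        # reads from memory can be inspected visually.
        pca = PCA(n_components=n_components)
        return pca.fit_transform(np.asarray(memory_states))

    # Hypothetical usage: 'memory_trace' would be collected by hooking the
    # model's memory after each input token, and 'token_categories' would mark
    # the category of each token (e.g. digit, operator, delimiter) that the
    # hypothesis is about.
    # projected = project_memory_states(memory_trace)
    # plt.scatter(projected[:, 0], projected[:, 1], c=token_categories)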

Using this method, we analyze two popular memory-augmented neural networks, the neural Turing machine and the stack-augmented neural network, on two simple algorithmic tasks: reversing a random sequence and evaluating arithmetic expressions. Results show that both models learn to generalize on the former task, while only the stack-augmented model does so on the latter. We show that the models learn different strategies, in which specific categories of input are monitored and different policies for modifying the memory are applied accordingly.
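For concreteness, a minimal sketch of how input/target pairs for the two tasks could be generated is given below; the sequence length, vocabulary, and operator set are assumptions, not the settings reported in the paper.

    import random

    def make_reverse_example(length=10, vocab=tuple(range(1, 10))):
        # Reversal task: the target is the input sequence in reverse order.
        seq = [random.choice(vocab) for _ in range(length)]
        return seq, list(reversed(seq))

    def make_arithmetic_example(n_terms=3, max_operand=9):
        # Arithmetic task: the target is the value of a small expression.
        terms = [str(random.randint(0, max_operand)) for _ in range(n_terms)]
        ops = [random.choice('+-*') for _ in range(n_terms - 1)]
        expr = ''.join(t + o for t, o in zip(terms, ops)) + terms[-1]
        return expr, eval(expr)  # safe here: expr contains only digits and + - *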
Original language: English
Title of host publication: Proceedings of the IJCAI 2019 Workshop on Explainable Artificial Intelligence
Editors: Tim Miller, Rosina Weber, Daniele Magazzeni
Publication status: Published - Aug 2019
Externally published: Yes
Event: IJCAI 2019 Workshop on Explainable Artificial Intelligence (XAI) - Macao
Duration: 11 Aug 2019 → …
https://sites.google.com/view/xai2019/home

Workshop

Workshop: IJCAI 2019 Workshop on Explainable Artificial Intelligence (XAI)
Country/Territory: Macao
Period: 11/08/19 → …
Internet address: https://sites.google.com/view/xai2019/home
