TY - GEN
T1 - Toward abstraction from multi-modal data
T2 - 2017 International Joint Conference on Neural Networks, IJCNN 2017
AU - Zhong, Junpei
AU - Cangelosi, Angelo
AU - Ogata, Tetsuya
N1 - Funding Information:
ACKNOWLEDGMENT The research was supported by Waseda SGU Program, the EU project POETICON++ under grant agreement 288382 and the UK EPSRC project BABEL. JZ would like to thank FH for the working space provided when the paper was being drafted.
Publisher Copyright:
© 2017 IEEE.
PY - 2017/6/30
Y1 - 2017/6/30
N2 - Abstraction tasks are challenging for multi-modal sequences because they require a deeper semantic understanding and novel text generation from the data. Although recurrent neural networks (RNNs) can be used to model the context of time-sequences, in most cases the long-term dependencies of multi-modal data cause the gradients of back-propagation through time to vanish in the time domain. Recently, inspired by the Multiple Time-scale Recurrent Neural Network (MTRNN) [1], an extension of the Gated Recurrent Unit (GRU), called the Multiple Time-scale Gated Recurrent Unit (MTGRU), has been proposed [2] to learn long-term dependencies in natural language processing. In particular, it is also able to accomplish the abstraction task for paragraphs, provided that the time constants are well defined. In this paper, we compare the MTRNN and the MTGRU in terms of their learning performance as well as their abstraction representations at the higher level (with slower neural activation). This was done by conducting two studies based on a smaller dataset (two-dimensional time sequences from non-linear functions) and a relatively large dataset (43-dimensional time sequences from iCub manipulation tasks with multi-modal data). We conclude that gated recurrent mechanisms may be necessary for learning long-term dependencies in high-dimensional multi-modal datasets (e.g. learning of robot manipulation), even when natural language commands are not involved. For smaller learning tasks with simple time-sequences, however, generic recurrent models such as the MTRNN are sufficient to accomplish the abstraction task.
AB - Abstraction tasks are challenging for multi-modal sequences because they require a deeper semantic understanding and novel text generation from the data. Although recurrent neural networks (RNNs) can be used to model the context of time-sequences, in most cases the long-term dependencies of multi-modal data cause the gradients of back-propagation through time to vanish in the time domain. Recently, inspired by the Multiple Time-scale Recurrent Neural Network (MTRNN) [1], an extension of the Gated Recurrent Unit (GRU), called the Multiple Time-scale Gated Recurrent Unit (MTGRU), has been proposed [2] to learn long-term dependencies in natural language processing. In particular, it is also able to accomplish the abstraction task for paragraphs, provided that the time constants are well defined. In this paper, we compare the MTRNN and the MTGRU in terms of their learning performance as well as their abstraction representations at the higher level (with slower neural activation). This was done by conducting two studies based on a smaller dataset (two-dimensional time sequences from non-linear functions) and a relatively large dataset (43-dimensional time sequences from iCub manipulation tasks with multi-modal data). We conclude that gated recurrent mechanisms may be necessary for learning long-term dependencies in high-dimensional multi-modal datasets (e.g. learning of robot manipulation), even when natural language commands are not involved. For smaller learning tasks with simple time-sequences, however, generic recurrent models such as the MTRNN are sufficient to accomplish the abstraction task.
UR - http://www.scopus.com/inward/record.url?scp=85030972983&partnerID=8YFLogxK
U2 - 10.1109/IJCNN.2017.7966312
DO - 10.1109/IJCNN.2017.7966312
M3 - Conference article published in proceeding or book
AN - SCOPUS:85030972983
T3 - Proceedings of the International Joint Conference on Neural Networks
SP - 3625
EP - 3632
BT - 2017 International Joint Conference on Neural Networks, IJCNN 2017 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 14 May 2017 through 19 May 2017
ER -