TY - GEN
T1 - Gate-Layer Autoencoders with Application to Incomplete EEG Signal Recovery
AU - El-Fiqi, Heba
AU - Kasmarik, Kathryn
AU - Bezerianos, Anastasios
AU - Tan, Kay Chen
AU - Abbass, Hussein A.
N1 - Funding Information:
This project is funded by Australian Research Council Discovery Grant DP160102037.
Publisher Copyright:
© 2019 IEEE.
PY - 2019/7
Y1 - 2019/7
N2 - Autoencoders (AE) have been used successfully as unsupervised learners for inferring latent information, learning hidden features and reducing the dimensionality of the data. In this paper, we propose a new AE architecture: the Gate-Layer AE (GLAE). The novelty of GLAE lies in its ability to encourage learning of the relationships among different input variables, which affords it an inherent ability to recover missing variables from the available ones and to act as a concurrent multi-function approximator. GLAE uses a network architecture that associates each input with a binary gate acting as a switch that turns the flow to each input unit on or off, while synchronising its action with the data flow to the network. We test GLAE with different coding sizes and compare its performance against the Classic AE, Denoising AE and Variational AE. The evaluation uses electroencephalography (EEG) data with the aim of reconstructing the EEG signal when some data are missing. The results demonstrate GLAE's superior performance in reconstructing EEG signals with up to 25% missing data in an input stream.
AB - Autoencoders (AE) have been used successfully as unsupervised learners for inferring latent information, learning hidden features and reducing the dimensionality of the data. In this paper, we propose a new AE architecture: the Gate-Layer AE (GLAE). The novelty of GLAE lies in its ability to encourage learning of the relationships among different input variables, which affords it an inherent ability to recover missing variables from the available ones and to act as a concurrent multi-function approximator. GLAE uses a network architecture that associates each input with a binary gate acting as a switch that turns the flow to each input unit on or off, while synchronising its action with the data flow to the network. We test GLAE with different coding sizes and compare its performance against the Classic AE, Denoising AE and Variational AE. The evaluation uses electroencephalography (EEG) data with the aim of reconstructing the EEG signal when some data are missing. The results demonstrate GLAE's superior performance in reconstructing EEG signals with up to 25% missing data in an input stream.
UR - http://www.scopus.com/inward/record.url?scp=85073259870&partnerID=8YFLogxK
U2 - 10.1109/IJCNN.2019.8852101
DO - 10.1109/IJCNN.2019.8852101
M3 - Conference article published in proceedings or book
AN - SCOPUS:85073259870
T3 - Proceedings of the International Joint Conference on Neural Networks
SP - 1
EP - 8
BT - 2019 International Joint Conference on Neural Networks, IJCNN 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 International Joint Conference on Neural Networks, IJCNN 2019
Y2 - 14 July 2019 through 19 July 2019
ER -