TY - GEN
T1 - Identifying Human Out-of-the-Loop in Cruising Flights Using EEG Spectral Features with Deep Learning
AU - Yiu, Cho Yin
AU - Ng, Kam K.H.
AU - Li, Qinbiao
AU - Yuan, Xin
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
PY - 2025/5
Y1 - 2025/5
N2 - Human out-of-the-loop (OOTL) has been a significant problem in flight operations, contributing to two notable accidents, Asiana Flight 214 and Air France Flight 447. It typically arises from the monotonous nature of the automation-monitoring task: under such monotony, pilots may experience reduced vigilance and fail to monitor the automation effectively. Pilots who fall out of the flight control loop may lose awareness of the situation and be unable to cope when required to take over from the automation in an emergency, so improving human-automation interaction is essential for timely corrective action. This paper therefore proposes a neurophysiological, image-driven deep learning approach to identify human OOTL episodes during cruising flight operations, in which automation leads flight control while pilots are far less involved. We collected EEG data from 24 cadet pilots during three cruising flights from Hong Kong International Airport to Fukuoka Airport on an Airbus A320 flight simulator under different levels of automation: a fully automated flight baseline (FAF), partially automated flight (PAF), and manual flight (MF). We then preprocessed the data and transformed it into spectral power for each frequency band and two-second epoch. The spectral data were plotted as topographical maps, yielding a dataset of 112,243 epochs for image-based mental state classification, with each epoch represented by five 128 × 128 RGB images showing the pilots' band power during that two-second window. We designed a tailor-made convolutional neural network to classify the mental states. The proposed model achieves a test accuracy of 99.30%, outperforming the baseline models by at least 33.74%. It can be applied to identify potential human OOTL episodes in advance so that appropriate countermeasures can be taken.
AB - Human out-of-the-loop (OOTL) has been a significant problem in flight operations, contributing to two notable accidents, Asiana Flight 214 and Air France Flight 447. It typically arises from the monotonous nature of the automation-monitoring task: under such monotony, pilots may experience reduced vigilance and fail to monitor the automation effectively. Pilots who fall out of the flight control loop may lose awareness of the situation and be unable to cope when required to take over from the automation in an emergency, so improving human-automation interaction is essential for timely corrective action. This paper therefore proposes a neurophysiological, image-driven deep learning approach to identify human OOTL episodes during cruising flight operations, in which automation leads flight control while pilots are far less involved. We collected EEG data from 24 cadet pilots during three cruising flights from Hong Kong International Airport to Fukuoka Airport on an Airbus A320 flight simulator under different levels of automation: a fully automated flight baseline (FAF), partially automated flight (PAF), and manual flight (MF). We then preprocessed the data and transformed it into spectral power for each frequency band and two-second epoch. The spectral data were plotted as topographical maps, yielding a dataset of 112,243 epochs for image-based mental state classification, with each epoch represented by five 128 × 128 RGB images showing the pilots' band power during that two-second window. We designed a tailor-made convolutional neural network to classify the mental states. The proposed model achieves a test accuracy of 99.30%, outperforming the baseline models by at least 33.74%. It can be applied to identify potential human OOTL episodes in advance so that appropriate countermeasures can be taken.
KW - Human out-of-the-loop
KW - human-automation interaction
KW - mental state classification
KW - mind wandering
KW - monotony
UR - http://www.scopus.com/inward/record.url?scp=105008653170&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-94153-5_12
DO - 10.1007/978-3-031-94153-5_12
M3 - Conference article published in proceeding or book
AN - SCOPUS:105008653170
SN - 9783031941528
VL - 2523
T3 - Communications in Computer and Information Science
SP - 123
EP - 133
BT - HCI International 2025 Posters - 27th International Conference on Human-Computer Interaction, HCII 2025, Proceedings
A2 - Stephanidis, Constantine
A2 - Antona, Margherita
A2 - Ntoa, Stavroula
A2 - Salvendy, Gavriel
PB - Springer Science and Business Media Deutschland GmbH
T2 - 27th International Conference on Human-Computer Interaction, HCII 2025
Y2 - 22 June 2025 through 27 June 2025
ER -
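
For illustration, the pipeline described in the abstract (five per-band 128 × 128 topographic EEG power maps per two-second epoch, classified into FAF/PAF/MF by a convolutional neural network) could be sketched as follows. This is a minimal, hypothetical Python/TensorFlow example; the input layout (five RGB maps stacked along the channel axis), the assumed band set, layer sizes, and training settings are illustrative assumptions and do not reproduce the authors' published architecture.

# Hypothetical sketch: classify pilot mental states (FAF / PAF / MF) from
# per-epoch topographic band-power images, as described in the abstract.
# Shapes and hyperparameters are assumptions, not the authors' design.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_BANDS = 5     # e.g. delta, theta, alpha, beta, gamma (assumed band set)
IMG_SIZE = 128    # one 128 x 128 RGB topographic map per band
NUM_CLASSES = 3   # FAF, PAF, MF

def build_cnn():
    # Five RGB maps stacked along the channel axis -> (128, 128, 15)
    inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, NUM_BANDS * 3))
    x = layers.Conv2D(32, 3, activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(128, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_cnn()
    # Random placeholder data standing in for the 112,243-epoch image dataset.
    x_dummy = np.random.rand(8, IMG_SIZE, IMG_SIZE, NUM_BANDS * 3).astype("float32")
    y_dummy = np.random.randint(0, NUM_CLASSES, size=(8,))
    model.fit(x_dummy, y_dummy, epochs=1, batch_size=4)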