Predictive Models of Gaze Positions via Components Derived from EEG by SOBI-DANS

Akaysha C. Tang (Corresponding Author), Rui Sun, Cynthia Chan, Janet Hsiao

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

Abstract

Ocular artifacts in EEG have long been treated as noise in the analysis of brain signals in both basic and applied research. Many methods, including blind source separation (BSS), have been used to extract such artifacts so that they can be removed. Recently we took a different approach: instead of treating eye-movement-related EEG signals as noise, we proposed and validated the idea that, when properly separated from brain signals and other sources of noise, the EEG signals associated with horizontal and vertical eye movements derived by our hybrid SOBI-DANS method encode the direction and distance of eye movements. This work raised the exciting possibility that such components could be used to build predictive models of gaze position without EOG or an eye-tracker, thus bypassing the problem of co-registering neural signals from EEG with a separate eye-tracking system. As an initial step toward building such a predictive model, we sought to determine the minimum amount of data needed, using a highly motivated individual as the research participant. We found that even for this participant, and across different performance measures, three to four calibration eye-movement trials were needed before prediction performance on subsequent eye-movement trials reached asymptote. This result suggests that a model built from earlier trials to predict eye movements in later trials from EEG requires at least double the typical two repetitions of saccades used in the previously employed calibration task. This work complements a recently published parallel study in which we explored the generalization of such a predictive model along a different dimension, to eye movements made while tracking a horizontally moving target (the smooth pursuit task). Together, these works serve as initial demonstrations of a novel approach for obtaining readily co-registered eye-movement and neural signals from a single EEG recording. Future work can extend this approach to prediction in the vertical direction, to establishing norms for different populations, and to studies of natural reading.
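
To make the modeling idea concrete, the following is a minimal, hypothetical sketch of how a gaze-position predictor could be calibrated from a small number of trials, assuming the horizontal eye-movement component has already been extracted from the EEG (e.g., by SOBI-DANS) and summarized as one amplitude value per saccade. The data here are synthetic, and the variable names and the simple linear mapping are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: calibrate a linear gaze-position predictor from a few
# eye-movement trials, then predict gaze on held-out trials.
# Assumes ocular components were already extracted from EEG (e.g., SOBI-DANS);
# here the per-saccade component amplitudes are simulated synthetically.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Known horizontal target eccentricities (degrees), four repetitions each.
gaze_target_deg = np.tile([-10.0, -5.0, 5.0, 10.0], 4)

# Simulated per-saccade amplitude of the horizontal eye-movement component,
# assumed roughly proportional to saccade size plus noise.
component_amplitude = 0.8 * gaze_target_deg + rng.normal(0.0, 1.0, gaze_target_deg.size)

# Use the first few repetitions as calibration trials (e.g., three repetitions
# of the four targets), consistent with needing three to four calibration trials.
n_calibration = 12
X_train = component_amplitude[:n_calibration, None]
y_train = gaze_target_deg[:n_calibration]

model = LinearRegression().fit(X_train, y_train)

# Predict gaze position on the remaining, held-out saccades.
X_test = component_amplitude[n_calibration:, None]
predicted_gaze = model.predict(X_test)
print("Predicted gaze (deg):", np.round(predicted_gaze, 2))
print("True gaze (deg):     ", gaze_target_deg[n_calibration:])
```

In practice, one could evaluate how prediction error changes as the number of calibration trials grows, which is the kind of analysis the abstract describes when it reports performance reaching asymptote after three to four calibration trials.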

Original language: English
Title of host publication: Advances in Information and Communication - Proceedings of the 2024 Future of Information and Communication Conference (FICC)
Editors: Kohei Arai
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 687-698
Number of pages: 12
ISBN (Print): 9783031540523
DOIs
Publication status: Published - 17 Mar 2024
Event: Future of Information and Communication Conference, FICC 2024 - Berlin, Germany
Duration: 4 Apr 2024 - 5 Apr 2024

Publication series

Name: Lecture Notes in Networks and Systems
Volume: 921 LNNS
ISSN (Print): 2367-3370
ISSN (Electronic): 2367-3389

Conference

Conference: Future of Information and Communication Conference, FICC 2024
Country/Territory: Germany
City: Berlin
Period: 4/04/24 - 5/04/24

Keywords

  • Blind source separation
  • DANS
  • Event-Related Potentials (ERPs)
  • Eye-tracking
  • Natural viewing
  • Ocular artifact
  • Saccadic eye movement
  • SOBI

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Signal Processing
  • Computer Networks and Communications
