Multimodal human attention detection for reading

Jiajia Li, Grace Ngai, Hong Va Leong, Stephen Chan

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-reviewed

3 Citations (Scopus)


Affective computing in human-computer interaction research enables computers to understand human affects or emotions in order to provide better service. In this paper, we investigate the detection of human attention, which is useful in intelligent e-learning applications. Our principle is to use only ubiquitous hardware available in most computer systems, namely the webcam and the mouse. Information from multiple modalities is fused together for effective human attention detection. We invited human subjects to read articles while being subjected to different kinds of distraction, inducing different attention levels. Machine-learning techniques are applied to identify useful features for recognizing human attention level. Our results indicate improved performance with multimodal inputs, suggesting an interesting direction for affective computing.
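To make the fusion idea concrete, below is a minimal, purely illustrative sketch of feature-level (early) fusion: per-modality feature vectors are concatenated and fed to a simple classifier. The feature names, values, and the nearest-centroid classifier are assumptions for illustration only; the paper's actual facial features, mouse-dynamics features, and learning algorithm are not specified here.

```python
# Hypothetical sketch of early fusion for attention detection.
# All feature names and numbers below are made up for illustration.

def fuse(facial_features, mouse_features):
    """Early fusion: concatenate the per-modality feature vectors."""
    return facial_features + mouse_features

def fit_centroids(samples, labels):
    """Compute one mean vector (centroid) per attention label."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(centroids, x):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda y: dist(centroids[y]))

# Toy data: [eye openness, gaze stability] + [mouse speed, idle ratio]
attentive = fuse([0.9, 0.8], [0.3, 0.1])
distracted = fuse([0.4, 0.2], [0.9, 0.7])
model = fit_centroids([attentive, distracted], ["attentive", "distracted"])
print(predict(model, fuse([0.85, 0.75], [0.35, 0.15])))  # → attentive
```

Because the fused vector carries signals from both modalities, a distraction that barely moves the facial features (e.g., eyes still on screen) can still shift the mouse-dynamics half of the vector, which is one intuition for why the paper reports improved performance with multimodal inputs.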
Original language: English
Title of host publication: 2016 Symposium on Applied Computing, SAC 2016
Publisher: Association for Computing Machinery
Number of pages: 6
ISBN (Electronic): 9781450337397
Publication status: Published - 4 Apr 2016
Event: 31st Annual ACM Symposium on Applied Computing, SAC 2016 - Pisa, Italy
Duration: 4 Apr 2016 - 8 Apr 2016


Conference: 31st Annual ACM Symposium on Applied Computing, SAC 2016


Keywords

  • Facial features
  • Human attention level
  • Mouse dynamics
  • Multimodal interaction

ASJC Scopus subject areas

  • Software


