Data-driven Estimation of Driver Attention using Calibration-free Eye Gaze and Scene Features

Zhongxu Hu, Chen Lv, Peng Hang, Chao Huang, Yang Xing

Research output: Journal article publication › Journal article › Academic research › peer-review

58 Citations (Scopus)

Abstract

Driver attention estimation is one of the key technologies for intelligent vehicles. Existing methods focus only on the scene image, or only on the driver's gaze or head pose. The purpose of this paper is to propose a more reasonable and feasible method based on a dual-view scene with calibration-free gaze direction. Motivated by human visual mechanisms, low-level features (a static visual saliency map and dynamic optical flow information) are extracted as input feature maps and combined with high-level semantic descriptions and a gaze probability map transformed from the gaze direction. A multi-resolution neural network is proposed to handle the calibration-free features. The proposed method is verified on a virtual reality experimental platform, on which more than 550,000 samples with accurate ground truth were collected. Experiments show that the proposed method is feasible and outperforms state-of-the-art methods on multiple widely used metrics. The study also discusses the effects of different landscapes, times of day, and weather conditions on performance.
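The abstract does not describe the implementation, but the overall idea of fusing a saliency map, optical flow, and a gaze probability map in a multi-resolution network can be illustrated with a minimal sketch. The module names (`gaze_probability_map`, `MultiResolutionAttentionNet`), channel counts, and layer choices below are assumptions for illustration only, not the authors' actual architecture.

```python
# Minimal sketch (PyTorch) of the kind of input fusion described in the abstract.
# All module names, channel counts, and layer choices are assumptions; the
# paper's actual multi-resolution network is not reproduced here.
import torch
import torch.nn as nn
import torch.nn.functional as F


def gaze_probability_map(gaze_xy, size=(64, 64), sigma=0.1):
    """Turn a normalized 2-D gaze point into a Gaussian probability map.

    gaze_xy: tensor of shape (B, 2) with values in [0, 1] (assumed encoding).
    """
    h, w = size
    ys = torch.linspace(0, 1, h).view(1, h, 1)
    xs = torch.linspace(0, 1, w).view(1, 1, w)
    gx = gaze_xy[:, 0].view(-1, 1, 1)
    gy = gaze_xy[:, 1].view(-1, 1, 1)
    d2 = (xs - gx) ** 2 + (ys - gy) ** 2
    prob = torch.exp(-d2 / (2 * sigma ** 2))
    return prob.unsqueeze(1)  # (B, 1, H, W)


class MultiResolutionAttentionNet(nn.Module):
    """Hypothetical multi-resolution fusion of saliency, flow and gaze maps."""

    def __init__(self, out_size=(64, 64)):
        super().__init__()
        self.out_size = out_size
        # One small encoder per resolution; input channels:
        # saliency (1) + optical flow (2) + gaze probability (1) = 4.
        self.enc_fine = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.enc_coarse = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, 1, 1)  # fuse both streams into one map

    def forward(self, saliency, flow, gaze_map):
        x = torch.cat([saliency, flow, gaze_map], dim=1)   # (B, 4, H, W)
        x_coarse = F.avg_pool2d(x, 2)                       # half resolution
        f_fine = self.enc_fine(x)                           # (B, 16, H/2, W/2)
        f_coarse = self.enc_coarse(x_coarse)                # (B, 16, H/2, W/2)
        fused = torch.cat([f_fine, f_coarse], dim=1)
        att = self.head(fused)
        att = F.interpolate(att, size=self.out_size,
                            mode="bilinear", align_corners=False)
        return torch.sigmoid(att)                           # attention map in [0, 1]


if __name__ == "__main__":
    B, H, W = 2, 64, 64
    saliency = torch.rand(B, 1, H, W)   # static visual saliency map
    flow = torch.rand(B, 2, H, W)       # dense optical flow (dx, dy)
    gaze = gaze_probability_map(torch.rand(B, 2), size=(H, W))
    model = MultiResolutionAttentionNet(out_size=(H, W))
    print(model(saliency, flow, gaze).shape)  # torch.Size([2, 1, 64, 64])
```

The sketch only shows how heterogeneous, calibration-free inputs can be stacked as channels and processed at two resolutions; how the high-level semantic descriptions are injected and how the attention ground truth is supervised are specific to the paper.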

Original language: English
Pages (from-to): 1800-1808
Journal: IEEE Transactions on Industrial Electronics
Volume: 69
Issue number: 2
DOIs
Publication status: Published - Feb 2022
Externally published: Yes

Keywords

  • data-driven estimation
  • Driver attention
  • Estimation
  • Feature extraction
  • gaze direction
  • Gaze tracking
  • Glass
  • Optical imaging
  • saliency map
  • Task analysis
  • Vehicles

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Electrical and Electronic Engineering
