Abstract
Driver attention estimation is a key technology for intelligent vehicles. Existing methods focus only on the scene image or on the driver's gaze or head pose. This paper proposes a more reasonable and feasible method based on a dual-view scene with a calibration-free gaze direction. Following human visual mechanisms, low-level features, a static visual saliency map, and dynamic optical flow information are extracted as input feature maps and combined with high-level semantic descriptions and a gaze probability map transformed from the gaze direction. A multi-resolution neural network is proposed to handle the calibration-free features. The proposed method is verified on a virtual reality experimental platform, on which more than 550,000 samples were collected with a more accurate ground truth. Experiments show that the proposed method is feasible and outperforms state-of-the-art methods on multiple widely used metrics. The study also discusses the effects of different landscapes, times of day, and weather conditions on performance.
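To make the fusion described in the abstract concrete, the sketch below illustrates one plausible reading of it: a calibration-free gaze direction (yaw/pitch) is turned into a 2D Gaussian gaze probability map over the scene image and stacked with saliency and optical-flow channels before a small two-branch (multi-resolution) convolutional network predicts an attention heat map. This is not the authors' implementation; all function names, image sizes, field-of-view values, and the Gaussian spread are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def gaze_probability_map(yaw, pitch, h=96, w=160, fov_x=120.0, fov_y=60.0, sigma=8.0):
    """Project a calibration-free gaze direction (degrees) onto the scene image
    and spread it as a 2D Gaussian probability map (illustrative only)."""
    cx = (yaw / fov_x + 0.5) * (w - 1)      # horizontal pixel of the gaze point
    cy = (0.5 - pitch / fov_y) * (h - 1)    # vertical pixel of the gaze point
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    return g / (g.sum() + 1e-8)             # normalise to a probability map

class MultiResolutionFusion(nn.Module):
    """Toy two-branch network: a fine branch on the full-resolution feature stack
    and a coarse branch on a down-sampled copy, fused into one attention map."""
    def __init__(self, in_ch=4):             # saliency + flow(x, y) + gaze map
        super().__init__()
        self.fine = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.coarse = nn.Sequential(
            nn.AvgPool2d(2),
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False))
        self.head = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        return torch.sigmoid(self.head(torch.cat([self.fine(x), self.coarse(x)], dim=1)))

# Usage: stack saliency, optical flow and the gaze map into one input tensor.
h, w = 96, 160
saliency = np.random.rand(h, w).astype(np.float32)   # placeholder saliency map
flow = np.random.rand(2, h, w).astype(np.float32)    # placeholder optical flow (x, y)
gaze = gaze_probability_map(yaw=10.0, pitch=-5.0, h=h, w=w).astype(np.float32)
inp = torch.from_numpy(np.concatenate([saliency[None], flow, gaze[None]]))[None]
heatmap = MultiResolutionFusion()(inp)                # (1, 1, 96, 160) attention map
```

The two branches stand in for the paper's multi-resolution idea: the coarse branch captures context at half resolution while the fine branch preserves detail, and both views are concatenated before the final prediction.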
Original language | English |
---|---|
Pages (from-to) | 1800-1808 |
Journal | IEEE Transactions on Industrial Electronics |
Volume | 69 |
Issue number | 2 |
DOIs | |
Publication status | Published - Feb 2022 |
Externally published | Yes |
Keywords
- data-driven estimation
- Driver attention
- Estimation
- Feature extraction
- gaze direction
- Gaze tracking
- Glass
- Optical imaging
- saliency map
- Task analysis
- Vehicles
ASJC Scopus subject areas
- Control and Systems Engineering
- Electrical and Electronic Engineering