TY - JOUR
T1 - A visual reasoning-based approach for driving experience improvement in the AR-assisted head-up displays
AU - Liang, Yongshi
AU - Zheng, Pai
AU - Xia, Liqiao
N1 - Funding Information:
The work described in this paper was supported by the Smart Traffic Fund of the Transport Department (Ref. PSRI/35/2202/PR), HKSAR, China.
Publisher Copyright:
© 2023 Elsevier Ltd
PY - 2023/1
Y1 - 2023/1
N2 - Enabled by advanced data analytics and intelligent computing, augmented reality head-up displays (AR-HUDs) are endowed with a certain degree of intelligence, serving as in-car assistance systems that provide more convenience for drivers and ensure safer traffic. Nevertheless, current AR-HUD systems fail to translate perceptual results into recommended driving strategies as a form of cognitive intelligence, relying solely on drivers' own decision-making. To pave the way, this work proposes a stepwise visual reasoning-based approach for presenting perceptual, predictive, and reasoning information to drivers on AR-HUDs toward cognitive intelligence. Firstly, a Driving Scenario Knowledge Graph comprising road elements and empirical knowledge is established. Then, by analyzing the video streams and images collected by an in-car visual camera, the driving scene is perceived comprehensively, including 1) identifying road elements and 2) recognizing moving elements' intentions. Afterwards, a graph-based driving scenario reasoning model, the driving scenario-adaptive KAGNET, is built to generate driving strategy recommendations. Moreover, the analyzed information is shown on the HUDs via pre-defined AR graphics to support drivers intuitively. Lastly, a case study is given to prove the approach's feasibility. As an explorative study, limitations and future work are emphasized to encourage further study and open discussion in this area in pursuit of better AR-HUD implementation.
AB - Enabled by advanced data analytics and intelligent computing, augmented reality head-up displays (AR-HUDs) are endowed with a certain degree of intelligence, serving as in-car assistance systems that provide more convenience for drivers and ensure safer traffic. Nevertheless, current AR-HUD systems fail to translate perceptual results into recommended driving strategies as a form of cognitive intelligence, relying solely on drivers' own decision-making. To pave the way, this work proposes a stepwise visual reasoning-based approach for presenting perceptual, predictive, and reasoning information to drivers on AR-HUDs toward cognitive intelligence. Firstly, a Driving Scenario Knowledge Graph comprising road elements and empirical knowledge is established. Then, by analyzing the video streams and images collected by an in-car visual camera, the driving scene is perceived comprehensively, including 1) identifying road elements and 2) recognizing moving elements' intentions. Afterwards, a graph-based driving scenario reasoning model, the driving scenario-adaptive KAGNET, is built to generate driving strategy recommendations. Moreover, the analyzed information is shown on the HUDs via pre-defined AR graphics to support drivers intuitively. Lastly, a case study is given to prove the approach's feasibility. As an explorative study, limitations and future work are emphasized to encourage further study and open discussion in this area in pursuit of better AR-HUD implementation.
KW - Augmented reality
KW - Graph neural network
KW - Head-up display
KW - Smart traffic
KW - Visual reasoning
UR - http://www.scopus.com/inward/record.url?scp=85147245991&partnerID=8YFLogxK
U2 - 10.1016/j.aei.2023.101888
DO - 10.1016/j.aei.2023.101888
M3 - Journal article
AN - SCOPUS:85147245991
SN - 1474-0346
VL - 55
JO - Advanced Engineering Informatics
JF - Advanced Engineering Informatics
M1 - 101888
ER -