Abstract
Predicting visual attention facilitates an adaptive virtual museum environment and provides a context-aware and interactive user experience. Efforts to model visual attention from eye-tracking data have so far been limited to 2D cases, and the topic has yet to be approached in a 3D virtual environment or from a spatiotemporal perspective. We present EDVAM, the first 3D Eye-tracking Dataset for Visual Attention modeling in a virtual Museum. In addition, a deep learning model is devised and tested on EDVAM to predict a user’s subsequent visual attention from previous eye movements. This work provides a reference for visual attention modeling and context-aware interaction in the context of virtual museums.
| Translated title of the contribution | EDVAM: a 3D eye-tracking dataset for visual attention modeling in a virtual museum |
|---|---|
| Original language | Chinese (Simplified) |
| Pages (from-to) | 101-112 |
| Number of pages | 12 |
| Journal | Frontiers of Information Technology &amp; Electronic Engineering |
| Volume | 23 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - Jan 2022 |
| Externally published | Yes |
Keywords
- Deep learning
- Eye-tracking datasets
- Gaze detection
- TP391
- Virtual museums
- Visual attention
ASJC Scopus subject areas
- Signal Processing
- Hardware and Architecture
- Computer Networks and Communications
- Electrical and Electronic Engineering