EDVAM:用于虚拟博物馆视觉注意建模的三维眼动数据集

Translated title of the contribution: EDVAM: a 3D eye-tracking dataset for visual attention modeling in a virtual museum

Yunzhan Zhou, Tian Feng, Shihui Shuai, Xiangdong Li, Lingyun Sun, Henry Been Lirn Duh

Research output: Journal article publication › Journal article › Academic research › peer-review

12 Citations (Scopus)

Abstract

Predicting visual attention facilitates an adaptive virtual museum environment and provides a context-aware, interactive user experience. Explorations into visual attention mechanisms using eye-tracking data have so far been limited to 2D cases; researchers have yet to approach this topic in a 3D virtual environment or from a spatiotemporal perspective. We present EDVAM, the first 3D Eye-tracking Dataset for Visual Attention modeling in a virtual Museum. In addition, a deep learning model is devised and tested on EDVAM to predict a user’s subsequent visual attention from previous eye movements. This work provides a reference for visual attention modeling and context-aware interaction in the context of virtual museums.
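The prediction task described in the abstract can be illustrated with a minimal sketch. This is not the authors' deep learning model: it fits a simple linear autoregressive predictor (via least squares) on a synthetic 3D gaze trajectory, mapping a window of past gaze points to the next one. A deep sequence model such as the one devised in the paper would replace the least-squares step; the trajectory, window size, and noise level here are all illustrative assumptions.

```python
import numpy as np

def build_windows(gaze, k):
    """Stack k consecutive 3D gaze points as the input; the next point is the target."""
    X, y = [], []
    for t in range(len(gaze) - k):
        X.append(gaze[t:t + k].ravel())  # k past points flattened into one feature vector
        y.append(gaze[t + k])            # the following gaze point
    return np.array(X), np.array(y)

# Hypothetical gaze trajectory: a smooth spiral path through a 3D scene,
# perturbed with small Gaussian noise to mimic eye-tracker jitter.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
gaze = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)
gaze += rng.normal(scale=0.01, size=gaze.shape)

k = 5                                   # number of past fixations used as context
X, y = build_windows(gaze, k)
split = 150                             # train on the first 150 windows, test on the rest
W, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)  # linear stand-in for the deep model
pred = X[split:] @ W
err = np.linalg.norm(pred - y[split:], axis=1).mean()       # mean 3D prediction error
```

On this smooth synthetic path the linear predictor already tracks the next gaze point closely; the point of a learned sequence model is to handle the abrupt, scene-dependent attention shifts that real museum visitors produce.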

Original language: Chinese (Simplified)
Pages (from-to): 101-112
Number of pages: 12
Journal: Frontiers of Information Technology and Electronic Engineering
Volume: 23
Issue number: 1
DOIs
Publication status: Published - Jan 2022
Externally published: Yes

Keywords

  • Deep learning
  • Eye-tracking datasets
  • Gaze detection
  • TP391
  • Virtual museums
  • Visual attention

ASJC Scopus subject areas

  • Signal Processing
  • Hardware and Architecture
  • Computer Networks and Communications
  • Electrical and Electronic Engineering

