SenseViewer: A unified rendering interface of visual and haptic cues in medical images

Bing Nan Li, Xiang Shan, Jing Qin, Weimin Huang, Ning An

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

2 Citations (Scopus)

Abstract

We propose a new unified rendering interface, SenseViewer, that allows users to annotate and label visual, auditory and, in particular, haptic cues on medical images. This new interface may better support pre-operative planning, surgical training and medical robot guidance. Users can touch and feel virtual organs and tissues through haptics, while the visual and auditory cues help to define optimal paths for planning and guidance. SenseViewer is one of the earliest dedicated interfaces for rendering visual and haptic cues in medical images. Moreover, it employs magnetic resonance elastography to obtain quantitative soft tissue viscoelasticity, which makes the haptic cues more realistic. SenseViewer will be further enhanced and integrated into our innovative Image-guided Robot-Assisted Training (IRAS) system.
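The abstract notes that MRE-derived viscoelasticity makes haptic cues more realistic. As a minimal sketch of how such quantitative tissue parameters could drive force feedback, the following assumes a Kelvin-Voigt spring-damper contact model with hypothetical parameter values; it is an illustration, not the paper's implementation:

```python
# Hypothetical sketch: computing a haptic reaction force from tissue
# viscoelasticity parameters such as those magnetic resonance elastography
# (MRE) could supply. Kelvin-Voigt model: F = k*x + b*v.

def haptic_force(penetration_depth, penetration_velocity, stiffness, damping):
    """Return the reaction force (N) for a probe indenting a viscoelastic tissue.

    penetration_depth    -- indentation into the virtual tissue (m), >= 0
    penetration_velocity -- rate of indentation (m/s)
    stiffness, damping   -- assumed Kelvin-Voigt parameters (N/m, N*s/m)
    """
    if penetration_depth <= 0.0:
        return 0.0  # probe is not in contact with the tissue
    # Elastic (spring) term plus viscous (damper) term.
    return stiffness * penetration_depth + damping * penetration_velocity

# Example with assumed soft-tissue parameters: 2 mm indentation at 10 mm/s.
force = haptic_force(0.002, 0.01, stiffness=300.0, damping=5.0)
print(force)  # elastic 0.6 N plus viscous 0.05 N
```

In such a scheme, per-voxel stiffness and damping maps estimated from MRE would replace the constant parameters above, so that each tissue region feels different under the haptic probe.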
Original language: English
Title of host publication: 2013 IEEE International Conference on Robotics and Biomimetics, ROBIO 2013
Publisher: IEEE Computer Society
Pages: 2209-2212
Number of pages: 4
DOIs
Publication status: Published - 1 Jan 2013
Externally published: Yes
Event: 2013 IEEE International Conference on Robotics and Biomimetics, ROBIO 2013 - Shenzhen, China
Duration: 12 Dec 2013 - 14 Dec 2013

Conference

Conference: 2013 IEEE International Conference on Robotics and Biomimetics, ROBIO 2013
Country/Territory: China
City: Shenzhen
Period: 12/12/13 - 14/12/13

Keywords

  • Computer-aided surgical training
  • cue rendering
  • haptic cues
  • magnetic resonance elastography

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Science Applications
  • Biotechnology
