Abstract
We propose a new unified rendering interface, SenseViewer, that allows users to annotate and label visual, auditory, and in particular haptic cues on medical images. This interface may better support pre-operative planning, surgical training, and medical robot guidance. Through haptics, users can touch and feel virtual organs and tissues, while the visual and auditory cues help define optimal paths for planning and guidance. SenseViewer is one of the earliest dedicated interfaces for rendering visual and haptic cues in medical images. Moreover, it employs magnetic resonance elastography to obtain quantitative soft tissue viscoelasticity, which makes the haptic cues more realistic. SenseViewer will be further enhanced and integrated into our innovative Image-guided Robot Assisted Training (IRAS) system.
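The abstract does not specify how the elastography-derived viscoelasticity drives the haptic feedback. The sketch below is a minimal illustration, assuming a simple Kelvin-Voigt (spring-damper) model in which an MRE-derived storage modulus is mapped to stiffness and a loss modulus to damping; the parameter names, scale factors, and example tissue values are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (not the paper's method): compute a haptic reaction force
# from MRE-derived viscoelasticity using a Kelvin-Voigt (spring-damper) model.
# Assumed mapping: storage modulus G' -> stiffness, loss modulus G'' -> damping,
# each scaled by a hypothetical constant into device-level units.

from dataclasses import dataclass


@dataclass
class TissueViscoelasticity:
    storage_modulus_kpa: float  # G' from MRE (elastic part), in kPa
    loss_modulus_kpa: float     # G'' from MRE (viscous part), in kPa


def haptic_force(tissue: TissueViscoelasticity,
                 penetration_m: float,
                 velocity_m_s: float,
                 k_scale: float = 50.0,
                 b_scale: float = 5.0) -> float:
    """Reaction force (N) opposing tool penetration into virtual tissue.

    F = k * x + b * v, with k and b derived from the MRE moduli via
    hypothetical scale factors (k_scale, b_scale).
    """
    if penetration_m <= 0.0:
        return 0.0  # tool is not in contact with the tissue
    k = k_scale * tissue.storage_modulus_kpa  # effective stiffness, N/m
    b = b_scale * tissue.loss_modulus_kpa     # effective damping, N*s/m
    return k * penetration_m + b * velocity_m_s


# Example: a stiffer, liver-like tissue yields a larger opposing force than a
# softer, fat-like tissue at the same penetration depth and tool speed.
liver = TissueViscoelasticity(storage_modulus_kpa=2.2, loss_modulus_kpa=0.9)
fat = TissueViscoelasticity(storage_modulus_kpa=0.8, loss_modulus_kpa=0.3)
print(haptic_force(liver, penetration_m=0.004, velocity_m_s=0.01))
print(haptic_force(fat, penetration_m=0.004, velocity_m_s=0.01))
```

In such a scheme the per-voxel moduli from the elastography map would be looked up at the tool contact point each haptic frame, so softer and stiffer regions feel different under the same probing motion.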
| Original language | English |
| --- | --- |
| Title of host publication | 2013 IEEE International Conference on Robotics and Biomimetics, ROBIO 2013 |
| Publisher | IEEE Computer Society |
| Pages | 2209-2212 |
| Number of pages | 4 |
| DOIs | |
| Publication status | Published - 1 Jan 2013 |
| Externally published | Yes |
| Event | 2013 IEEE International Conference on Robotics and Biomimetics, ROBIO 2013 - Shenzhen, China. Duration: 12 Dec 2013 → 14 Dec 2013 |
Conference
| Conference | 2013 IEEE International Conference on Robotics and Biomimetics, ROBIO 2013 |
| --- | --- |
| Country/Territory | China |
| City | Shenzhen |
| Period | 12/12/13 → 14/12/13 |
Keywords
- Computer-aided surgical training
- cue rendering
- haptic cues
- magnetic resonance elastography
ASJC Scopus subject areas
- Artificial Intelligence
- Computer Science Applications
- Biotechnology