Abstract
It is always challenging for deaf and speech-impaired people to communicate with non-sign language users. A real-time sign language recognition system using 3D motion sensors could lower the aforementioned communication barrier. However, most existing gesture recognition systems adopt a single-sensor framework, whose performance is susceptible to occlusions. In this paper, we propose a real-time multi-sensor recognition system for American Sign Language (ASL). Data collected from Leap Motion sensors are fused using multi-sensor data fusion (MSDF), and recognition is performed using hidden Markov models (HMM). Experimental results demonstrate that the proposed system delivers higher recognition accuracy than single-sensor systems. Owing to its low implementation cost and higher accuracy, the proposed system can be widely deployed and bring convenience to sign language users.
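The sketch below is not the authors' implementation; it only illustrates, under stated assumptions, the pipeline the abstract describes: per-frame feature streams from two Leap Motion sensors are fused, and each ASL sign is then scored by its own Gaussian HMM (here via the hmmlearn library). The feature layout, the simple averaging used as the fusion step, and the model sizes are illustrative assumptions, since the abstract does not specify them.

```python
# Minimal sketch of a two-sensor fusion + per-class HMM recognizer.
# Assumptions: each sensor yields (n_frames, n_features) arrays of hand
# features, fusion is a plain element-wise average, and one GaussianHMM
# is trained per sign label. None of this is taken from the paper itself.
import numpy as np
from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn


def fuse_frames(sensor_a: np.ndarray, sensor_b: np.ndarray) -> np.ndarray:
    """Fuse per-frame feature vectors from two sensors (averaging stands in
    for the MSDF step described in the abstract)."""
    return 0.5 * (sensor_a + sensor_b)


def train_sign_models(training_data, n_states=5):
    """Train one HMM per sign.

    training_data maps a sign label to a list of fused feature sequences,
    each an (n_frames, n_features) array.
    """
    models = {}
    for sign, sequences in training_data.items():
        X = np.vstack(sequences)                    # concatenated frames
        lengths = [len(seq) for seq in sequences]   # per-sequence lengths
        hmm = GaussianHMM(n_components=n_states, covariance_type="diag",
                          n_iter=20)
        hmm.fit(X, lengths)
        models[sign] = hmm
    return models


def recognize(models, fused_sequence):
    """Return the sign whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda sign: models[sign].score(fused_sequence))
```

In use, `fuse_frames` would be applied to synchronized frames from both sensors before both training and recognition, so occlusion of one sensor degrades the fused features gracefully rather than dropping the observation entirely.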
Original language | English |
---|---|
Title of host publication | Proceedings - 2015 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery, CyberC 2015 |
Publisher | IEEE |
Pages | 411-414 |
Number of pages | 4 |
ISBN (Electronic) | 9781467391993 |
DOIs | |
Publication status | Published - 26 Oct 2015 |
Event | 7th International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery, CyberC 2015 - Xi'an, China. Duration: 17 Sept 2015 → 19 Sept 2015 |
Conference
Conference | 7th International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery, CyberC 2015 |
---|---|
Country/Territory | China |
City | Xi'an |
Period | 17/09/15 → 19/09/15 |
Keywords
- American Sign Language
- depth sensors
- hidden Markov models
- Leap Motion sensor
- sensor fusion
- Sign language recognition
ASJC Scopus subject areas
- Computational Theory and Mathematics
- Computer Networks and Communications
- Artificial Intelligence