Live demonstration: A HMM-based real-time sign language recognition system with multiple depth sensors

Kai Yin Fok, Chi Tsun Cheng, Nuwan Ganganath

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

7 Citations (Scopus)


Automatic sign language recognition plays an important role in communication for sign language users. Most existing sign language recognition systems rely on input from a single sensor. However, such systems may fail to recognize hand gestures correctly when parts of a gesture are occluded. In this work, we propose a novel system for real-time recognition of the digits in American Sign Language (ASL). The proposed system utilizes two Leap Motion sensors to capture hand gestures from different angles. Sensory data are preprocessed using a multi-sensor data fusion approach, and ASL digits are recognized in real-time from the fused data using hidden Markov models (HMMs). Experimental results of the proposed sign language recognition system demonstrate its improved performance over single-sensor systems. With a low implementation cost and a high recognition accuracy, the proposed system can be widely adopted in many real-world applications and bring convenience to ASL users worldwide.
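The pipeline described above (capture from two sensors, fuse the streams, then classify with per-digit HMMs) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the fusion rule (forming a joint discrete symbol per frame from the two sensors' quantized observations) and the discrete emission model are assumptions, and `forward_loglik`, `fuse`, and `classify` are hypothetical helper names.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm.
    pi: (N,) initial state probs; A: (N, N) transitions; B: (N, M) emissions."""
    alpha = pi * B[:, obs[0]]          # forward variable at t = 0
    loglik = 0.0
    for t in range(1, len(obs)):
        c = alpha.sum()                # scaling factor to avoid underflow
        loglik += np.log(c)
        alpha = (alpha / c) @ A * B[:, obs[t]]
    return loglik + np.log(alpha.sum())

def fuse(seq_a, seq_b, codebook_size):
    """Hypothetical fusion rule: merge the two sensors' quantized symbol
    streams into one joint symbol per frame."""
    return [a * codebook_size + b for a, b in zip(seq_a, seq_b)]

def classify(obs, models):
    """models: dict mapping a digit label to its (pi, A, B) parameters.
    Return the digit whose HMM best explains the fused sequence."""
    return max(models, key=lambda d: forward_loglik(obs, *models[d]))
```

In a real system each digit's HMM would be trained on fused gesture sequences (e.g. with Baum-Welch); here only the recognition step is shown, since that is the part that runs per frame sequence at test time.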
Original language: English
Title of host publication: 2015 IEEE International Symposium on Circuits and Systems, ISCAS 2015
Number of pages: 1
ISBN (Electronic): 9781479983919
Publication status: Published - 1 Jan 2015
Event: IEEE International Symposium on Circuits and Systems, ISCAS 2015 - Lisbon, Portugal
Duration: 24 May 2015 - 27 May 2015


Conference: IEEE International Symposium on Circuits and Systems, ISCAS 2015

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
