Abstract
Indoor localization with high accuracy and efficiency has attracted much attention. With visible light communication (VLC), the LED lights in buildings, once modulated, hold great potential to serve as ubiquitous indoor localization infrastructure. However, this entails retrofitting the lighting system and is hence costly to adopt widely. To alleviate this problem, we propose to exploit both modulated LEDs and existing unmodulated lights as landmarks. On this basis, we present a novel inertial-aided visible light positioning (VLP) system for lightweight indoor localization on resource-constrained platforms, such as service robots and mobile devices. With blob detection, tracking, and VLC decoding on rolling-shutter camera images, a visual front end extracts two types of blob features, i.e., mapped landmarks (MLs) and opportunistic features (OFs). These are tightly fused with inertial measurements in a stochastic cloning sliding-window extended Kalman filter (EKF) for localization. We evaluate the system through extensive experiments. The results show that it can provide lightweight, accurate, and robust global pose estimates in real time. Compared with our previous ML-only inertial-aided VLP solution, the proposed system achieves superior positional accuracy and robustness under challenging light configurations, such as sparse ML/OF distribution.
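The stochastic-cloning sliding-window EKF mentioned in the abstract can be illustrated with a minimal sketch. The example below is a hypothetical 2D position-only filter, not the paper's implementation (the actual state would include full 6-DoF poses, IMU biases, and camera-based ML/OF measurements): on each image, the current state is cloned by augmenting the state vector and covariance, old clones beyond the window are marginalized, and a decoded ML with a known map position provides a direct update.

```python
import numpy as np

class SlidingWindowEKF:
    """Minimal stochastic-cloning sliding-window EKF sketch (2D position only).

    State layout: [current position (2), clone_1 (2), ..., clone_k (2)].
    This is an illustrative toy, not the system described in the paper.
    """

    def __init__(self, window=3):
        self.window = window            # max number of retained clones
        self.x = np.zeros(2)            # state vector (starts with current position)
        self.P = np.eye(2) * 0.1        # state covariance

    def propagate(self, u, q=0.01):
        """Apply odometry/inertial increment u to the current block only."""
        self.x[:2] += u
        self.P[:2, :2] += q * np.eye(2)  # process noise on the current state

    def clone(self):
        """Stochastic cloning: duplicate the current block, keeping
        cross-covariances consistent via the duplication Jacobian J."""
        n = self.x.size
        J = np.vstack([np.eye(n),
                       np.hstack([np.eye(2), np.zeros((2, n - 2))])])
        self.x = J @ self.x
        self.P = J @ self.P @ J.T
        # Marginalize the oldest clone once the window is exceeded.
        if (self.x.size - 2) // 2 > self.window:
            keep = [0, 1] + list(range(4, self.x.size))
            self.x = self.x[keep]
            self.P = self.P[np.ix_(keep, keep)]

    def update(self, z, r=0.01):
        """EKF update from a mapped landmark observed as a direct
        (noisy) measurement of the current position."""
        n = self.x.size
        H = np.hstack([np.eye(2), np.zeros((2, n - 2))])
        S = H @ self.P @ H.T + r * np.eye(2)
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - H @ self.x)
        self.P = (np.eye(n) - K @ H) @ self.P

# Toy usage: move, clone at an image time, move again, then correct
# with a landmark-derived position fix.
f = SlidingWindowEKF(window=2)
f.propagate(np.array([1.0, 0.0]))
f.clone()
f.propagate(np.array([0.0, 1.0]))
f.update(np.array([1.05, 0.95]))
```

Cloning (rather than simply appending an independent state) preserves the cross-covariance between the current pose and past poses, which is what allows relative measurements over the window to be fused consistently.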
| Original language | English |
|---|---|
| Journal | IEEE Transactions on Automation Science and Engineering |
| DOIs | |
| Publication status | Accepted/In press - 2021 |
Keywords
- Aided inertial navigation
- Cameras
- extended Kalman filter (EKF)
- Feature extraction
- indoor localization
- Light emitting diodes
- Lighting
- Location awareness
- sensor fusion
- service robots
- visible light communication (VLC)
- visible light positioning (VLP)
- Visualization
ASJC Scopus subject areas
- Control and Systems Engineering
- Electrical and Electronic Engineering