Intelligent Situation Awareness and Navigation Aid for the Visually Impaired

This research presents a holistic vision-based mobile assistive navigation system to help blind and visually impaired people travel independently indoors. The system detects dynamic obstacles and adjusts its path planning in real time to improve navigation safety. First, we develop an indoor map editor that parses geometric information from architectural models and generates a semantic map consisting of a global 2D traversable grid-map layer and context-aware layers. By leveraging the visual positioning service (VPS) of the Google Tango device, we design a map alignment algorithm that bridges the visual area description file (ADF) and the semantic map to achieve semantic localization. Using the on-board RGB-D camera, we develop an efficient obstacle detection and avoidance approach based on a time-stamped map Kalman filter (TSM-KF) algorithm. A multi-modal human-machine interface (HMI) is designed with speech-audio interaction and robust haptic interaction through an electronic SmartCane. Finally, field experiments with blindfolded and blind subjects demonstrate that the proposed system provides an effective tool for indoor navigation and wayfinding.
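
To make the mapping and localization pipeline concrete, here is a minimal Python sketch of two of the pieces described above: a layered semantic map (a global 2D traversable grid plus context-aware annotation layers) and a map alignment step that carries a VPS pose from the ADF frame into the semantic-map frame. All names (SemanticMap, align_adf_to_map, semantic_localize) are illustrative assumptions rather than the system's actual API, and the alignment is shown as a standard least-squares rigid transform estimated from matched landmark pairs, one common way to bridge two map coordinate frames.

    import numpy as np

    class SemanticMap:
        """Global 2D traversable grid layer plus context-aware layers."""
        def __init__(self, grid, resolution):
            self.grid = grid              # 2D occupancy array: 0 free, 1 blocked
            self.resolution = resolution  # meters per grid cell
            self.layers = {}              # e.g. "rooms", "doors", "elevators"

        def add_layer(self, name, annotations):
            self.layers[name] = annotations  # {label: (x, y) in map frame}

    def align_adf_to_map(adf_pts, map_pts):
        """Estimate the 2D rigid transform (R, t) mapping ADF coordinates into
        the semantic-map frame from matched landmark pairs (Kabsch method)."""
        adf_pts, map_pts = np.asarray(adf_pts), np.asarray(map_pts)
        ca, cm = adf_pts.mean(0), map_pts.mean(0)   # centroids of each point set
        H = (adf_pts - ca).T @ (map_pts - cm)       # 2x2 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                    # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cm - R @ ca
        return R, t

    def semantic_localize(adf_pose_xy, R, t, smap):
        """Transform a VPS pose from the ADF frame into the semantic map and
        return its grid cell, enabling lookups against the context layers."""
        xy = R @ np.asarray(adf_pose_xy) + t
        cell = tuple(np.floor(xy / smap.resolution).astype(int))
        return xy, cell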


Real-time obstacle avoidance during assistive navigation
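
The exact TSM-KF formulation is given in the publications below; as a rough illustration of the idea, the following Python sketch pairs a time-stamped occupancy grid, in which obstacle cells expire unless re-observed so that moving obstacles do not leave stale blockages behind, with a constant-velocity Kalman filter that smooths noisy obstacle positions from the RGB-D camera. The class names, the 2-second expiry window, and the noise parameters are assumptions chosen for illustration, not values from the papers.

    import numpy as np

    class TimeStampedGrid:
        """Occupancy grid whose cells expire: obstacles that are no longer
        observed are dropped after ttl seconds."""
        def __init__(self, shape, ttl=2.0):
            self.stamp = np.full(shape, -np.inf)  # last observation time per cell
            self.ttl = ttl

        def mark(self, cells, now):
            for r, c in cells:                    # cells hit by RGB-D detections
                self.stamp[r, c] = now

        def occupied(self, now):
            return (now - self.stamp) < self.ttl  # boolean occupancy mask

    class ObstacleKF:
        """Constant-velocity Kalman filter over state [x, y, vx, vy],
        observing noisy (x, y) obstacle positions."""
        def __init__(self, xy, q=0.05, r=0.1):
            self.x = np.array([xy[0], xy[1], 0.0, 0.0])  # initial state
            self.P = np.eye(4)                           # state covariance
            self.q, self.r = q, r                        # process / measurement noise
            self.H = np.array([[1., 0., 0., 0.],
                               [0., 1., 0., 0.]])        # observe position only

        def step(self, z, dt):
            F = np.eye(4)
            F[0, 2] = F[1, 3] = dt                       # constant-velocity prediction
            self.x = F @ self.x
            self.P = F @ self.P @ F.T + self.q * np.eye(4)
            S = self.H @ self.P @ self.H.T + self.r * np.eye(2)
            K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
            self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
            self.P = (np.eye(4) - K @ self.H) @ self.P
            return self.x[:2], self.x[2:]                # filtered position, velocity

In this picture, each filtered obstacle track stamps the grid cells it covers, and the planner replans whenever the current path intersects occupied(now), which is how dynamic obstacles can trigger real-time path adjustment.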


This research was successfully demonstrated at the U.S. Department of Transportation (DOT) Headquarters in Washington, D.C., in March 2016.

The results of this research have been published in:

  • B. Li, J. P. Muñoz, X. Rong, Q. Chen, J. Xiao, Y. Tian, A. Arditi, and M. Yousuf. Vision-based Mobile Indoor Assistive Navigation Aid for Blind People. IEEE Transactions on Mobile Computing, 2018.

  • B. Li, J. P. Muñoz, X. Rong, J. Xiao, Y. Tian, and A. Arditi. ISANA: Wearable Context-Aware Indoor Assistive Navigation with Obstacle Avoidance for the Blind. European Conference on Computer Vision (ECCV), 2016.