Cognitive Assistive Navigation for the Visually Impaired

Indoor assistive navigation systems play an essential role in the independent mobility of Blind and Visually Impaired (BVI) people in unfamiliar environments. The topic has been researched extensively in recent years alongside the fast evolution of mobile technologies, from applying robotics simultaneous localization and mapping (SLAM) approaches and deploying infrastructure sensors to integrating GIS indoor map databases. Although these studies provide useful prototypes that help blind people travel independently, cognitive assistance is still at an early stage in the era of deep learning for computer vision.


We propose a novel cognitive assistive indoor navigation system, built on deep learning, that gives blind users cognitive perception of their surroundings during the navigation journey. First, an indoor semantic map database is built to model the environment's spatial context-aware information on top of the Google Tango visual positioning service (VPS). Then, a TinyYOLO (You Only Look Once) convolutional neural network (CNN) model runs on the Tango Android phone for real-time recognition and tracking of moving people. Finally, scene understanding with a CNN and a long short-term memory (LSTM) network is performed on a cloud server. Experiments with blind subjects and blindfolded sighted subjects show that the system effectively guides the user to a destination and provides ambient cognitive perception.
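As an illustration of the on-phone detection stage, the sketch below runs a frozen TinyYOLO TensorFlow graph on a single camera frame. This is a minimal sketch, not the project's actual code: the graph file name tiny_yolo.pb and the tensor names input:0 and output:0 are hypothetical placeholders, and decoding the raw YOLO output grid into boxes (with non-maximum suppression) is omitted. The Android app itself would call the same model through TensorFlow's mobile API.

```python
# Minimal sketch of frozen-graph inference for a TinyYOLO person detector.
# Assumptions (not from the paper): the graph file name and tensor names
# below are hypothetical; input is a 416x416 RGB frame scaled to [0, 1].
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

GRAPH_PB = "tiny_yolo.pb"   # hypothetical frozen TinyYOLO graph file
INPUT_NAME = "input:0"      # hypothetical input tensor name
OUTPUT_NAME = "output:0"    # hypothetical output tensor name

# Load the serialized GraphDef and import it into a fresh graph.
graph_def = tf.GraphDef()
with tf.gfile.GFile(GRAPH_PB, "rb") as f:
    graph_def.ParseFromString(f.read())
graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")

sess = tf.Session(graph=graph)

def detect(frame_rgb: np.ndarray) -> np.ndarray:
    """Run one 416x416x3 uint8 RGB frame through the network and return
    the raw prediction grid; box decoding and NMS would follow."""
    x = frame_rgb.astype(np.float32)[None] / 255.0
    return sess.run(OUTPUT_NAME, feed_dict={INPUT_NAME: x})
```

In the deployed system this per-frame inference loop stays on the Tango phone, while the heavier CNN+LSTM scene-understanding model runs on the cloud server.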

The experiment demo video shows the deep-learning-based recognition and multi-box tracking, implemented with TensorFlow, running during assistive navigation.
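The paper does not spell out the tracking algorithm, so the following pure-Python sketch shows one common baseline for multi-box tracking: greedily associating each new detection with the existing track of highest intersection-over-union (IoU). The 0.3 matching threshold is an assumed, illustrative value.

```python
# Illustrative greedy IoU tracker (an assumed baseline, not necessarily
# the tracker used in this system). Boxes are (x1, y1, x2, y2) tuples.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

class MultiBoxTracker:
    """Keep stable integer IDs for detections across frames by matching
    each new box to the best-overlapping track from the previous frame."""

    def __init__(self, iou_thresh=0.3):   # 0.3 is an assumed threshold
        self.iou_thresh = iou_thresh
        self.tracks = {}                  # track id -> last seen box
        self.next_id = 0

    def update(self, detections):
        assigned, unmatched = {}, set(self.tracks)
        for box in detections:
            best_id, best_iou = None, self.iou_thresh
            for tid in unmatched:
                v = iou(self.tracks[tid], box)
                if v > best_iou:
                    best_id, best_iou = tid, v
            if best_id is None:           # no overlap: start a new track
                best_id, self.next_id = self.next_id, self.next_id + 1
            else:
                unmatched.discard(best_id)
            assigned[best_id] = box
        self.tracks = assigned            # tracks with no match are dropped
        return assigned

tracker = MultiBoxTracker()
print(tracker.update([(10, 10, 50, 80)]))                      # {0: ...}
print(tracker.update([(12, 11, 52, 82), (100, 20, 140, 90)]))  # {0: ..., 1: ...}
```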

This work has been summarized and published in:

  • B. Li, M. Budhai, B. Xiao, L. Yang, and J. Xiao, "Mobile Cognitive Indoor Assistive Navigation for Blind Persons," The 33rd CSUN Assistive Technology Conference, 2018.