• Sept. 27, 2018: My US patent “Method, apparatus and computer program product for mapping and modeling a three dimensional structure” (Inventors: Bing Li and Rich Valde) was published online.

  • May 28, 2018: My paper on vision-based assistive navigation was published in IEEE Transactions on Mobile Computing! Project.


(2019). Vision-based Mobile Indoor Assistive Navigation Aid for Blind People. IEEE Transactions on Mobile Computing, PDF, News, Video.

(2018). Collaborative Mapping and Autonomous Parking for Multi-story Parking Garage. IEEE Transactions on Intelligent Transportation Systems, PDF.

(2018). Semantic Metric 3D Reconstruction for Concrete Inspection. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshop, PDF, Code and Data, Video.

(2018). Mobile Cognitive Indoor Assistive Navigation for Blind Persons. The 33rd CSUN Assistive Technology Conference, PDF, Video.

(2017). Wall-climbing robot for non-destructive evaluation using impact-echo and metric learning SVM. Springer International Journal of Intelligent Robotics and Applications (IJIRA), PDF, Video.

(2017). A Robotic System Towards Concrete Structure Spalling And Crack Database. The IEEE International Conference on Robotics and Biomimetics (ROBIO), PDF.

(2017). Deep Concrete Inspection Using Unmanned Aerial Vehicle Towards CSSC Database. The IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Abstract, PDF, Video.

(2017). A Random Multi-Trajectory Generation Method for Online Emergency Threat Management (Analysis and Application in Path Planning Algorithm). Kinematics (Book Chapter), InTech Press, ISBN 978-953-51-3688-0, PDF.

(2017). An Assistive Indoor Navigation System for the Visually Impaired in Multi-Floor Environments. IEEE Int. Conf. on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER), Best Conference Paper Award, PDF, Video.

(2017). CCNY Smart Cane. IEEE Int. Conf. on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER), PDF, Video, News.

(2016). ISANA: Wearable Context-Aware Indoor Assistive Navigation with Obstacle Avoidance for the Blind. European Conference on Computer Vision (ECCV), PDF, Video.

(2016). Guided Text Spotting for Assistive Blind Navigation in Unfamiliar Indoor Environments. The International Symposium on Visual Computing (ISVC), PDF.

(2016). Demo: Assisting Visually Impaired People Navigate Indoors. International Joint Conference on Artificial Intelligence (IJCAI), PDF.

(2015). Rise-Rover: A Wall-Climbing Robot with High Reliability and Load-Carrying Capacity. International Conference on Climbing and Walking Robots and Support Technologies for Mobile Machines (CLAWAR), Industrial Robot Innovation Award, PDF, Video.

(2015). Assisting the Blind to Avoid Obstacle: A Wearable Obstacle Stereo Feedback System based on 3D Detection. The IEEE International Conference on Robotics and Biomimetics (ROBIO), PDF.

(2015). Robotic Impact-echo Non-Destructive Evaluation based on FFT and SVM. World Congress on Intelligent Control and Automation (WCICA), PDF.

(2015). A SLAM Based Semantic Indoor Navigation System for Visually Impaired Users. The IEEE International Conference on Systems, Man, and Cybernetics (SMC), PDF, Video1, Video2.

(2015). An Assistive Navigation Framework for the Visually Impaired. IEEE Transactions on Human-Machine Systems, PDF.

(2008). 3D Reconstruction Embedded System Based on Laser Scanner for Mobile Robot. The IEEE Conference on Industrial Electronics and Applications (ICIEA), PDF.


Intelligent Situation Awareness and Navigation Aid for the Visually Impaired

Indoor assistive navigation demonstrated at the U.S. DOT Headquarters

Cognitive Assistive Navigation for the Visually Impaired

Using deep learning approaches for object tracking and scene understanding

Deep Learning based Semantic 3D Inspection

Field test at bridge-tunnel vertical surface at Riverside Dr, New York

Wall-climbing Robot for Non-Destructive Evaluation (NDE)

Wall-climbing robot for non-destructive evaluation using impact-echo and metric learning SVM

Collaborative Mapping and Autonomous Parking

3D LiDAR mapping and autonomous indoor parking solution


I am currently teaching the following graduate-level class:

AuE 8930 Perception and Intelligence

Mon. and Wed.: 9:15-10:30 AM (starting Jan. 9)

Room: 401 CGEC, Clemson University

This course introduces the fundamental technologies for autonomous vehicle sensing, perception, and machine learning, spanning electromagnetic spectrum characteristics and signal acquisition, vehicle exteroceptive sensor data analysis, perspective geometry models, image and point cloud processing, and machine/deep learning approaches. Students will also gain hands-on programming experience with vehicle perception problems through homework and class projects.

COURSE OBJECTIVES (to provide a fundamental understanding of):

  • Electromagnetic spectrum characteristics and Radar signal processing.

  • The mechanism of human vision: eyes, visual brain, depth and color.

  • Visual perception using image processing and machine learning recognition.

  • 3D LiDAR and point cloud data representation and processing.

  • Deep learning for vehicle perceptual sensor data processing.
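As a small taste of the point cloud processing topic above, here is a minimal NumPy sketch of voxel-grid downsampling, a common first step in LiDAR pipelines; the function name and parameters are illustrative, not course material:

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.2):
    """Reduce a LiDAR point cloud by averaging points within each voxel.

    points: (N, 3) array of x, y, z coordinates.
    Returns one centroid per occupied voxel.
    """
    # Assign each point an integer voxel index.
    voxels = np.floor(points / voxel_size).astype(np.int64)
    # Group points that share a voxel and average them.
    _, inverse = np.unique(voxels, axis=0, return_inverse=True)
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

# Example: random points in a 10 m cube collapse to at most one
# centroid per 1 m voxel.
cloud = np.random.rand(1000, 3) * 10.0
reduced = voxel_downsample(cloud, voxel_size=1.0)
print(reduced.shape)
```

Downsampling like this bounds the data volume before registration or learning-based processing, trading point density for speed.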

I enjoy teaching, and previously gained considerable experience as an Adjunct Lecturer and Teaching Assistant at The City College of New York, The City University of New York.