EOS: An efficient obstacle segmentation for blind guiding

Yinan Ma, Qi Xu, Yue Wang, Jing Wu*, Chengnian Long*, Yi-Bing Lin

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Achieving high accuracy in real-time blind road condition recognition is important for helping visually impaired people sense their surrounding environment. However, existing systems are mainly designed around general object detection (pedestrians, vehicles, crosswalks, etc.) and ignore safety-critical objects such as obstacles (boxes, balls, etc.) falling on the walking area. To tackle this issue, we construct an efficient obstacle segmentation (EOS) based system built on a dedicated neural network, E-BiSeNet, which segments blind roads and performs real-time, accurate obstacle avoidance to help visually impaired people walk more safely. First, E-BiSeNet rethinks the structural redundancy in network depth and the computational expense of feature aggregation, so it can be readily deployed on portable GPUs. Second, a simple post-processing scheme, max logit (ML), applied to the pretrained network's segmentation outputs is introduced to locate unexpected on-road obstacles. Our “E-BiSeNet + ML” model outperforms state-of-the-art methods on both real-world and synthetic datasets. Through various experiments conducted in outdoor scenarios, the feasibility and reliability of the EOS have been extensively verified.
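
The max logit (ML) post-processing mentioned in the abstract can be sketched as follows. This is a minimal PyTorch illustration, assuming per-pixel segmentation logits of shape (C, H, W); the function name, threshold rule, and tensor layout are illustrative assumptions and not the paper's exact procedure.

    import torch

    def max_logit_obstacle_map(logits: torch.Tensor, threshold: float = None):
        """Score each pixel by the negative of its highest class logit.

        Pixels where the pretrained segmentation network (e.g. E-BiSeNet)
        is least confident are treated as unexpected on-road obstacles.
        logits: float tensor of shape (C, H, W).
        """
        max_logit, _ = logits.max(dim=0)            # (H, W) top class logit per pixel
        anomaly_score = -max_logit                   # low confidence -> high anomaly
        if threshold is None:
            # simple data-driven cutoff; the paper may calibrate this differently
            threshold = (anomaly_score.mean() + 2 * anomaly_score.std()).item()
        obstacle_mask = anomaly_score > threshold    # binary on-road obstacle map
        return anomaly_score, obstacle_mask

In practice such a score map would be restricted to the segmented blind-road region before thresholding, so only obstacles on the walking area trigger avoidance.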
Original language: American English
Pages (from-to): 117-128
Number of pages: 12
Journal: Future Generation Computer Systems
Volume: 140
State: Published - Mar 2023

Keywords

  • Blind guiding
  • Real-time semantic segmentation
  • Post-processing
  • Obstacle detection
