The position of blind lanes must be determined accurately for blind people to travel safely. To address the low accuracy and slow speed of traditional blind lane image segmentation algorithms, a semantic segmentation method based on SegNet and MobileNetV3 is proposed. The main idea is to replace the encoder of the original SegNet model with the feature extraction part of MobileNetV3 and to remove the pooling layers. Blind lane images were collected through online search and self-shooting, manually annotated with the LabelMe software, and used for training on the TensorFlow deep learning framework. The experimental results show that the improved model achieves high segmentation accuracy and recognition speed: the pixel accuracy of blind lane segmentation is 98.21%, the mean intersection over union is 96.29%, and the average time to process a 416 × 416 image is 0.057 s, which meets the real-time requirements of a blind guidance system.
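The encoder swap described above can be sketched in TensorFlow/Keras. This is a hypothetical illustration, not the authors' code: it uses the stock `MobileNetV3Small` backbone as the encoder and a simple SegNet-style decoder of upsampling and convolution stages (bilinear-free `UpSampling2D`, since the pooling indices are removed along with the pooling layers); the exact decoder widths and the choice of the Small variant are assumptions.

```python
import tensorflow as tf

def mobilenetv3_segnet(num_classes=2, input_shape=(416, 416, 3)):
    """Sketch of a SegNet-style model with a MobileNetV3 encoder.

    Hypothetical reconstruction: MobileNetV3Small replaces the original
    SegNet (VGG16) encoder; decoder widths are illustrative assumptions.
    """
    # Encoder: MobileNetV3 feature extractor (classification head removed).
    backbone = tf.keras.applications.MobileNetV3Small(
        input_shape=input_shape, include_top=False, weights=None)
    x = backbone.output  # 13x13 feature map for a 416x416 input (stride 32)

    # Decoder: five upsample+conv stages restore the input resolution
    # (13 -> 26 -> 52 -> 104 -> 208 -> 416). SegNet's max-unpooling is
    # not applicable here because the pooling layers were removed.
    for filters in (256, 128, 64, 32, 16):
        x = tf.keras.layers.UpSampling2D(2)(x)
        x = tf.keras.layers.Conv2D(filters, 3, padding="same")(x)
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.ReLU()(x)

    # Per-pixel class probabilities (e.g. blind lane vs. background).
    x = tf.keras.layers.Conv2D(num_classes, 1, activation="softmax")(x)
    return tf.keras.Model(backbone.input, x)
```

Because MobileNetV3 downsamples by a factor of 32, a 416 × 416 input yields a 13 × 13 bottleneck, and five 2× upsampling stages return the output to full resolution with one softmax score per class per pixel.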
Keywords: Image segmentation; Semantics; Education and training; Convolution; Feature extraction; Deep learning; Image processing