With the development of modern science and technology, vehicles are increasingly equipped with intelligent driver assistance systems, in which lane detection is a key function. Complex detection structures (either wide or deep) have been investigated to boost accuracy and overcome the challenges of complicated scenarios, but they sharply increase computation and memory cost as well as response time. For resource-constrained devices, lane detection networks with low cost and short inference time are required. To obtain more accurate lane detection results, a large (deep and wide) detection structure is framed to extract high-dimensional, highly robust features, and a deep supervision loss is applied at different resolutions and stages. Despite its high precision, the large detection network cannot be deployed directly on embedded devices because of its memory and computation demands. To make the network thinner and lighter, a general training strategy called self-knowledge distillation (SKD) is proposed. Unlike classical knowledge distillation, it requires no independent teacher and student networks; knowledge is distilled within the network itself. For a more comprehensive and precise evaluation, a new lane data set is collected, and the Caltech Lanes and TuSimple lane data sets are also used. Experiments further show that, via SKD, a small student network achieves detection accuracy similar to that of a large teacher network while requiring shorter inference time and less memory, so it can be deployed flexibly on resource-limited devices.
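The abstract does not give the exact SKD formulation, so the following is only a minimal PyTorch sketch of the general idea it describes: a multi-stage network trained with deep supervision at several resolutions, where shallower stages additionally mimic the (detached) soft prediction of the deepest stage instead of a separate teacher network. All names here (`MultiStageLaneNet`, `skd_loss`, `alpha`, `temp`) are hypothetical, not from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiStageLaneNet(nn.Module):
    """Hypothetical multi-stage backbone: each stage emits an auxiliary
    lane-logit map at its own resolution (deep supervision)."""

    def __init__(self, channels=(16, 32, 64)):
        super().__init__()
        self.stages = nn.ModuleList()
        self.heads = nn.ModuleList()
        in_ch = 3
        for ch in channels:
            self.stages.append(nn.Sequential(
                nn.Conv2d(in_ch, ch, 3, stride=2, padding=1),
                nn.BatchNorm2d(ch),
                nn.ReLU(inplace=True)))
            self.heads.append(nn.Conv2d(ch, 1, 1))  # per-stage lane logits
            in_ch = ch

    def forward(self, x):
        maps = []
        for stage, head in zip(self.stages, self.heads):
            x = stage(x)
            maps.append(head(x))
        return maps  # ordered shallow -> deep


def skd_loss(maps, target, alpha=0.5, temp=2.0):
    """Deep-supervision BCE on every stage, plus a self-distillation term
    pushing shallow stages toward the deepest stage's soft output."""
    deepest = maps[-1].detach()  # "teacher" signal from the same network
    loss = 0.0
    for m in maps:  # deep supervision at every resolution and stage
        m_up = F.interpolate(m, size=target.shape[-2:],
                             mode="bilinear", align_corners=False)
        loss = loss + F.binary_cross_entropy_with_logits(m_up, target)
    for m in maps[:-1]:  # distill deepest prediction into shallower stages
        t_dn = F.interpolate(deepest, size=m.shape[-2:],
                             mode="bilinear", align_corners=False)
        soft = torch.sigmoid(t_dn / temp)  # temperature-softened target
        loss = loss + alpha * F.binary_cross_entropy_with_logits(m / temp, soft)
    return loss


# Usage with dummy data: one loss call trains all stages jointly.
net = MultiStageLaneNet()
images = torch.randn(2, 3, 128, 256)
masks = (torch.rand(2, 1, 128, 256) > 0.9).float()  # dummy binary lane mask
loss = skd_loss(net(images), masks)
loss.backward()
```

Because the teacher signal is produced by the same forward pass, no second network is trained or stored, which matches the abstract's claim that SKD avoids independent teacher-student networks.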
Keywords: network architectures, video, image segmentation, image resolution, intelligence systems, roads, convolution