Journal of South China University of Technology (Natural Science Edition) ›› 2025, Vol. 53 ›› Issue (9): 22-30. doi: 10.12141/j.issn.1000-565X.240609

• Computer Science and Technology •


A Deep Learning Approach for Lane Detection

YUE Yongheng   ZHAO Zhihao   

  1. School of Civil Engineering and Transportation, Northeast Forestry University, Harbin 150040, Heilongjiang, China

  • Online: 2025-09-25   Published: 2025-04-27


Abstract:

To address the accuracy of lane detection for intelligent vehicles in complex scenes, this paper proposes a PANet-based lane detection algorithm that incorporates a multi-scale spatial attention mechanism. The algorithm builds on the anchor-based UFLD lane detection model and combines it with a PANet feature pyramid enhancement module constructed from depthwise separable convolutions to extract multi-scale features from images. In addition, a multi-scale spatial attention module is designed within the network, and the lightweight SimAM attention mechanism is introduced to sharpen the network's focus on target features. An adaptive feature fusion module is then designed to perform cross-scale fusion of the feature maps output by PANet, adaptively adjusting the fusion weights of feature maps at different scales and thereby strengthening the network's ability to extract complex features. Finally, experiments on the Tusimple dataset show that the proposed algorithm achieves a detection accuracy of 96.84%, an improvement of 1.03 percentage points over the original algorithm and better than mainstream methods. On the nine scenarios of the CULane dataset, the proposed algorithm achieves an overall F1 score of 72.74%, an improvement of 4.34 percentage points over the original algorithm and better than mainstream methods, with especially large gains in extreme scenes such as strong light and shadow, demonstrating that the proposed method detects lanes reliably in complex scenes. In addition, real-time tests show an inference speed of 118 FPS, which meets the real-time requirements of intelligent vehicles.
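The SimAM attention mentioned in the abstract is parameter-free: each activation is gated by a sigmoid of its inverse "energy", so activations that stand out from their channel mean receive larger weights. The sketch below follows the published SimAM formulation but is not the authors' implementation; the `(C, H, W)` layout and the `e_lambda` regularizer default are assumptions.

```python
import numpy as np

def simam(x, e_lambda=1e-4):
    """Parameter-free SimAM gating over a (C, H, W) feature map.

    Activations far from their channel mean get a higher inverse-energy
    score and therefore a stronger gate; e_lambda stabilizes the division.
    """
    _, h, w = x.shape
    n = h * w - 1                                   # neighbours per channel
    d = (x - x.mean(axis=(1, 2), keepdims=True)) ** 2
    v = d.sum(axis=(1, 2), keepdims=True) / n       # channel variance estimate
    e_inv = d / (4.0 * (v + e_lambda)) + 0.5        # inverse energy per pixel
    return x * (1.0 / (1.0 + np.exp(-e_inv)))       # sigmoid gate
```

Because the gate is a sigmoid, every output activation is attenuated toward zero rather than amplified, which is why the module adds no learnable parameters to the network.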

Key words: lane detection, deep learning, multi-scale spatial attention mechanism, adaptive feature fusion
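The adaptive cross-scale fusion described in the abstract can be sketched as a softmax-weighted sum of same-resolution feature maps, with one learned scalar score per scale. This is a minimal illustration of the general technique, not the paper's module; the function name, the pre-resampled inputs, and the per-scale logits are assumptions.

```python
import numpy as np

def fuse_scales(feature_maps, logits):
    """Softmax-weighted fusion of feature maps from different scales.

    feature_maps: list of arrays, each (C, H, W), already resampled to a
    common resolution; logits: one learned scalar score per scale. The
    softmax keeps the fusion weights positive and summing to one.
    """
    logits = np.asarray(logits, dtype=float)
    w = np.exp(logits - logits.max())               # numerically stable softmax
    w /= w.sum()
    return sum(wi * fm for wi, fm in zip(w, feature_maps))
```

With equal logits the fusion degenerates to a plain average; during training the logits shift so that the scale carrying the most useful evidence for a given pixel dominates the sum.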