Journal of South China University of Technology (Natural Science Edition) ›› 2025, Vol. 53 ›› Issue (2): 48-57. doi: 10.12141/j.issn.1000-565X.240225

• Traffic Safety •

A Joint Foggy Road Environment Perception Algorithm Based on an Improved CycleGAN and YOLOv8

YUE Yongheng, LEI Wenpeng   

  1. School of Civil Engineering and Transportation, Northeast Forestry University, Harbin 150040, Heilongjiang, China
  • Received: 2024-05-09 Online: 2025-02-25 Published: 2025-02-03
  • About the author: YUE Yongheng (b. 1973), male, Ph.D., associate professor; his research focuses on traffic safety and on control theory and its applications. E-mail: yueyyh@126.com
  • Supported by:
    the National Natural Science Foundation of China (62173107); the National Automobile Accident In-Depth Investigation System Funding Project (NAIS-ZL-ZHGL-2020018); the Key R&D Program of Heilongjiang Province (JD22A014)

Foggy Road Environment Perception Algorithm Based on an Improved CycleGAN and YOLOv8

YUE Yongheng, LEI Wenpeng   

  1. School of Civil Engineering and Transportation, Northeast Forestry University, Harbin 150040, Heilongjiang, China
  • Received: 2024-05-09 Online: 2025-02-25 Published: 2025-02-03
  • About the author: YUE Yongheng (b. 1973), male, Ph.D., associate professor; his research focuses on traffic safety and on control theory and its applications. E-mail: yueyyh@126.com
  • Supported by:
    the National Natural Science Foundation of China (62173107); the National Automobile Accident In-Depth Investigation System Funding Project (NAIS-ZL-ZHGL-2020018); the Key R&D Program of Heilongjiang Province (JD22A014)

Abstract (translated from the Chinese):

To address the reduced accuracy with which intelligent vehicles perceive the road environment under extreme haze conditions, a joint foggy-environment perception algorithm based on an improved CycleGAN and YOLOv8 is proposed. First, the CycleGAN framework is used for image-dehazing preprocessing: a self-attention mechanism is introduced into the generator network to strengthen its feature extraction capability, and a self-regularized color loss function is added to reduce color deviation from real images. Second, in the object-detection stage, the lightweight GhostConv network first replaces the original backbone to cut the computational load; a GAM attention module is then added to the neck network, effectively improving the network's interaction with global information; finally, the WIoU loss function reduces the harmful gradients produced by low-quality samples and accelerates model convergence. The algorithm was experimentally validated on the RESIDE and BDD100k datasets. The results show that the structural similarity between the dehazed images and the originals is 85%; compared with the original CycleGAN algorithm and the AODNet algorithm, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) improve by 2.24 dB and 15.4 percentage points, and by 2.5 dB and 36.3 percentage points, respectively. In addition, the improved YOLOv8 algorithm raises precision, recall, and mean average precision by 2.5, 1.8, and 1.1 percentage points over the original algorithm. The experimental results confirm that the proposed algorithm outperforms conventional algorithms in recall, detection accuracy, and related measures, and has practical value.

Key words: intelligent vehicle, environmental perception, image dehazing, CycleGAN, object detection, YOLOv8

Abstract:

In response to the issue of reduced road environment perception accuracy for intelligent vehicles under extreme haze conditions, this paper proposed a joint haze environment perception algorithm based on an improved CycleGAN and YOLOv8. Firstly, the CycleGAN algorithm was used as the framework for image defogging preprocessing. A self-attention mechanism was incorporated into the generator network to enhance the network's feature extraction capability. Additionally, to minimize color discrepancies with real images, a self-regularized color loss function was introduced. Secondly, in the object detection phase, the lightweight GhostConv network was first used to replace the original backbone network, reducing computational complexity. Furthermore, the GAM attention mechanism was added to the neck network to effectively improve the network's ability to interact with global information. Finally, the WIoU loss function was used to mitigate harmful gradients caused by low-quality samples, improving the model's convergence speed. Experiments conducted on the RESIDE and BDD100k datasets validate the proposed algorithm. Results show that the structural similarity between dehazed and original images is 85%. Compared to the original CycleGAN algorithm and the AODNet algorithm, the proposed approach improves the peak signal-to-noise ratio (PSNR) by 2.24 dB and 2.5 dB, respectively, and the structural similarity index (SSIM) by 15.4 and 36.3 percentage points, respectively. Additionally, the improved YOLOv8 algorithm demonstrates enhancements over the original algorithm, with precision, recall, and mean average precision (mAP) increasing by 2.5, 1.8, and 1.1 percentage points, respectively. The experimental results confirm that the proposed algorithm outperforms traditional algorithms in terms of recall and detection accuracy, demonstrating its practical value.
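The PSNR and SSIM figures reported above follow their standard definitions. The sketch below is a minimal NumPy illustration of both metrics, not the authors' evaluation code; it assumes 8-bit pixel range (max value 255) and computes a simplified *global* SSIM over a single window, so its values will differ slightly from windowed implementations such as scikit-image's.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, test, max_val=255.0):
    """Simplified SSIM computed over the whole image as one window."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1 = (0.01 * max_val) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

# Example: a uniform offset of 10 grey levels -> MSE = 100
a = np.zeros((32, 32))
b = a + 10.0
print(round(psnr(a, b), 2))  # 10*log10(255^2/100) ≈ 28.13
```

A perfect reconstruction gives SSIM = 1 and infinite PSNR, which is why dehazing results are reported as gains in dB and percentage points rather than absolute targets.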

Key words: intelligent vehicle, environmental perception, image dehazing, CycleGAN, object detection, YOLOv8
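As context for the WIoU loss mentioned in the abstract: WIoU reweights the standard IoU loss with a distance-based, dynamically focused penalty so that low-quality boxes contribute smaller gradients; the exact weighting is defined in the WIoU paper and is omitted here. The sketch below shows only the underlying IoU term it builds on, using a hypothetical (x1, y1, x2, y2) corner format for boxes.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# The base regression term is L_IoU = 1 - IoU; WIoU multiplies it by a
# focusing weight derived from the center distance and sample quality.
pred, gt = (0, 0, 2, 2), (1, 1, 3, 3)
print(iou(pred, gt))  # overlap 1, union 7 -> 1/7 ≈ 0.1429
```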

CLC number: