华南理工大学学报(自然科学版) ›› 2025, Vol. 53 ›› Issue (1): 74-83.doi: 10.12141/j.issn.1000-565X.240172

• 电子、通信与自动控制 •

4D毫米波雷达点云与视觉图像自动外参配准方法

毕欣1, 翁才恩1, 王瑜2, 胡再刚3   

  1.同济大学 汽车学院,上海 201804
    2.成都汽车产业研究院,四川 成都 610100
    3.成都天软信息技术有限公司,四川 成都 610041
  • 收稿日期:2024-04-10 出版日期:2025-01-25 发布日期:2025-01-02
  • 通信作者: 翁才恩(1990—),男,博士生,主要从事自动驾驶融合感知技术研究。 E-mail:wengcaien@tongji.edu.cn
  • 作者简介:毕欣(1980—),男,博士,研究员,主要从事自动驾驶感知融合与测评技术研究。E-mail: bixin@tongji.edu.cn
  • 基金资助:
    国家重点研发计划项目(2022YFE0117100);成都市重大科技创新项目(2021-YF08-00140-GX);广东省基础与应用基础研究项目(2021B1515120032)

An Autonomous Extrinsic Calibration Method for 4D Millimeter-Wave Radar Point Clouds and Visual Images

BI Xin1, WENG Caien1, WANG Yu2, HU Zaigang3   

  1.School of Automotive Studies, Tongji University, Shanghai 201804, China
    2.Chengdu Automobile Industry Academy, Chengdu 610100, Sichuan, China
    3.Chengdu Skysoft Information Technology Co., Ltd., Chengdu 610041, Sichuan, China
  • Received:2024-04-10 Online:2025-01-25 Published:2025-01-02
  • Contact: WENG Caien (b. 1990), male, Ph.D. candidate, whose research focuses on fusion perception for autonomous driving. E-mail: wengcaien@tongji.edu.cn
  • About author: BI Xin (b. 1980), male, Ph.D., researcher, whose research focuses on autonomous driving perception fusion and evaluation technology. E-mail: bixin@tongji.edu.cn
  • Supported by:
    the National Key R&D Program of China (2022YFE0117100); the Major Science and Technology Innovation Project of Chengdu City (2021-YF08-00140-GX); the Basic and Applied Basic Research Foundation of Guangdong Province (2021B1515120032)

摘要:

随着自动驾驶技术的快速发展,环境感知系统对多传感器融合的需求日益增加。4维(4D)毫米波雷达因其在复杂天气和光照条件下的稳定性能而成为自动驾驶领域的重要传感器之一。尽管4D毫米波雷达通过增加俯仰角信息和提高点云密度改善了目标检测的精度,但其点云稀疏性和噪声问题限制了其独立应用。因此,4D毫米波雷达与视觉传感器的融合成为提升自动驾驶感知精度的关键。然而,传统的外参配准方法依赖繁琐的手动操作,难以满足高效自动化配准的需求。为解决这一问题,该文提出了一种基于标定板的4D毫米波雷达与视觉图像自动外参配准方法。该方法首先设计了包含ChArUco标记、红色圆环和角反射器的标定板,然后通过圆检测算法和角反射器检测算法自动提取配准点的图像坐标和雷达点云坐标。此外,还提出了一种通过3D Max和Unity仿真的方式实现配准数据采集与验证的方法。最后,通过实验比较直接线性变换(DLT)和外参配准(EC)2种方法的性能,评估配准精度。结果表明,所设计的标定板和自动配准算法能够有效地减少人工操作,并且在配准点个数较多时,EC方法具有更高的配准稳定性和配准精度。

关键词: 多传感器配准, 4D毫米波雷达, 视觉, 自动驾驶, 外参配准

Abstract:

With the rapid development of autonomous driving technology, the demand for multi-sensor fusion in environmental perception systems is increasing. Four-dimensional (4D) millimeter-wave radar has become one of the critical sensors for autonomous driving because of its stable performance under complex weather and lighting conditions. Although 4D millimeter-wave radar improves object detection accuracy by adding elevation information and increasing point cloud density, its sparse and noisy point clouds limit its stand-alone application. The fusion of 4D millimeter-wave radar with vision sensors has therefore become key to enhancing perception accuracy in autonomous driving. However, traditional extrinsic calibration methods rely on cumbersome manual operations and cannot meet the demand for efficient, automated calibration. To address this issue, this study proposes an automated extrinsic calibration method for 4D millimeter-wave radar and visual images based on a calibration board. The method first designs a calibration board carrying ChArUco markers, red circular rings, and corner reflectors, and then automatically extracts the image coordinates and radar point-cloud coordinates of the calibration points using a circle detection algorithm and a corner reflector detection algorithm. In addition, a simulation pipeline based on 3ds Max and Unity is proposed for calibration data acquisition and validation. Finally, the direct linear transformation (DLT) and extrinsic calibration (EC) methods are compared experimentally to evaluate calibration accuracy. The results show that the designed calibration board and the automated calibration algorithm effectively reduce manual operations, and that the EC method achieves higher calibration stability and accuracy as the number of calibration points increases.
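The DLT baseline named in the abstract maps 3D radar points to image pixels through a 3×4 projection matrix estimated from point correspondences. The following is a minimal numerical sketch of that standard technique, not the paper's implementation; the synthetic intrinsics `K` and pose are illustrative assumptions used only to generate test correspondences.

```python
import numpy as np

def dlt_projection(pts3d, pts2d):
    """Estimate a 3x4 projection matrix P by direct linear transformation:
    each correspondence (X, x) contributes two rows to a homogeneous
    system A p = 0, solved via the SVD null-space direction."""
    assert len(pts3d) == len(pts2d) >= 6
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 4)  # right singular vector of smallest sigma

def project(P, pts3d):
    """Apply P to homogeneous 3D points and dehomogenize to pixels."""
    Xh = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    xh = (P @ Xh.T).T
    return xh[:, :2] / xh[:, 2:3]

# Synthetic check: build an assumed ground-truth camera, project
# noise-free 3D points, and confirm DLT recovers the same mapping.
rng = np.random.default_rng(0)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
Rt = np.hstack([np.eye(3), np.array([[0.1], [-0.2], [2.0]])])  # assumed pose
P_true = K @ Rt
pts3d = rng.uniform(-1, 1, (12, 3))
pts2d = project(P_true, pts3d)
P_est = dlt_projection(pts3d, pts2d)
err = np.abs(project(P_est, pts3d) - pts2d).max()
print(f"max reprojection error: {err:.2e} px")
```

With noise-free correspondences the reprojection error is at numerical precision; with real detections, the paper's comparison of DLT against the EC method measures exactly this kind of residual as the number of calibration points varies.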

Key words: multi-sensor calibration, 4D millimeter-wave radar, vision, autonomous driving, extrinsic calibration
