Journal of South China University of Technology (Natural Science Edition) ›› 2021, Vol. 49 ›› Issue (6): 77-87,99.doi: 10.12141/j.issn.1000-565X.200430

Special Issue: 2021 Computer Science & Technology

• Computer Science & Technology •

Stereo Matching Network Based on Multi-Stage Fusion and Recurrent Aggregation

ZHANG Ruifeng, REN Guoming, LI Qiang, DUAN Ziyang

  1. School of Microelectronics, Tianjin University, Tianjin 300072, China; 2. The 53rd Research Institute of China
     Electronics Technology Group Corporation, Tianjin 300300, China
  • Received:2020-07-24 Revised:2020-11-16 Online:2021-06-25 Published:2021-06-01
  • Contact: ZHANG Ruifeng (born 1974), male, Ph.D., associate professor, mainly engaged in machine vision research. E-mail: zhangruifeng@tju.edu.cn
  • About author: ZHANG Ruifeng (born 1974), male, Ph.D., associate professor, mainly engaged in machine vision research.
  • Supported by:
    Supported by the National Natural Science Foundation of China(61471263) and the Tianjin Municipal Natural Science Foundation(16JCZDJC31100)

Abstract: Aiming at the poor matching performance in ill-conditioned regions and the excessive parameter counts of deep-learning-based stereo matching networks, an end-to-end stereo matching network based on multi-level feature fusion and recurrent cost aggregation (MFRANet) was proposed. Firstly, in order to exploit both the low-level detail information and the high-level semantic information of the image, a multi-stage feature fusion module was proposed, which uses a staged, step-by-step fusion strategy to effectively combine multi-level and multi-scale features. Secondly, a recurrent mechanism was introduced in the cost aggregation stage to optimize the matching cost volume iteratively, improving the aggregation effect while avoiding the introduction of too many parameters. Finally, a disparity computation module based on the Soft Argmin algorithm was used to regress the image disparity. The network was trained and tested on the KITTI 2012/2015 and SceneFlow public datasets, and a comparative study with other end-to-end stereo matching networks was carried out. Experimental results show that, on the SceneFlow and KITTI 2015 datasets, MFRANet achieves more accurate matching results than other end-to-end stereo matching networks: on the SceneFlow dataset, the end-point error is reduced to 0.92 pixels, and on the KITTI 2015 dataset, the mismatching rate is reduced to 2.21%.
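The Soft Argmin operation mentioned in the abstract (introduced in GC-Net) is a standard differentiable disparity-regression step: each candidate disparity d is weighted by the softmax of its negated matching cost, yielding a sub-pixel estimate. A minimal NumPy sketch of this formula (the axis layout and toy cost values are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def soft_argmin(cost_volume, axis=0):
    """Differentiable disparity regression:
    disparity = sum_d d * softmax(-cost_d) along the disparity axis.
    Low-cost disparities get high softmax weight, so the weighted
    mean lands near (but can interpolate between) the minimum."""
    c = -np.asarray(cost_volume, dtype=np.float64)
    c -= c.max(axis=axis, keepdims=True)          # numerical stability
    p = np.exp(c)
    p /= p.sum(axis=axis, keepdims=True)          # softmax over disparities
    d = np.arange(p.shape[axis], dtype=np.float64)
    shape = [1] * p.ndim
    shape[axis] = -1                               # broadcast d along axis
    return (p * d.reshape(shape)).sum(axis=axis)

# Toy cost curve for a single pixel, minimum at disparity 3:
costs = np.array([5.0, 4.0, 2.0, 0.5, 2.0, 4.0])
print(soft_argmin(costs))  # sub-pixel value close to 3
```

Because every step is differentiable, gradients flow through the regression back into the cost volume, which is why end-to-end networks like the one described here prefer it over a hard argmin.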

Key words: end-to-end stereo matching network, multi-stage feature fusion, recurrent cost aggregation, end-point error, mismatching rate
