Computer Science and Technology

Improved Stereo Matching Algorithm Based on PSMNet

  • Hubei Key Laboratory of Advanced Technology for Automotive Components∥Hubei Collaborative Innovation Center for Automotive Components Technology∥Hubei Research Center for New Energy & Intelligent Connected Vehicle, Wuhan University of Technology, Wuhan 430070, Hubei, China
LIU Jianguo (b. 1971), male, Ph.D., associate professor, whose research interests include machine vision and intelligent driving.

Received date: 2019-06-27

  Revised date: 2019-08-06

  Online published: 2019-12-01

Supported by

Supported by the National Natural Science Foundation of China (51975434), the Discipline Innovation and Talent Introduction Base for New Energy Vehicle Science and Key Technologies (B17034) and the Excellent Dissertation Cultivation Project for Graduate Students of Wuhan University of Technology (2018-YS-033)

How to cite this article

LIU Jianguo, FENG Yunjian, JI Guo, et al. Improved stereo matching algorithm based on PSMNet [J]. Journal of South China University of Technology (Natural Science Edition), 2020, 48(1): 60-69, 83. DOI: 10.12141/j.issn.1000-565X.190388

Abstract

Based on the PSMNet stereo matching network, an improved stereo matching algorithm with a shallow structure and a wide receptive field, named SWNet, was proposed to solve the stereo matching problem in binocular vision, reduce the number of parameters of the stereo matching network, lower the computational complexity of the algorithm, and improve its practicability. The shallow structure means fewer layers, fewer parameters and faster processing, while the wide receptive field means that the network can acquire and retain more spatial information. SWNet consists of three parts: feature extraction, 3D convolution and disparity regression. In the feature extraction part, an Atrous Spatial Pyramid Pooling (ASPP) module was introduced to extract multi-scale spatial feature information, and a feature fusion module was designed to fuse the features of different scales and build the matching cost volume. The 3D convolutional neural network uses a stacked encoder-decoder structure to further regularize the matching cost volume and obtain the correspondence between feature points under different disparity conditions. Finally, the disparity map was obtained by regression. SWNet performed well on both the SceneFlow and KITTI 2015 public datasets; compared with the reference algorithm PSMNet, the number of parameters was reduced by 48.9% and the mismatching rate was only 2.24%.
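
The abstract describes three stages: ASPP-based multi-scale feature extraction feeding a matching cost volume, 3D convolutions with a stacked encoder-decoder for cost regularization, and disparity regression. The sketch below illustrates two of these ideas in PyTorch: an ASPP-style block built from parallel dilated convolutions, and soft-argmin disparity regression over a cost volume. All module names, channel sizes and dilation rates are illustrative assumptions, not the exact SWNet configuration from the paper.

# Illustrative sketch only; channel sizes, dilation rates and module names
# are assumptions for demonstration, not the authors' SWNet implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ASPPBlock(nn.Module):
    """Parallel dilated convolutions capture context at several scales."""

    def __init__(self, in_ch=32, out_ch=32, dilations=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # Fuse the concatenated multi-scale features back to out_ch channels.
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, 1, bias=False)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


def soft_argmin_disparity(cost, max_disp):
    """Regress a sub-pixel disparity map from a cost volume.

    cost: tensor of shape (N, max_disp, H, W); lower cost = better match.
    Returns a disparity map of shape (N, H, W).
    """
    prob = F.softmax(-cost, dim=1)                   # convert cost to probability
    disp_values = torch.arange(max_disp, dtype=cost.dtype,
                               device=cost.device).view(1, max_disp, 1, 1)
    return torch.sum(prob * disp_values, dim=1)      # expectation over disparities


if __name__ == "__main__":
    feats = ASPPBlock()(torch.randn(1, 32, 64, 128))               # multi-scale features
    disp = soft_argmin_disparity(torch.randn(1, 48, 64, 128), 48)  # regressed disparities
    print(feats.shape, disp.shape)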