Journal of South China University of Technology (Natural Science Edition) ›› 2021, Vol. 49 ›› Issue (6): 66-76.doi: 10.12141/j.issn.1000-565X.200400

Special Issue: 2021 Computer Science & Technology

• Computer Science & Technology •

2D Footprint Classification Based on Multiple-Module Relation Network

ZHANG Yan, WU Luotian, WANG Nian, MENG Shulin, HU Feiran, LU Xilong²

  1. School of Electronics and Information Engineering, Anhui University, Hefei 230601, Anhui, China;
  2. Institute of Forensic Science, Ministry of Public Security, Beijing 100038, China
  • Received:2020-07-13 Revised:2021-01-12 Online:2021-06-25 Published:2021-06-01
  • Contact: WANG Nian (b. 1966), male, Ph.D., professor, whose research focuses on computer vision and pattern recognition. E-mail: wn_xlb@ahu.edu.cn
  • About author: ZHANG Yan (b. 1982), female, Ph.D., associate professor, whose research focuses on biological image analysis and processing. E-mail: zhangyan@ahu.edu.cn
  • Supported by:
    Supported by the National Key Research and Development Program of China(2018YFC0807302) and the National Natural Science Foundation of China(61772032)

Abstract: Footprint data are scarce and exhibit high inter-class similarity together with large intra-class variation, and no effective method has existed for representing and classifying them. To solve the bimodal footprint classification problem, a multiple-module relation network (MulRN) based on few-shot learning was proposed in this paper. Multiple modules were combined to strengthen both feature extraction and feature measurement. The Inception module and the MRFB module, which possess multi-branch structures, were used to improve feature extraction, while a Spatial Attention Module (SAM) and a Channel Attention Module (CAM) were adopted to extract highly discriminative footprint features for accurate classification. Experiments were carried out on the few-shot datasets miniImageNet and Omniglot and on a bimodal 2D footprint dataset. The experimental results show that the proposed method is effective on both the few-shot datasets and the bimodal 2D footprint dataset; notably, the 5-way 5-shot accuracy on the right-foot bimodal dataset reaches 95.41%.
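The pipeline the abstract describes — attention-refined embeddings of support and query images compared by a relation module in an N-way episode — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sigmoid gates stand in for the learned SAM/CAM blocks, and a cosine head stands in for the learned relation module; all function names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap):
    """CAM-style gate: weight each channel by a squashed global average."""
    w = sigmoid(fmap.mean(axis=(1, 2)))        # (C,)
    return fmap * w[:, None, None]

def spatial_attention(fmap):
    """SAM-style gate: weight each location by a squashed channel mean."""
    m = sigmoid(fmap.mean(axis=0))             # (H, W)
    return fmap * m[None, :, :]

def embed(fmap):
    """Attended embedding: channel then spatial attention, flattened."""
    return spatial_attention(channel_attention(fmap)).ravel()

def relation_score(support, query):
    """Map a (support, query) pair to a similarity in [0, 1].
    The paper learns this head; cosine similarity stands in here."""
    s, q = embed(support), embed(query)
    cos = s @ q / (np.linalg.norm(s) * np.linalg.norm(q) + 1e-12)
    return (cos + 1.0) / 2.0

def classify(supports, query):
    """N-way episode: index of the best-matching support class."""
    return int(np.argmax([relation_score(s, query) for s in supports]))
```

In a 5-way episode, `supports` would hold one (or an averaged) feature map per class; a query identical to a support embeds identically and therefore scores 1.0 against that class.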

Key words: few-shot learning, multiple-module relation network, 2D footprint classification, multi-branch module, attention module, feature extraction ability, feature measurement ability
