Journal of South China University of Technology(Natural Science Edition) ›› 2022, Vol. 50 ›› Issue (6): 71-79,90.doi: 10.12141/j.issn.1000-565X.210404

Special Issue: Electronics, Communication and Automatic Control (2022)

• Electronics, Communication & Automation Technology •

Incremental learning based on neuron regularization and resource releasing

MO Jianwen  ZHU Yanqiao  YUAN Hua  LIN Leping  HUANG Shengyang   

  1. School of Information and Communication,Guilin University of Electronic Technology,Guilin 541004,Guangxi,China
  • Received:2021-06-21 Revised:2021-09-13 Online:2022-06-25 Published:2021-09-24
  • Contact: YUAN Hua (b. 1975), male, M.S., lecturer; research interests: pattern recognition and deep learning. E-mail: 16020158@qq.com
  • About author:莫建文 (1972-),男,博士,副教授,主要从事图像识别、人工智能研究
  • Supported by:
    Supported by the National Natural Science Foundation of China (62001133, 61967005) and the Guangxi Natural Science Foundation (2017GXNSFBA198212)

Abstract: Aiming at the catastrophic forgetting problem that arises when deep learning systems perform image classification in an incremental scenario, an incremental learning method based on neuron regularization and a resource releasing mechanism was proposed. The method is built on the Bayesian neural network framework. Firstly, the input weights are grouped by neuron, and the standard deviations of the weights within each group are constrained to a common value. Then, during training, each group of weights is regularized with a strength determined by its unified standard deviation. Finally, to improve the model's continual learning ability, a resource releasing mechanism was proposed: it maintains the model's learning capacity by guiding the model to selectively dilute the regularization strengths of some weights. Experiments on several common datasets show that the proposed method exploits the continual learning capacity of the model more effectively, and that a better model can be learned even under a fixed-capacity constraint.
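The abstract describes three steps: unify weight standard deviations per neuron group, regularize each group with a strength tied to its unified standard deviation, and release resources by diluting the regularization of some weights. The following NumPy sketch illustrates that idea under stated assumptions; the concrete choices (mean-pooling the per-group sigmas, a quadratic drift penalty with strength 1/σ², and releasing the most rigid fraction of neurons by inflating their sigma) are hypothetical and are not taken from the paper:

```python
import numpy as np

def neuron_group_std(sigma):
    """Unify weight standard deviations per neuron: each row (one neuron's
    incoming weights) is assigned a single shared sigma, here the row mean.
    The pooling rule is an assumption for illustration."""
    return np.repeat(sigma.mean(axis=1, keepdims=True), sigma.shape[1], axis=1)

def regularization_penalty(weights, old_weights, group_sigma, floor=1e-3):
    """Quadratic penalty on drift from the previous task's weights.
    Strength grows as the unified sigma shrinks, so 'confident' neurons
    (small sigma) are protected more strongly."""
    strength = 1.0 / (group_sigma ** 2 + floor)
    return float(np.sum(strength * (weights - old_weights) ** 2))

def release_resources(group_sigma, release_frac=0.2, scale=10.0):
    """Resource-releasing sketch: select the most rigid neurons (smallest
    unified sigma) and dilute their regularization by inflating sigma,
    restoring plasticity for future tasks. The selection rule and the
    inflation factor are hypothetical."""
    sigma = group_sigma.copy()
    per_neuron = sigma[:, 0]                       # one sigma per neuron group
    k = max(1, int(release_frac * len(per_neuron)))
    rigid = np.argsort(per_neuron)[:k]             # most rigid neurons
    sigma[rigid, :] *= scale                       # dilute their constraint
    return sigma

rng = np.random.default_rng(0)
sigma = neuron_group_std(rng.uniform(0.05, 0.5, size=(4, 3)))
w_old = rng.normal(size=(4, 3))
pen_before = regularization_penalty(w_old + 0.1, w_old, sigma)
pen_after = regularization_penalty(w_old + 0.1, w_old, release_resources(sigma))
```

After releasing resources, the penalty on the same weight drift is smaller, reflecting the recovered learning capacity the abstract attributes to the mechanism.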

Key words: compressed sensing, deep learning, multi-hypothesis prediction, adaptive hypothesis weight, multi-frame reference reconstruction
