Convolutional Neural Networks with Dynamic Regularization

Yi Wang, Zhen Peng Bian, Junhui Hou, Lap Pui Chau

Research output: Journal article (academic research, peer-reviewed)

19 Citations (Scopus)


Regularization is commonly used to alleviate overfitting in machine learning. For convolutional neural networks (CNNs), regularization methods such as DropBlock and Shake-Shake have demonstrated improvements in generalization performance. However, these methods lack a self-adaptive ability throughout training: the regularization strength follows a predefined schedule, and manual adjustment is required to adapt it to different network architectures. In this article, we propose a dynamic regularization method for CNNs. Specifically, we model the regularization strength as a function of the training loss. According to the change of the training loss, our method dynamically adjusts the regularization strength during training, thereby balancing underfitting and overfitting. With dynamic regularization, a large-scale model is automatically regularized with strong perturbations, while a smaller model receives weaker regularization. Experimental results show that the proposed method improves the generalization capability of off-the-shelf network architectures and outperforms state-of-the-art regularization methods.
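The abstract describes making the regularization strength a function of the training loss, so that regularization grows as the loss falls (overfitting risk) and shrinks while the loss is still high (underfitting risk). The sketch below is a hypothetical illustration of that idea, not the authors' exact formulation; the function name, the linear mapping, and the smoothing window are all assumptions.

```python
# Hypothetical sketch of loss-adaptive regularization strength.
# NOT the paper's exact formula; it only illustrates mapping the
# (smoothed) training loss to a strength such as a DropBlock drop rate.

def dynamic_strength(loss_history, max_strength=0.3, smooth=5):
    """Map recent training losses to a strength in [0, max_strength].

    While the loss is still near its initial value (underfitting),
    the strength stays near 0; as the loss approaches 0 (overfitting
    risk), the strength approaches max_strength.
    """
    recent = loss_history[-smooth:]
    avg_loss = sum(recent) / len(recent)
    initial = loss_history[0]
    # Normalized progress: 0 at the start of training, -> 1 as loss -> 0.
    progress = max(0.0, 1.0 - avg_loss / initial) if initial > 0 else 0.0
    return max_strength * progress
```

In a training loop, the returned value could be fed each epoch into whatever perturbation the regularizer applies (e.g., the drop probability of a DropBlock layer), so a model whose loss drops quickly is automatically regularized more strongly.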

Original language: English
Article number: 9110754
Pages (from-to): 2299-2304
Number of pages: 6
Journal: IEEE Transactions on Neural Networks and Learning Systems
Issue number: 5
Publication status: Published - May 2021
Externally published: Yes


Keywords

  • Convolutional neural network (CNN)
  • generalization
  • image classification
  • overfitting
  • regularization

ASJC Scopus subject areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence


