IJMLC 2019 Vol.9(3): 267-272 ISSN: 2010-3700
DOI: 10.18178/ijmlc.2019.9.3.797

Forget the Learning Rate, Decay Loss

Jiakai Wei

Abstract—In the typical optimization process for deep neural networks, the learning rate is the most important hyperparameter and strongly affects the final convergence. Its purpose is to control the step size and to gradually reduce the impact of noise on the network. In this paper, we instead keep the learning rate fixed and decay the loss to control the magnitude of each update. We verify this method on image classification, semantic segmentation, and GANs. Experiments show that the loss-decay strategy can greatly improve model performance.
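The idea described in the abstract is that shrinking the training loss over time shrinks the backpropagated gradients, and hence the update magnitude, without touching the optimizer's learning rate. Below is a minimal PyTorch-style sketch of that idea; the exponential schedule, the decay_rate value, and the synthetic data are illustrative assumptions, not the paper's exact formulation.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # Synthetic regression data so the sketch runs end to end (assumption).
    data = TensorDataset(torch.randn(256, 10), torch.randn(256, 1))
    loader = DataLoader(data, batch_size=32)

    model = nn.Linear(10, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # fixed learning rate
    criterion = nn.MSELoss()
    decay_rate = 0.97  # hypothetical per-epoch decay factor (assumption)

    for epoch in range(100):
        loss_scale = decay_rate ** epoch  # decay the loss, not the learning rate
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            (loss_scale * loss).backward()  # scaling the loss scales the gradient,
            optimizer.step()                # so the update magnitude decays over time

Note that for plain SGD, scaling the loss by a factor is mathematically equivalent to scaling the learning rate by the same factor; for adaptive optimizers such as Adam, which normalize gradient magnitudes, the two are not equivalent, which is where a loss-decay strategy can behave differently.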

Index Terms—Deep learning, optimization.

Jiakai Wei is with the Hunan University of Technology, China (e-mail: 16408400236@stu.hut.edu.cn).


Cite: Jiakai Wei, "Forget the Learning Rate, Decay Loss," International Journal of Machine Learning and Computing, vol. 9, no. 3, pp. 267-272, 2019.
