Abstract—Binary neural networks (BNNs) have drawn much attention recently because they are among the most promising techniques for meeting the desired requirements on memory footprint and inference speed. However, the poor representational capacity inherent in their only two possible values (-1 and +1), together with a training process that involves quantization, are important factors that cause BNNs to suffer from severe intrinsic instability of error convergence, which leads to an increase in the standard deviation of the error and to high prediction errors. In this work, a new training procedure with relaxed quantization on both weights and activations is proposed to address the above issues without incurring any excessive costs. To the best of our knowledge, this is the first work to show experimentally that relaxed quantization on both weights and activations can reduce the error and its standard deviation by 1.71% and 13.08%, respectively, on CIFAR-10 with a ResNet-20 network, compared to the conventional BNN baseline. To further compare the proposed method with the conventional one, additional experimental results on CIFAR-10/100 with an 8-layer ResNet-like network and ResNet-32 are also provided.
Index Terms—Binary neural network, quantization algorithm, machine learning, relaxation.
J. Xi is with the Department of Computer Science and Engineering, Fukuoka Institute of Technology, Fukuoka, Japan.
H. Yamauchi is with the Department of Computer Science and Engineering, Fukuoka Institute of Technology, Fukuoka, Japan.
Cite: Jiazhen Xi and Hiroyuki Yamauchi, "Relaxed Training Procedure for a Binary Neural Network," International Journal of Machine Learning, vol. 13, no. 1, pp. 7-12, 2023. Copyright © 2023 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.