Abstract—In this paper, we propose vehicle detection and type classification in a real road environment using a modified and improved AlexNet. Among the challenges faced, the poor robustness of extracting vehicle candidate regions from a single feature is addressed with the YOLO family of deep learning detectors, which propose potential regions and further improve detection speed. To this end, the lightweight network Yolov2-tiny is chosen as the localization network. During training, anchor boxes are clustered on the ground truth of the training set, which improves performance on the specific dataset. The low classification accuracy obtained after template-based feature extraction is overcome by learning optimal feature descriptions with a convolutional neural network. Moreover, by adjusting the parameters of AlexNet, we propose an improved network whose model is smaller and whose classification is faster than the original AlexNet. Spatial Pyramid Pooling (SPP) is added to the vehicle classification network, which solves the problem of low accuracy caused by image distortion during resizing. Finally, combining the CNN with an SVM and normalizing the features fed to the SVM improves the generalization ability of the model. Experiments show that our method performs well in both vehicle detection and type classification.
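The following is a minimal sketch (not the authors' code) of the CNN-feature-plus-SVM step summarized above: feature vectors, assumed here to come from the modified AlexNet, are L2-normalized and passed to a linear SVM for vehicle-type classification. The feature dimensionality (4096), the number of classes, and the random stand-in features are illustrative assumptions.

```python
# Hypothetical sketch of normalized CNN features feeding a linear SVM,
# as described in the abstract. Random vectors stand in for AlexNet features.
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stand-in for CNN features of training images (N samples x 4096-dim vectors).
train_feats = rng.normal(size=(200, 4096))
train_labels = rng.integers(0, 3, size=200)  # e.g. 0=car, 1=bus, 2=truck (hypothetical classes)

# Normalizing the features before the SVM keeps every sample on the unit
# sphere, which is the step the abstract credits with better generalization.
train_feats = normalize(train_feats, norm="l2")

svm = LinearSVC(C=1.0)
svm.fit(train_feats, train_labels)

# The same normalization is applied to test features before prediction.
test_feats = normalize(rng.normal(size=(10, 4096)), norm="l2")
print(svm.predict(test_feats))
```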
Index Terms—Vehicle detection, vehicle classification, Yolov2-tiny, AlexNet, spatial pyramid pooling, CNN, SVM.
The authors are with the Faculty of Technology and Science, Department of Computer Science, Tokushima University, Japan (e-mail: karungaru@tokushima-u.ac.jp).
Cite: Stephen Karungaru, Lyu Dongyang, and Kenji Terada, "Vehicle Detection and Type Classification Based on CNN-SVM," International Journal of Machine Learning and Computing, vol. 11, no. 4, pp. 304-310, 2021.
Copyright © 2021 by the authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited (CC BY 4.0).