
2026, Vol. 18, Issue 1
31 March 2026. pp. 29-46
References
1

Alanazi, F., 2023, “Electric Vehicles: Benefits, Challenges, and Potential Solutions for Widespread Adaptation,” Applied Sciences, Vol. 13, No. 10, 6016.

10.3390/app13106016
2

Thirunavukkarasu, S., Karthick, K., Aruna, S. K., Manikandan, R., and Safran, M., 2024, “Optimized Fault Classification in Electric Vehicle Drive Motors Using Advanced Machine Learning and Data Transformation Techniques,” Processes, Vol. 12, No. 12, 2648.

10.3390/pr12122648
3

He, H., Zhou, N., Guo, J., Zhang, Z., Lu, B., and Sun, C., 2018, “Tolerance analysis of electrified vehicles on the motor demagnetization fault: From an energy perspective,” Applied Energy, Vol. 227, pp. 239~248.

10.1016/j.apenergy.2017.08.226
4

Yu, H., and Liu, Z., 2012, “Fault Analysis and Fault-Tolerant Control of Electric Motor Drive System in HEV,” 2012 Fifth International Conference on Intelligent Computation Technology and Automation, Zhangjiajie, China, 2012, pp. 177~180.

10.1109/ICICTA.2012.51
5

Kumar, P., Prince, Sinha, A. K., and Kim, H., 2024, “Electric Vehicle Motor Fault Detection with Improved Recurrent 1D Convolutional Neural Network,” Mathematics, Vol. 12, No. 19, 3012.

10.3390/math12193012
6

Buhrmester, V., Münch, D., and Arens, M., 2019, “Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey,” arXiv preprint, arXiv:1911.12116.

7

Ali, S., Abuhmed, T., El-Sappagh, S., Muhammad, K., Alonso-Moral, J. M., Confalonieri, R., Guidotti, R., Ser, J. D., Diaz-Rodriguez, N., and Herrera, F., 2023, “Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence,” Information Fusion, Vol. 99, 101805.

10.1016/j.inffus.2023.101805
8

Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D., 2017, “Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization,” 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017, pp. 618~626.

10.1109/ICCV.2017.74
9

Zhang, J., Huang, J., Jin, S., and Lu, S., 2024, “Vision-Language Models for Vision Tasks: A Survey,” arXiv preprint, arXiv:2304.00685.

10.1109/TPAMI.2024.3369699
10

Cheng, K., Li, Y., Xu, F., Zhang, J., Zhou, H., and Liu, Y., 2025, “Vision-Language Models Can Self-Improve Reasoning via Reflection,” Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies, Albuquerque, New Mexico, 2025, pp. 8876~8892.

10.18653/v1/2025.naacl-long.447
11

He, K., Zhang, X., Ren, S., and Sun, J., 2016, “Deep Residual Learning for Image Recognition,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, Nevada, USA, 2016, pp. 770~778.

10.1109/CVPR.2016.90
12

Owens, F. J. and Murphy, M. S., 1988, “A short-time Fourier transform,” Signal Processing, Vol. 14, No. 1, pp. 3~10.

10.1016/0165-1684(88)90040-0
13

Junior, R. F. R., Areias, I. A., Campos, M. M., Teixeira, C. E., Silva, L. E. B., and Gomes, G. F., 2022, “Fault Detection and Diagnosis in Electric Motors Using Convolution Neural Network and Short-Time Fourier Transform,” Journal of Vibration Engineering & Technologies, Vol. 10, pp. 2531~2542.

10.1007/s42417-022-00501-3
14

Piedad, E., Mayordo, Z. G., Prieto-Araujo, E., and Gomis-Bellmunt, O., 2024, “Deep Learning-Based Machine Condition Diagnosis Using Short-Time Fourier Transformation Variants,” 2024 International Conference on Diagnostics in Electrical Engineering, Pilsen, Czech Republic, 2024, pp. 1~4.

10.1109/Diagnostika61830.2024.10693710
15

Ertargin, M., Yildirim, O., Orhan, A., 2024, “Classifying Induction Motor Faults Using Spectrogram Images with Deep Transfer Learning,” Proceedings of the 10th World Congress on Electrical Engineering and Computer Systems and Sciences (EECSS'24), Barcelona, Spain, 2024, pp. 113-1~113-7.

16

Ribeiro, M. T., Singh, S., and Guestrin, C., 2016, “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier,” Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16), San Francisco, California, USA, 2016, pp. 1135~1144.

10.1145/2939672.2939778
17

Mey, O. and Neufeld, D., 2022, “Explainable AI Algorithms for Vibration Data-Based Fault Detection: Use Case-Adapted Methods and Critical Evaluation,” Sensors, Vol. 22, No. 23, 9037.

10.3390/s22239037
18

Brito, L. C., Susto, G. A., Brito, J. N., and Duarte, M. A. V., 2022, “Fault Diagnosis using eXplainable AI: a Transfer Learning-based Approach for Rotating Machinery exploiting Augmented Synthetic Data,” arXiv preprint, arXiv:2210.02974.

10.1016/j.eswa.2023.120860
19

Xu, G., Jin, P., Li, H., Song, Y., Sun, L., and Yuan, L., 2024, “LLaVA-CoT: Let Vision Language Models Reason Step-by-Step,” arXiv preprint, arXiv:2411.10440.

20

Zhang, R., Zhang, B., Li, Y., Zhang, H., Sun, Z., Gan, Z., Yang, Y., Pang, R., and Yang, Y., 2024, “Improve Vision Language Model Chain-of-thought Reasoning,” arXiv preprint, arXiv:2410.16198.

10.18653/v1/2025.acl-long.82
21

Chen, B., Xu, Z., Kirmani, S., Ichter, B., Driess, D., Florence, P., Sadigh, D., Guibas, L., and Xia, F., 2024, “SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities,” arXiv preprint, arXiv:2401.12168.

10.1109/CVPR52733.2024.01370
22

Rajabi, N. and Kosecka, J., 2024, “Q-GroundCAM: Quantifying Grounding in Vision Language Models via GradCAM,” arXiv preprint, arXiv:2404.19128.

23

Lee, J. and Rew, J., 2025, “Vision-Language Model-Based Local Interpretable Model-Agnostic Explanations Analysis for Explainable In-Vehicle Controller Area Network Intrusion Detection,” Sensors, Vol. 25, No. 10, 3020.

10.3390/s25103020
24

Zhuang, F., Qi, Z., Duan, K., Xi, D., Zhu, Y., Zhu, H., Xiong, H., and He, Q., 2019, “A Comprehensive Survey on Transfer Learning,” arXiv preprint, arXiv:1911.02685.

25

Deng, J., Dong, W., Socher, R., Li, L., Li, K., and Fei-Fei, L., 2009, “ImageNet: A large-scale hierarchical image database,” 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, Florida, USA, pp. 248~255.

10.1109/CVPR.2009.5206848
26

Cai, Z. and Peng, C., 2021, “A study on training fine-tuning of convolutional neural networks,” 2021 13th International Conference on Knowledge and Smart Technology (KST), Bangsaen, Chonburi, Thailand, pp. 84~89.

10.1109/KST51265.2021.9415793
27

National Information Society Agency, 2023, “Fault diagnosis data for autonomous driving,” AI Hub. [Online] Available: https://aihub.or.kr/aihubdata/data/view.do?dataSetSn=71347

28

Huang, G., Liu, Z., Maaten, L. V. D., and Weinberger, K. Q., 2017, “Densely Connected Convolutional Networks,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, pp. 2261~2269.

10.1109/CVPR.2017.243
29

Iandola, F. N., Han, S., Moskewicz, M. W., Ashraf, K., Dally, W. J., and Keutzer, K., 2016, “SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size,” arXiv preprint, arXiv:1602.07360.

30

Tan, M., Chen, B., Pang, R., Vasudevan, V., Sandler, M., and Howard, A., 2019, “MnasNet: Platform-Aware Neural Architecture Search for Mobile,” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, pp. 2815~2823.

10.1109/CVPR.2019.00293
31

Howard, A., Sandler, M., Chu, G., Chen, L., Chen, B., Tan, M., Wang, W., Zhu, Y., Pang, R., Vasudevan, V., Le, Q. V., and Adam, H., 2019, “Searching for MobileNetV3,” 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, pp. 1314~1324.

10.1109/ICCV.2019.00140
32

Ma, N., Zhang, X., Zheng, H., and Sun, J., 2018, “ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design,” Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, pp. 116~131.

10.1007/978-3-030-01264-9_8
33

Simonyan, K. and Zisserman, A., 2014, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” arXiv preprint, arXiv:1409.1556.

34

Radosavovic, I., Kosaraju, R. P., Girshick, R., He, K., and Dollar, P., 2020, “Designing Network Design Spaces,” 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, pp. 10425~10433.

10.1109/CVPR42600.2020.01044
35

PyTorch Team, 2025, “torchvision,” [Online] Available: https://docs.pytorch.org/vision/stable/index.html

36

OpenAI, 2024, “GPT-4o System Card,” arXiv preprint, arXiv:2410.21276.

37

Opitz, J., 2024, “A Closer Look at Classification Evaluation Metrics and a Critical Reflection of Common Evaluation Practice,” arXiv preprint, arXiv:2404.16958.

10.1162/tacl_a_00675
38

Jiang, S., 2024, “Vehicle E/E Architecture and Key Technologies Enabling Software-Defined Vehicle,” SAE Technical Paper, No. 2024-01-2035.

10.4271/2024-01-2035
39

Hasan, S. and Irgens, P., 2025, “Electronic Control Unit Hardware Design Challenges for Software Defined Vehicle,” SAE Technical Paper, No. 2025-01-8136.

10.4271/2025-01-8136
40

Nissan Motor Corporation, 2024, “Electric Vehicle Powertrain (3-in-1),” Nissan Motor Corporation Global Website.

41

Xu, L., Teoh, S. S., and Ibrahim, H., 2024, “A Deep Learning Approach for Electric Motor Fault Diagnosis Based on Modified InceptionV3,” Scientific Reports, Vol. 14, 12344.

10.1038/s41598-024-63086-9
42

An, K., Lu, J., Wang, L., Wang, Y., Chen, G., and Wu, J., 2023, “Edge Solution for Real-Time Motor Fault Diagnosis Based on Efficient CNN,” IEEE Transactions on Instrumentation and Measurement, Vol. 72, 3516912.

10.1109/TIM.2023.3276513
43

Wang, C., Kao, I., and Perng, J., 2017, “Fault Diagnosis and Fault Frequency Determination of PM Synchronous Motor Based on Deep Learning,” Sensors and Materials, Vol. 29, No. 10, pp. 1457~1476.

44

Yang, H., Kim, J., and Park, S., 2024, “Motor Fault Diagnosis Using Attention-Based Multisensor Feature Fusion,” Energies, Vol. 17, No. 16, 4053.

10.3390/en17164053
Information
  • Publisher: Korean Auto-vehicle Safety Association
  • Publisher(Ko): 한국자동차모빌리티안전학회
  • Journal Title: Journal of Auto-vehicle Safety Association
  • Journal Title(Ko): 자동차안전학회지
  • Volume: 18
  • No.: 1
  • Pages: 29-46
  • Received Date: 2025-09-11
  • Revised Date: 2025-12-30
  • Accepted Date: 2026-03-18