The Journal of Engine Research

Automatic intelligent inspection system for crankshaft grade detection based on machine vision and deep learning

Article type: Research Article


Authors

Alireza Yazdani Jo 1
Ashkan Moosavian 2
1 Department of Product Engineering, Irankhodro Powertrain Company (IPCo), Tehran, Iran
2 Department of Mechanical Engineering, National University of Skills (NUS), Tehran, Iran
Abstract

Matching the main bearings to the crankshaft grades is an important consideration in bearing installation tasks. If an operator is not careful, a mismatch significantly degrades the quality of the final assembled engine and introduces defects. Machine vision systems have the potential to implement autonomous error detection, which can significantly reduce inspection time and lead to more frequent, precise, and objective inspections. Herein, an inspection system was developed that is capable of automatically detecting crankshaft grades from crankshaft images. A specific lighting condition was designed to obtain proper images of the crankshafts. In this regard, an efficient diagnostic approach based on the semantic segmentation method was presented. Two different convolutional neural network (CNN) architectures, MobileNet and VGG19, were trained and evaluated. MobileNet proved to be the best compromise between accuracy, with an IoU-Score of 85%, and validation time, at 0.2 ms, for detecting the characters engraved on the crankshaft. According to the obtained results, the proposed approach could be used as an efficient, accurate, and fast tool for the automatic detection of crankshaft grades in bearing assembly stations.
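The reported IoU-Score of 85% measures the overlap between a predicted segmentation mask and the ground-truth annotation of the engraved characters. A minimal sketch of the metric is shown below (NumPy; the function name and the toy masks are illustrative, not taken from the paper):

```python
import numpy as np

def iou_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection-over-Union between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return float((intersection + eps) / (union + eps))

# Toy 4x4 masks: a predicted character region vs. its ground-truth annotation.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(round(iou_score(pred, truth), 2))  # intersection 3, union 4 -> 0.75
```

In multi-class semantic segmentation, this score is typically computed per character class and then averaged (mean IoU); the paper reports a single aggregate figure.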

Keywords

Machine Vision
Deep Learning
Engine Production Line
Crankshaft Grade
Automatic Inspection
Volume 71, Issue 4 - Serial Number 77
English Articles
Winter 1403
Pages 33-43

  • Received: 17 Khordad 1402
  • Revised: 27 Mordad 1402
  • Accepted: 16 Farvardin 1403