Detailed Record

Author (Chinese): 鄭至凱
Author (English): Cheng, Chih-Kai
Title (Chinese): 基於強化學習建立多光源之打光策略以檢測多型態缺陷面積
Title (English): Formulation of Multi-Light Source Lighting Strategy Based on Reinforcement Learning to Detect Multi-Type Defect Area
Advisor (Chinese): 蔡宏營
Advisor (English): Tsai, Hung-Yin
Committee Members (Chinese): 丁川康、徐秋田
Committee Members (English): Ting, Chuan-Kang; Hsu, Chin-Tien
Degree: Master's
University: National Tsing Hua University
Department: Department of Power Mechanical Engineering
Student ID: 106033607
Year of Publication (ROC calendar): 108 (2019)
Academic Year of Graduation: 107
Language: Chinese
Number of Pages: 117
Keywords (Chinese): 自動化光學檢測、多光源打光策略、影像處理、強化學習
Keywords (English): Automatic optical inspection, multi-light-source lighting strategy, image processing, reinforcement learning
Abstract (Chinese):
This study proposes establishing an appropriate lighting strategy in a multi-light-source environment and verifies it on multiple types of defects on glass cover lids. By combining image processing with reinforcement learning, the lighting strategy is learned by inspecting a set of designed defect templates; the learned multi-light-source lighting strategy is then expected to detect the largest number of defects in actual defect inspection in the most efficient and reliable way.
To reach this goal, this study integrates four parts: fabrication of the test specimens, construction of the lighting environment, image processing, and reinforcement learning. In the first part, multiple types of defects with specific patterns are fabricated on glass cover lids. In the second part, the control system of the lighting environment is written in C++: different optical codes are sent over RS-232 to adjust the intensities of the 30-degree low-angle, 60-degree low-angle, and external coaxial light sources, and the library provided with the iDS industrial camera is used to set the camera parameters and capture defect images. In the third part, the defect images are processed with the OpenCV library, including image calibration, defect extraction, and computation of noise metrics. In the fourth part, the discretized lighting parameters are learned and tuned with the Actor-Critic architecture of model-free reinforcement learning.
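As a rough illustration of the control layer described above, the sketch below pushes intensity codes for the three light sources over a Windows COM port from C++. The COM-port name, the 9600-8N1 serial settings, and the "C<channel><value>" command frame are assumptions made for this example; the controller's actual optical codes and the iDS camera calls are not reproduced in this record.

```cpp
// Minimal sketch: send intensity codes to the three light sources over RS-232.
// The COM port, baud rate, and "Cn<value>\r\n" frame are assumptions, not the real protocol.
#include <windows.h>
#include <cstdio>

static bool sendIntensity(HANDLE port, int channel, int level) {
    // channel: 0 = 30-deg low angle, 1 = 60-deg low angle, 2 = external coaxial (assumed order)
    char cmd[32];
    int n = std::snprintf(cmd, sizeof(cmd), "C%d%03d\r\n", channel, level);  // hypothetical frame
    DWORD written = 0;
    return WriteFile(port, cmd, static_cast<DWORD>(n), &written, nullptr)
           && written == static_cast<DWORD>(n);
}

int main() {
    HANDLE port = CreateFileA("\\\\.\\COM1", GENERIC_READ | GENERIC_WRITE,
                              0, nullptr, OPEN_EXISTING, 0, nullptr);
    if (port == INVALID_HANDLE_VALUE) { std::puts("cannot open COM1"); return 1; }

    DCB dcb = {};
    dcb.DCBlength = sizeof(dcb);
    GetCommState(port, &dcb);
    dcb.BaudRate = CBR_9600;       // assumed serial settings: 9600 8N1
    dcb.ByteSize = 8;
    dcb.Parity   = NOPARITY;
    dcb.StopBits = ONESTOPBIT;
    SetCommState(port, &dcb);

    // One lighting "action": set all three channels; the camera capture through the
    // iDS SDK would follow here and is deliberately omitted.
    int levels[3] = {128, 64, 200};          // example intensity levels in 0-255
    for (int ch = 0; ch < 3; ++ch)
        sendIntensity(port, ch, levels[ch]);

    CloseHandle(port);
    return 0;
}
```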
The minimum detectable feature size of the multi-light-source inspection environment built in this study is 20 μm. During learning, each issued action takes 2.6 seconds on average from execution to the final reward assignment, and an appropriate lighting strategy is obtained after 40 to 50 minutes of training. The defect area extracted with the trained lighting strategy is 37% larger than that extracted with the pre-training lighting parameters, verifying that the lighting strategy effectively improves the capability of the defect inspection system.
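For the image-processing step, a minimal OpenCV sketch of defect extraction and defect-area measurement (the quantity the lighting strategy tries to increase) might look as follows; the Otsu binarization, the 3x3 opening, and the 20-pixel minimum blob size are illustrative assumptions rather than the parameters used in the thesis.

```cpp
// Minimal OpenCV sketch: extract defect blobs from a captured image and sum their area.
// Threshold method, kernel size, and minimum blob size are assumptions for illustration.
#include <opencv2/opencv.hpp>
#include <cstdio>

int main(int argc, char** argv) {
    if (argc < 2) { std::puts("usage: defect_area <image>"); return 1; }

    cv::Mat gray = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);
    if (gray.empty()) { std::puts("cannot read image"); return 1; }

    // 1. Light denoising, then binarization with Otsu's method.
    //    (Polarity depends on whether defects appear bright or dark under the lighting.)
    cv::Mat blurred, bin;
    cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 0);
    cv::threshold(blurred, bin, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    // 2. Morphological opening to suppress isolated noise pixels.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
    cv::morphologyEx(bin, bin, cv::MORPH_OPEN, kernel);

    // 3. Connected-component labeling; keep blobs above a minimum size and sum
    //    their areas as the extracted defect area.
    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(bin, labels, stats, centroids, 8);
    long long defectArea = 0;
    for (int i = 1; i < n; ++i) {                       // label 0 is the background
        int area = stats.at<int>(i, cv::CC_STAT_AREA);
        if (area >= 20) defectArea += area;             // assumed minimum blob size
    }
    std::printf("total defect area: %lld px\n", defectArea);
    return 0;
}
```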
Abstract (English):
This work proposes a formulation of a lighting strategy in a multi-light-source environment, verified on glass-lid defects. By integrating image processing, reinforcement learning, and a designed defect template, the inspection system learns the lighting strategy. The learned multi-light-source lighting strategy is expected to detect multiple defects efficiently in practical inspection.
This work integrates specimen fabrication, lighting-environment setup, image processing, and reinforcement learning to achieve the above goal. First, a variety of defects with specific patterns were fabricated on the glass substrate. Second, a control system for the lighting environment was written in C++, in which the 30-degree low-angle, 60-degree low-angle, and external coaxial light sources are adjusted over RS-232 with specific lighting codes, and the library provided by iDS is used to control the camera parameters and capture the defect images. Third, the OpenCV library is used for image processing, including image calibration, defect extraction, and calculation of noise metrics. Finally, model-free reinforcement learning based on the Actor-Critic architecture is used to learn the discretized lighting strategy.
At present, the minimum detectable feature size of the inspection system is about 20 μm. Each action takes 2.6 seconds on average from execution to reward assignment, and a suitable lighting strategy is obtained after 40 to 50 minutes of training. The trained lighting strategy increases the extracted defect area by 37% compared with the untrained lighting parameters, showing that the learned strategy can indeed improve the capability of the inspection system.
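To make the reinforcement-learning step concrete, the fragment below sketches one plausible way a DDPG-style actor output could be discretized into light-intensity levels, and how a reward could be shaped from the measured defect area and a noise metric. The number of levels, the weights, and the helper names are assumptions for illustration; the thesis's actual networks and reward function are not shown in this record.

```cpp
// Sketch of action discretization and reward shaping around an actor-critic agent.
// The 16 intensity levels, the reward weights, and the helper names are assumptions;
// the DDPG actor/critic networks and their update rules are omitted.
#include <array>
#include <algorithm>
#include <cmath>
#include <cstdio>

// Map the actor's continuous outputs in [-1, 1] (one per light source)
// onto discrete intensity levels spread over 0-255.
std::array<int, 3> discretizeAction(const std::array<double, 3>& actorOut) {
    constexpr int kLevels = 16;                       // assumed number of discrete steps
    std::array<int, 3> intensity{};
    for (std::size_t i = 0; i < actorOut.size(); ++i) {
        double clipped = std::clamp(actorOut[i], -1.0, 1.0);
        int step = static_cast<int>(std::round((clipped + 1.0) / 2.0 * (kLevels - 1)));
        intensity[i] = step * 255 / (kLevels - 1);    // e.g. step 0 -> 0, step 15 -> 255
    }
    return intensity;
}

// Reward: growth of the extracted defect area, penalized by the image-noise metric.
// Both weights are placeholders, not the thesis's actual reward definition.
double reward(double defectArea, double prevDefectArea, double noiseMetric) {
    const double kAreaWeight  = 1.0;
    const double kNoiseWeight = 0.5;
    return kAreaWeight * (defectArea - prevDefectArea) - kNoiseWeight * noiseMetric;
}

int main() {
    std::array<double, 3> a{0.2, -0.7, 1.0};          // example actor output
    auto levels = discretizeAction(a);
    double r = reward(5200.0, 3800.0, 12.5);          // example area / noise numbers
    std::printf("levels: %d %d %d, reward: %.1f\n", levels[0], levels[1], levels[2], r);
    return 0;
}
```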
ABSTRACT I
Chinese Abstract III
Acknowledgements IV
Table of Contents VII
List of Figures X
List of Tables XIX
Chapter 1 Introduction 1
1.1 Preface 1
1.2 Research Motivation 2
Chapter 2 Literature Review 4
2.1 Automated Optical Inspection 4
2.2 Light Source Configuration 11
2.3 Image Processing 16
2.3.1 Image Denoising 18
2.3.2 Image Calibration 22
2.3.3 Image Binarization 26
2.3.4 Edge Detection 28
2.3.5 Image Morphology 30
2.3.6 Connected-Component Labeling 32
2.3.7 Image Noise Metrics 34
2.4 Reinforcement Learning 39
2.4.1 Value Function 41
2.4.2 Policy Gradient 42
2.4.3 Actor-Critic 44
Chapter 3 Research Methods 47
3.1 Defect Sample Fabrication 49
3.1.1 Defect Pattern Design 50
3.1.2 Fabrication Method and Quantification 51
3.2 Lighting Environment Construction 52
3.2.1 System Frame, Fixtures, and Specimen Stage 52
3.2.2 Light Source System 54
3.2.3 Image Acquisition System 57
3.2.4 Control of Light Sources and Camera 59
3.3 Image Processing 61
3.3.1 Image Denoising 62
3.3.2 Image Calibration 63
3.3.3 Defect Extraction 65
3.3.4 Noise Metrics 67
3.4 Reinforcement Learning 68
3.4.1 Deep Deterministic Policy Gradient (DDPG) 72
3.4.2 Discretized Action Output 74
Chapter 4 Results and Discussion 77
4.1 System Construction 77
4.1.1 Inspection Platform 77
4.1.2 Test Specimen Fabrication 78
4.1.3 Lighting and Image-Capture Control Program 80
4.1.4 Image Processing and Learning Program 81
4.1.5 Real-Time Visualization Charts 82
4.2 Image Processing Analysis 83
4.2.1 Discussion of Image Calibration 83
4.2.2 Discussion of Defect Extraction 86
4.2.3 Discussion of Noise Metrics 93
4.3 Reinforcement Learning Analysis 96
4.3.1 Discussion of the Training Process 96
4.3.2 Discussion of Reward Function Design 100
4.3.3 Discussion of State and Action Settings 103
4.3.4 Analysis of Training Results 105
4.4 System Analysis 108
4.4.1 System Error 108
4.4.2 Computational Efficiency 109
Chapter 5 Conclusions 111
5.1 Research Contributions 112
5.2 Future Work 113
References 114
