
Detailed Record

Author (Chinese): 林俊輝
Author (English): Lin, Chun-Hui
Title (Chinese): 卷積神經網絡之自動缺陷識別之製造智慧_TFT-LCD陣列製程之實證研究
Title (English): Manufacturing Intelligence via Convolutional Neural Networks for Automatic Defect Recognition: An Empirical Study in TFT-LCD Array Process
Advisor (Chinese): 簡禎富
Advisor (English): Chien, Chen-Fu
Committee Members: 黎進財, 陳暎仁
Degree: Master's
University: National Tsing Hua University (國立清華大學)
Department: Industrial Engineering and Engineering Management (in-service master's program)
Student ID: 103036523
Publication Year (ROC calendar): 106 (2017)
Graduation Academic Year: 105
Language: English
Pages: 27
Keywords (Chinese): 薄膜電晶體液晶顯示器、自動光學檢測、自動缺陷分類、卷積神經網絡、深度學習
Keywords (English): Automatic Defect Classification (ADC), Automatic Optical Inspection (AOI), Convolutional Neural Networks (CNNs), Deep Learning (DL), TFT-LCD
Abstract (Chinese, translated): In thin-film transistor liquid-crystal display (TFT-LCD) manufacturing, automatic optical inspection (AOI) serves as an in-line inspection system and plays a critical role in quality control and manufacturing efficiency. Although AOI equipment detects defects reliably, its ability to recognize defect types remains limited, so classification depends on human judgment; because individual experience and judgment vary, the results are not fully reliable. To address this problem, several studies have proposed automatic defect classification (ADC) methods, but these require complex image-processing procedures and the corresponding expertise. This study develops a CNN-based ADC framework that recognizes defects without applying complex image-processing procedures. An empirical study was conducted at a leading TFT-LCD manufacturer in Taiwan to validate the feasibility of the proposed framework. The results show that, without complicated image-processing procedures, defect patterns can be classified accurately and quickly by advanced convolutional neural networks. The empirical study thus confirms that the proposed ADC framework can be applied and adapted to TFT-LCD manufacturing environments in which defect patterns change over time.
Keywords (Chinese): 薄膜電晶體液晶顯示器、自動光學檢測、自動缺陷分類、卷積神經網絡、深度學習
Abstract (English): For thin-film transistor liquid-crystal display (TFT-LCD) manufacturing, the in-line inspection performed by automatic optical inspection (AOI) equipment is critical for quality control and manufacturing efficiency. However, defect-pattern classification still relies on human judgment, which is time-consuming, and the results are unreliable due to human limitations. To address this problem, several automatic defect classification (ADC) approaches have been proposed, but they all require complex feature engineering and image-processing expertise. This study aims to develop an ADC framework that applies convolutional neural networks (CNNs) without any handcrafted features. An empirical study was conducted at a leading TFT-LCD manufacturer in Taiwan to validate the viability of the proposed framework. The results show that, without complicated image-processing procedures, defect patterns can be classified precisely and quickly by the proposed CNN, and that the ADC framework is suitable to apply and adapt to TFT-LCD manufacturing even as defect patterns change over time.
Keywords: Automatic Defect Classification (ADC), Automatic Optical Inspection (AOI), Convolutional Neural Networks (CNNs), Deep Learning (DL), TFT-LCD
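The abstract's central claim is that stacked convolution, nonlinearity, and pooling layers can extract defect features directly from AOI images, with no handcrafted feature engineering. A minimal sketch of those three building blocks, using NumPy and a toy image and kernel invented here for illustration (not the thesis's trained network or real AOI data):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    """Elementwise nonlinearity: keep positive responses, zero the rest."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; trims edges that do not divide evenly."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    x = x[:h, :w]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy 8x8 "AOI image" containing a bright vertical line (a line-type defect).
img = np.zeros((8, 8))
img[:, 4] = 1.0

# A hypothetical vertical-edge kernel; in a real CNN these weights are learned.
kernel = np.array([[-1., 2., -1.],
                   [-1., 2., -1.],
                   [-1., 2., -1.]])

feat = max_pool(relu(conv2d(img, kernel)))
print(feat.shape)  # 8x8 input -> 6x6 conv output -> 3x3 pooled feature map
```

A full classifier stacks several such layers with learned kernels and ends with a fully connected softmax over defect classes; the point of the sketch is only that the feature map responds strongly where the defect pattern matches the kernel, without any hand-designed image-processing pipeline.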
Contents
Figures List
Tables List
1. Introduction
1.1. Research Background
1.2. Research Purpose
1.3. Overview of the Paper
2. Related Works
2.1. Automated Optical Inspection (AOI)
2.2. AOI in TFT-LCD Manufacturing
2.3. Convolutional Neural Networks (CNNs)
3. Framework
3.1. Problem Definition
3.2. Data Preparation
3.3. CNNs Construction
3.4. Performance Evaluation
4. Empirical Study
4.1. Problem Structuring
4.2. Data Preparation and Preprocessing
4.3. Model Training and Performance Evaluation
5. Conclusion
References