Detailed Record

Author (Chinese): 呂學平
Author (English): Lu, Hsueh-Ping
Title (Chinese): 結合卷積神經網路、條件生成對抗網路與遷移式學習集成網路於薄膜液晶顯示器Mura瑕疵分類
Title (English): CNN Joint with Conditional GAN for TFT-LCD Mura Defect Classification Using Transfer Learning Ensemble
Advisor (Chinese): 蘇朝墩
Advisor (English): Su, Chao-Ton
Committee Members (Chinese): 駱景堯、田方治、邱銘傳、蕭宇翔
Committee Members (English): Low, Chin-Yao; Tien, Fang-Chih; Chiu, M.C.; Hsiao, Y.H.
Degree: Doctoral
University: National Tsing Hua University
Department: Industrial Engineering and Engineering Management
Student ID: 100034804
Year of Publication (ROC era): 110 (2021)
Graduation Academic Year: 109
Language: English
Number of Pages: 60
Keywords (Chinese): 薄膜液晶顯示器、卷積神經網絡、Mura缺陷、條件生成對抗網絡、莫爾條紋、遷移式學習、集成學習
Keywords (English): thin-film transistor liquid crystal display (TFT-LCD); convolutional neural network; Mura defect; conditional generative adversarial network; moiré pattern; transfer learning; ensemble learning
Abstract:
Display panel defect recognition is a critical concern for thin-film transistor liquid crystal display (TFT-LCD) manufacturers. Mura defects cause uneven screen displays and are the most challenging to detect among all visual defects. In recent years, artificial intelligence techniques have been applied successfully in numerous areas; however, such approaches require large amounts of training image data. At the same time, product differentiation and customization strategies have forced the TFT-LCD industry to shift from mass production to high-mix, low-volume, short-life-cycle production, an environment in which collecting a large amount of training data is difficult. Moreover, Mura defect images captured by the automated optical inspection (AOI) system at the inspection station are often contaminated with moiré patterns. Moiré patterns, which result from interference between the pixel grid of the inspection camera's sensor and that of the panel screen, severely degrade the visual quality of the images and make Mura defects difficult to identify. Removing moiré patterns from defect images without impairing image quality is therefore critical. This study addresses this problem and proposes an approach that eliminates moiré patterns from defect images using a conditional generative adversarial network (CGAN). In addition, a transfer learning ensemble model is developed that aggregates multiple convolutional neural networks (CNNs) on top of the denoising network to classify defects from a limited training data set. An industrial case study applying the proposed approach to classify Mura defects in TFT-LCD panels showed that the method provides improved accuracy for Mura defect classification. The method can therefore serve as a viable alternative to manual classification in the TFT-LCD manufacturing industry.
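The abstract describes a two-stage pipeline: a CGAN first removes moiré patterns from the AOI images, and a transfer learning ensemble of CNNs then classifies the cleaned Mura defect images. The minimal sketch below illustrates only the second stage in a generic form, assuming TensorFlow/Keras with ImageNet-pretrained backbones combined by soft voting; the backbone choices (VGG16, ResNet50, InceptionV3), input size, class count, and classification head are illustrative assumptions and are not details taken from the thesis.

import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16, ResNet50, InceptionV3

NUM_CLASSES = 4              # assumed number of Mura defect categories
INPUT_SHAPE = (224, 224, 3)  # assumed input resolution for all backbones

def build_member(backbone_fn):
    # Wrap an ImageNet-pretrained backbone with a small classification head.
    base = backbone_fn(weights="imagenet", include_top=False,
                       input_shape=INPUT_SHAPE, pooling="avg")
    base.trainable = False                 # freeze pretrained weights first
    inputs = layers.Input(shape=INPUT_SHAPE)
    x = base(inputs, training=False)
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def ensemble_predict(members, images):
    # Soft voting: average the class probabilities of all fine-tuned members.
    probs = np.mean([m.predict(images, verbose=0) for m in members], axis=0)
    return probs.argmax(axis=1)

if __name__ == "__main__":
    members = [build_member(b) for b in (VGG16, ResNet50, InceptionV3)]
    # Each member would be fine-tuned on the demoiréd defect images, e.g.
    # member.fit(train_images, train_labels, validation_split=0.2, epochs=20),
    # before calling ensemble_predict(members, test_images).

Freezing the pretrained weights before fine-tuning and averaging the members' class probabilities is one common way to combine transfer learning with ensemble learning when only a small defect data set is available.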
Table of Contents:
Abstract (Chinese) i
ABSTRACT iii
Acknowledgements v
CONTENTS vi
TABLES viii
FIGURES ix
1. INTRODUCTION 1
1.1 Overview and Motivations 1
1.2 Purposes 2
1.3 Organization 3

2. RELATED WORKS 4
2.1 TFT-LCD Defect and Mura Recognition 4
2.2 Image De-noising 5
2.3 GAN, CGAN, U-Net, Attention Mechanism 6
2.3.1 GAN mechanism 6
2.3.2 CGAN mechanism 7
2.3.3 U-Net 8
2.3.4 Attention mechanism 9
2.4 Transfer Learning and Ensemble Learning 11
2.4.1 Transfer Learning 11
2.4.2 Ensemble Learning 12
2.5 Data Augmentation 13

3. PROPOSED APPROACH 16
3.1 Combination of CNN and CGAN for Eliminating Moiré Patterns from Defect Images 17
3.1.1 Attentive-RNN 17
3.1.2 U-Net Network 21
3.1.3 Discriminator 26
3.1.4 Conditional Generative Adversarial Loss Function 27
3.2 Using a CNN Transfer Learning Ensemble for Mura Defect Classification 28
3.2.1 Transfer Learning 29
3.2.2 Ensemble Learning 31

4. CASE STUDY 33
4.1 The Case Problem 33
4.2 The Experimental Settings 33
4.3 De-noising Network Training 35
4.3.1 Performance Comparison on Demoiréing 37
1) Quantitative evaluation 37
2) Qualitative evaluation 39
4.3.2 De-noising Network Experimental Results 41
4.3.3 De-noising Network Discussion 42
4.4 The Classification Network Training 44
4.4.1 The Classification Network Experimental Results 46
4.4.2 The Classification Network Discussion 48

5. CONCLUSIONS 52
REFERENCES 54