
Detailed Record

Author (Chinese): 林原賢
Author (English): Lin, Yuan-Hsien
Title (Chinese): 應用卷積神經網路遷移學習與深度卷積對抗生成網路於產品瑕疵之分類
Title (English): Product Defect Classification using Convolutional Neural Network with Transfer Learning and Deep Convolutional Generative Adversarial Network
Advisor (Chinese): 蘇朝墩
Advisor (English): Su, Chao-Ton
Committee Members (Chinese): 陳隆昇
蕭宇翔
林家銘
Committee Members (English): Chen, Long-Sheng
Hsiao, Yu-Hsiang
Lin, Gu-Ming
Degree: Master's
Institution: National Tsing Hua University
Department: Department of Industrial Engineering and Engineering Management
Student ID: 108034504
Publication Year (R.O.C.): 110 (2021)
Graduation Academic Year: 109
Language: English
Number of Pages: 56
Keywords (Chinese): 卷積神經網路、遷移學習、深度卷積對抗生成網路、發光二極體板材、瑕疵分類
Keywords (English): CNN; transfer learning; DCGAN; LED lead frame; defect classification
With the global wave of smart manufacturing, artificial intelligence technology is developing rapidly. Factories train machine learning models on large amounts of data, and enabling these models to classify defects correctly is a direction worth pursuing in quality management. In practice, however, large datasets are difficult to obtain, and unequal numbers of samples per defect type create class imbalance, which degrades model performance. To address these problems, this study uses convolutional neural networks (CNNs), transfer learning, and a deep convolutional generative adversarial network (DCGAN) to build eight models that distinguish normal LED lead frames from four types of defects (foreign material, pollution, dents, and glue overflow), and compares the accuracy of each model. The best model first generates images with a DCGAN to augment and balance the dataset, then uses the ResNet50-V2 architecture with pre-trained weights as the base model, adding a custom fully connected output layer for defect classification. This model achieves an accuracy of 99.81%, showing that the proposed approach can effectively classify defects on LED lead frames.
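The transfer-learning classifier described in the abstract, a ResNet50-V2 backbone with pre-trained weights plus a single custom fully connected output layer covering the normal class and the four defect classes, can be sketched in Keras roughly as follows. This is a minimal sketch, not the thesis's exact configuration: the function name, input size, frozen backbone, and optimizer choice are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_defect_classifier(num_classes=5, input_shape=(224, 224, 3),
                            weights="imagenet"):
    """Transfer-learning classifier: ResNet50V2 base + custom output layer."""
    # Pre-trained backbone without its original classification head;
    # global average pooling collapses the feature maps to one vector per image.
    base = tf.keras.applications.ResNet50V2(
        include_top=False, weights=weights,
        input_shape=input_shape, pooling="avg")
    base.trainable = False  # keep the pre-trained weights frozen

    model = models.Sequential([
        base,
        # Custom fully connected output layer: one unit per class
        # (normal lead frame plus four defect types).
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

In the procedure the abstract outlines, DCGAN-generated images would first be added to the minority defect classes so that all five classes are balanced, and only then would this classifier be trained on the augmented dataset.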
1. Introduction
  1.1 Research Background and Motivation
  1.2 Research Purposes
  1.3 Research Architecture
2. Literature Review
  2.1 Convolutional Neural Network
    2.1.1 Concept of Convolutional Neural Network
    2.1.2 Applications of Convolutional Neural Network
  2.2 Transfer Learning
    2.2.1 VGG16
    2.2.2 Inception V3
    2.2.3 ResNet50 V2
  2.3 Generative Adversarial Network
    2.3.1 Concept of Generative Adversarial Network
    2.3.2 Deep Convolutional Generative Adversarial Network
    2.3.3 Applications of Generative Adversarial Network
3. Proposed Procedure
  3.1 Problem Definition
  3.2 Data Preprocessing
  3.3 Model Construction
    3.3.1 Constructing Convolutional Neural Network
    3.3.2 Constructing Transfer Learning Network
    3.3.3 Constructing DCGAN
  3.4 Model Training and Evaluation
4. Case Study
  4.1 Problem Definition
  4.2 Data Preprocessing
  4.3 Model Construction
  4.4 Model Training and Evaluation
  4.5 Experimental Results
  4.6 The Effectiveness of the Proposed Procedure
5. Conclusion
  5.1 Conclusion
  5.2 Future Study
References