
Detailed Record

Author (Chinese): 紀程允
Author (English): Chi, Cheng-Yun
Title (Chinese): 應用卷積神經網路於蘭花品質分級與病害辨識系統
Title (English): Orchid Quality Grading and Disease Recognition System using Convolution Neural Network
Advisor (Chinese): 陳榮順
Advisor (English): Chen, Rong-Shun
Committee Members (Chinese): 陳宗麟、白明憲
Committee Members (English): Chen, Tsung-Lin; Bai, Ming-Sian
Degree: Master's
University: National Tsing Hua University
Department: Department of Power Mechanical Engineering
Student ID: 108033539
Publication Year (ROC): 110 (2021)
Graduation Academic Year: 109
Language: Chinese
Number of Pages: 88
Keywords (Chinese): 蘭花苗株病蟲害、蘭花品質分級、深度學習、影像辨識、圖像分類、物件偵測
Keywords (English): Orchid Seedling Disease, Orchid Quality Grading, Deep Learning, Image Recognition, Image Classification, Object Detection
This study is devoted to developing a disease recognition system for orchid seedlings and a quality grading system for orchids. The research is divided into two stages. The first stage is orchid seedling disease recognition: images of Phalaenopsis seedlings are photographed and collected at the plantation of a cooperating orchid company, then classified and archived according to the type of disease symptoms to build an orchid seedling image database. Two image classification architectures, ResNet-50 and Inception v3, are trained on this database by deep learning with convolutional neural networks, and the trained recognition models are used to identify the disease and pest types of orchid seedlings in batches. In addition, the object detection algorithm YOLOv4 is trained on annotated image data, so that the lesion features in orchid seedling images can be located during inspection. The second stage is orchid quality grading. The approach is similar to the visual recognition of the first stage: deep learning and image recognition are used to identify the flower color, defects, and buds of flowering orchid plants, in accordance with the grading rules defined by the cooperating company, in order to develop an intelligent, automated grading system for flowering orchids. The proposed system not only allows Phalaenopsis growers to screen seedlings for diseases and pests during the breeding stage, but also automates the grading of flowering plants before sale, saving labor and improving recognition capability.
This research aims at the development of seedling disease recognition and quality grading systems for orchids. In the first part, images of orchid seedlings in the plantation area of the cooperating orchid company are collected, and a dataset of orchid seedling images, classified by disease type, is established. Two Convolutional Neural Network (CNN) architectures, ResNet-50 and Inception v3, are trained by deep learning on the collected dataset. As a result, the trained models from both ResNet-50 and Inception v3 can recognize the types of diseases and insect pests of orchid seedlings in batches. In addition, the object detection algorithm YOLOv4 is trained on the labeled image data to locate the pathological characteristics in orchid seedling images during detection. The second part of the study develops an intelligent and automated system for orchid quality grading. Similar to the first part, the methods of artificial intelligence, deep learning, and image recognition are employed to recognize flower color and defects, as well as the numbers of flowers and buds. Therefore, the developed system not only provides recognition of orchid diseases and insect pests, but also quickly identifies flower color and defects by deep learning.
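As a rough illustration of the classification stage described in the abstract (an ImageNet-pretrained ResNet-50 or Inception v3 fine-tuned on a folder-organized orchid seedling image database), the following is a minimal sketch in PyTorch. The thesis does not publish its training code, so the framework choice, the hypothetical directory layout orchid_dataset/train/<symptom_class>/, and all hyperparameters below are illustrative assumptions, not the author's actual implementation.

```python
# Minimal transfer-learning sketch (not the thesis code): fine-tune an
# ImageNet-pretrained ResNet-50 on a folder-per-class orchid image set.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical layout: orchid_dataset/train/<symptom_class>/*.jpg
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("orchid_dataset/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=16, shuffle=True)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.resnet50(pretrained=True)                             # ImageNet backbone
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))   # new classifier head
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)            # arbitrary choice

for epoch in range(10):                                              # epoch count is arbitrary
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```

The same fine-tuning pattern applies to Inception v3 (torchvision's models.inception_v3, which expects 299x299 inputs and uses an auxiliary head during training), while the YOLOv4 detection stage described in the abstract is typically trained separately with the Darknet framework.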
Abstract (Chinese) I
Abstract (English) II
Acknowledgements III
List of Figures VII
List of Tables X
Chapter 1 Introduction 1
1.1 Preface 1
1.2 Research Motivation and Objectives 2
1.3 Literature Review 3
1.3.1 Image-Based Applications in Smart Agriculture 3
1.3.2 Convolutional Neural Networks 13
1.3.3 The YOLO Series 17
1.4 Thesis Organization 21
Chapter 2 Overview of the Orchid Grading and Recognition System 22
2.1 Hardware 22
2.1.1 Image Acquisition Mechanism and Equipment 22
2.1.2 Computing Equipment 24
2.2 Software Packages 25
2.3 Disease Symptoms of Orchid Seedlings 28
2.4 Grading Criteria for Flowering Orchid Plants 31
2.5 Workflow of the Automated Orchid Recognition and Grading System 34
Chapter 3 Implementation of the Orchid Grading and Recognition System 35
3.1 Orchid Seedling Images 35
3.2 Flowering Orchid Plant Images 36
3.3 Object Annotation of Orchid Images 36
3.4 Image Recognition System 38
3.4.1 Image Classification with Convolutional Neural Networks 38
3.4.2 Methods for Optimizing CNN Models 40
3.4.3 Object Detection Algorithm 42
Chapter 4 Experimental Results 43
4.1 Orchid Image Database 43
4.1.1 Orchid Seedling Disease Classification Dataset 43
4.1.2 Orchid Seedling Disease Bounding-Box Dataset 44
4.1.3 Flowering Orchid Flower and Bud Dataset 44
4.1.4 Flowering Orchid Defect Dataset 46
4.2 Performance Metrics for Image Recognition Models 47
4.3 Recognition Results for Orchid Seedling Diseases and Pests 54
4.4 Recognition Results for Flowers and Buds of Flowering Orchids 68
4.5 Recognition Results for Flower Defects of Flowering Orchids 74
4.6 Integration of the Recognition Models and the User Interface 79
Chapter 5 Conclusions and Future Work 82
5.1 Conclusions 82
5.2 Future Work 83
References 85