
Detailed Record

Author (Chinese): 陳文聖
Author (English): Chen, Wen-Sheng
Title (Chinese): 應用深度學習及機器視覺於蘭花表型辨識系統
Title (English): Orchid Phenotype Detecting System Using Deep Learning Algorithm and Machine Vision
Advisor (Chinese): 陳榮順
Advisor (English): Chen, Rong-Shun
Committee members (Chinese): 白明憲, 陳宗麟
Committee members (English): Bai, Ming-Sian; Chen, Tsung-Lin
Degree: Master
Institution: National Tsing Hua University
Department: Department of Power Mechanical Engineering
Student ID: 109033529
Year of publication (ROC calendar): 111 (2022)
Graduation academic year: 110
Language: Chinese
Pages: 90
Keywords (Chinese): 物件辨識, 機器視覺, 時間序列分類, 雙目視覺
Keywords (English): Object detection, Machine vision, Time series forest classifier, Binocular vision
Statistics:
  • Recommendations: 0
  • Views: 86
  • Rating: *****
  • Downloads: 0
  • Bookmarks: 0
Abstract (Chinese): Phalaenopsis orchids are Taiwan's largest flower export, and each plant must be manually graded by its phenotype before shipment. In recent years, however, Taiwan's agricultural workforce has declined markedly and aged, leaving farm labor in short supply. This study therefore applies deep learning and machine vision to the orchid shipping process, aiming to relieve the labor shortage, raise grading accuracy, and strengthen the competitiveness of Taiwan's Phalaenopsis industry. The work has two parts. For flower counting, object detection is run on video of a rotating flowering plant; the stamen count is sampled at every fixed angle θ and the samples are arranged in order to form a time series, which a time series forest classifier then classifies to obtain the number of flowers. For flower-size measurement, image processing first separates a flower from the image; an algorithm locates the pixel coordinates of the feature points and their perpendicular feet, binocular vision converts these into spatial coordinates, and the distances between feature points, and between each feature point and its perpendicular foot, give the flower's size. The results are as follows. In flower counting, the stamen-detection model achieves an F1-score of 98.20%, and the time series forest classifier reaches its highest accuracy, 99.48%, for 3 to 6 flowers at θ = 10°, with a mean prediction error of 0.0052 flowers. In size measurement, 16 real flowers with an average size of 9.47 cm were measured by the two-endpoint, left-endpoint, and right-endpoint methods; the mean absolute errors are −0.59 cm, 0.22 cm, and 0.35 cm, and the mean relative errors are −6.19%, 2.36%, and 3.77%, respectively. The system therefore grades with high accuracy.
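As a rough illustration of the sampling step described in the abstract, the sketch below builds the fixed-angle stamen-count series from per-frame detection counts. The object detector itself (YOLOv4 in this thesis) is not shown; `frame_counts`, the frames-per-revolution figure, and the function name are hypothetical stand-ins for illustration only.

```python
def build_count_series(frame_counts, frames_per_rev, theta_deg=10.0):
    """Sample the per-frame stamen counts once every theta_deg degrees
    of plant rotation, yielding one fixed-length series per revolution
    (36 samples when theta_deg = 10)."""
    n_samples = int(round(360.0 / theta_deg))
    series = []
    for k in range(n_samples):
        angle = k * theta_deg                                  # rotation angle of sample k
        frame_idx = int(round(angle / 360.0 * frames_per_rev)) # nearest captured frame
        series.append(frame_counts[frame_idx % len(frame_counts)])
    return series
```

The resulting fixed-length series is what a time series forest classifier (e.g. sktime's `TimeSeriesForestClassifier`) would take as one training or test instance.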
Abstract (English): In Taiwan, orchids are the largest flower export, and each plant must be manually graded according to its phenotype before sale. In recent years, however, the agricultural labor market has faced a serious shortage owing to Taiwan's declining and aging population. This study aims to relieve that shortage and to improve the quality of orchid grading by using deep learning and machine vision to count the flowers on a plant and to measure the size of individual flowers, thereby upgrading orchid production. To count the flowers, video of a rotating orchid is used for object detection: the number of stamens is recorded at every angle θ and the counts are arranged into a time series, which a Time Series Forest Classifier then classifies to yield the flower count. To measure flower size, image processing first separates the flowers from the image; the pixel coordinates of the feature points and their perpendicular feet are located by an algorithm and converted into spatial coordinates using binocular vision; the distances between feature points, and between each feature point and its perpendicular foot, give the flower's size. In the flower-counting experiments, the stamen-detection model achieves an F1-score of 98.20%; on the resulting time series dataset, the Time Series Forest Classifier predicts the number of flowers with 99.48% accuracy at θ = 10°, with a mean prediction error of 0.0052 flowers. In the size measurements, 16 flowers with an average size of 9.47 cm are measured by three methods; the mean absolute errors are −0.59 cm, 0.22 cm, and 0.35 cm, and the mean relative errors are −6.19%, 2.36%, and 3.77%, respectively.
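The pixel-to-space conversion via binocular vision follows the standard pinhole stereo model; the sketch below is a minimal illustration assuming rectified images with known intrinsics (fx, fy, cx, cy) and baseline. It is not the thesis's actual depth-camera calibration, and the function names are hypothetical.

```python
import math

def pixel_to_point(u, v, disparity, fx, fy, cx, cy, baseline):
    """Back-project a rectified left-image pixel (u, v) with a given
    disparity to 3-D camera coordinates (pinhole stereo model)."""
    z = fx * baseline / disparity   # depth along the optical axis
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

def feature_distance(p, q):
    """Euclidean distance between two 3-D points, e.g. a feature point
    and its perpendicular foot."""
    return math.dist(p, q)
```

For instance, with fx = 600 px and a 5 cm baseline, a disparity of 30 px maps to a depth of 1 m; the flower size is then the distance between the back-projected feature points.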
Abstract (Chinese) i
Abstract (English) ii
Acknowledgments
List of Figures iii
List of Tables vi
Chapter 1 Introduction 1
1.1 Preface 1
1.2 Motivation and Objectives 3
1.3 Literature Review 4
1.4 Thesis Organization 17
Chapter 2 Overview of the Phenotype Detection System 19
2.1 System Workflow 19
2.2 Hardware 21
2.3 Software Packages 24
Chapter 3 Implementation of the Phenotype Detection System 25
3.1 Flower Counting 25
3.1.1 Object Detection 26
3.1.2 Time Series 27
3.1.3 Time Series Forest Classification 31
3.2 Flower Size Measurement 31
3.2.1 Sensor Selection 32
3.2.2 Binocular Vision 33
3.2.3 Feature-Point Extraction and Flower Size Measurement 36
Chapter 4 Experimental Results and Discussion 43
4.1 Flower Counting Results 43
4.1.1 YOLOv4 43
4.1.2 Time Series Forest Classifier 52
4.2 Flower Size Measurement Results 73
4.2.1 Depth-Camera Images 73
4.2.2 Flower Size Measurement Results 73
Chapter 5 Conclusions and Future Work 85
5.1 Conclusions 85
5.1.1 Flower Counting 85
5.1.2 Flower Size Measurement 87
5.2 Future Work 88
References 89