Author (Chinese): 林弘偉
Author (English): Lin, Hong-Wei
Title (Chinese): 應用深度學習及影像拼接於溫室蝴蝶蘭苗株之盤點系統
Title (English): Inventory System for Orchid Seedlings in Greenhouse Using Deep Learning and Image Stitching
Advisor (Chinese): 陳榮順
Advisor (English): Chen, Rong-Shun
Committee Members (Chinese): 白明憲, 陳宗麟
Committee Members (English): Bai, Ming-Sian; Chen, Tsung-Lin
Degree: Master's
University: National Tsing Hua University
Department: Department of Power Mechanical Engineering
Student ID: 110033590
Publication Year (ROC): 112 (2023)
Graduation Academic Year: 111
Language: Chinese
Number of Pages: 81
Keywords (Chinese): 蝴蝶蘭苗株自動盤點、影像拼接、物件辨識、無人機
Keywords (English): Automatic Inventory of Orchid Seedlings, Image Stitching, Object Detection, UAV
Statistics:
  • Recommendations: 0
  • Views: 0
  • Rating: *****
  • Downloads: 7
  • Bookmarks: 0
Abstract: Phalaenopsis orchids are a high-value export crop in Taiwan. To meet management and scheduling requirements, growers must conduct a comprehensive monthly inventory of orchid seedlings in the greenhouse. The seedlings are placed on flat plant beds in the greenhouse, in different sizes and cultivars and with corresponding bed numbers, so counting them demands considerable labor and time; the counts are recorded by hand on paper inventory sheets so that a second count can be taken and reconciled against the first. This research therefore develops an inventory system for orchid seedlings, applied to actual greenhouse plant beds provided by a commercial grower. A drone flies over the plant beds and captures images in sequence, and an optimized image-stitching algorithm merges them into a single panorama of each bed. Seedling information and bed numbers are encoded in custom QR codes placed on the beds, and these codes are used to classify and crop the orchid seedlings in the bed images. With an object detection algorithm and image processing as the core techniques, the system counts the orchid seedlings and finally integrates the counts and the cultivation information into a seedling management system. The proposed method was applied to repeated inventory counts on 10 different seedling beds, and the results were compared with manual counts: the average accuracy for small, medium, and large seedlings was 94.68%, 99.44%, and 97.07%, respectively. In addition, the average occlusion rates were analyzed: 2.38% for small seedlings, 0.47% for medium seedlings, and 2.81% for large seedlings. These results verify the feasibility and effectiveness of the proposed drone-based automatic inventory system for greenhouse orchid seedlings.
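To make the pipeline concrete, the following is a minimal sketch of the ORB-based stitching step (cf. Sections 2.3.1 and 3.2) using OpenCV's Python bindings. It is an illustration under assumptions, not the thesis's actual implementation: the function name stitch_pair, its parameters, and the naive overlay compositing are placeholders for the optimized algorithm described in Chapter 3.

import cv2
import numpy as np

def stitch_pair(img_left, img_right, max_features=2000, lowe_ratio=0.75):
    """Stitch two overlapping plant-bed photos: ORB features, ratio-test
    matching, RANSAC homography, then warp the right image onto the left."""
    gray_l = cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(img_right, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=max_features)
    kp_l, des_l = orb.detectAndCompute(gray_l, None)
    kp_r, des_r = orb.detectAndCompute(gray_r, None)

    # Hamming distance suits ORB's binary descriptors; k=2 enables Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(des_r, des_l, k=2):
        if len(pair) == 2 and pair[0].distance < lowe_ratio * pair[1].distance:
            good.append(pair[0])

    src = np.float32([kp_r[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_l[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects the mismatches that survive the ratio test.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = img_left.shape[:2]
    pano = cv2.warpPerspective(img_right, H, (w + img_right.shape[1], h))
    pano[0:h, 0:w] = img_left  # naive overlay; seam blending is omitted here
    return pano

Sequential photos taken along a bed would be folded pairwise, or incrementally against the growing panorama, into the single bed panorama the abstract describes.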
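The QR-code and counting steps can be sketched in the same spirit. The snippet below assumes a standard QR code readable by cv2.QRCodeDetector (the thesis uses a custom 2D code, which may need its own decoder) and a Darknet-trained YOLOv4 model loaded through OpenCV's DNN module; the file names yolov4-seedling.cfg and yolov4-seedling.weights and the three size classes are assumptions, not artifacts from the thesis.

import cv2
import numpy as np

CLASSES = ["small", "medium", "large"]  # assumed seedling size classes

def read_bed_code(image):
    """Decode the 2D code carrying bed number and seedling info
    (assumes a standard QR code; the thesis's custom code may differ)."""
    data, _, _ = cv2.QRCodeDetector().detectAndDecode(image)
    return data

def count_seedlings(image, cfg="yolov4-seedling.cfg",
                    weights="yolov4-seedling.weights",
                    conf_thres=0.5, nms_thres=0.4):
    """Run YOLOv4 on a cropped bed image and tally detections per class."""
    net = cv2.dnn.readNetFromDarknet(cfg, weights)
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (608, 608),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    h, w = image.shape[:2]
    boxes, scores, class_ids = [], [], []
    for out in outputs:
        for det in out:  # det = [cx, cy, bw, bh, objectness, class scores...]
            cls_scores = det[5:]
            cid = int(np.argmax(cls_scores))
            conf = float(cls_scores[cid])
            if conf > conf_thres:
                cx, cy, bw, bh = det[0:4] * np.array([w, h, w, h])
                boxes.append([int(cx - bw / 2), int(cy - bh / 2),
                              int(bw), int(bh)])
                scores.append(conf)
                class_ids.append(cid)

    # Non-maximum suppression removes duplicate boxes before counting.
    keep = cv2.dnn.NMSBoxes(boxes, scores, conf_thres, nms_thres)
    counts = {c: 0 for c in CLASSES}
    for i in np.array(keep).flatten():
        counts[CLASSES[class_ids[i]]] += 1
    return counts

Per-class counts obtained this way would then be compared bed by bed against manual counts, as in the experiments of Section 4.3.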
Chapter 1 Introduction 1
1.1 Preface 1
1.2 Research Motivation and Objectives 3
1.3 Literature Review 5
1.4 Thesis Organization 18
Chapter 2 Overview of the Seedling Inventory System 19
2.1 Workflow Design of the Seedling Inventory System 19
2.2 Hardware 20
2.3 Software Packages 24
2.3.1 OpenCV 24
2.3.2 Darknet 24
Chapter 3 Implementation of the Seedling Inventory System 25
3.1 Data Collection and Image Preprocessing 25
3.1.1 Automatic Cropping 26
3.2 Image Stitching 27
3.2.1 ORB Feature Algorithm 27
3.2.2 Feature Point Matching 33
3.2.3 Mismatch Removal 34
3.3 Orchid Seedling Cultivation Information System 37
3.3.1 QR Code Application 39
3.3.2 Cropping of Stitched Plant-Bed Images 40
3.4 Object Detection 41
3.4.1 Building an Automatically Labeled Dataset 41
3.4.2 YOLOv4 Object Detection Algorithm 43
Chapter 4 Results and Discussion of the Seedling Inventory System 45
4.1 Image Stitching Performance Evaluation 45
4.1.1 ORB Feature Extraction Results 45
4.1.2 Feature Matching Results 47
4.1.3 Image Stitching Metrics and Experiments 50
4.2 Object Detection Model Performance 55
4.2.1 Evaluation Metrics 55
4.2.2 YOLOv4 Model Performance Analysis 58
4.3 Experimental Evaluation of Seedling Counts 60
4.3.1 Evaluation Metrics 60
4.3.2 Performance Analysis of Seedling Inventory Experiments 62
4.4 Implementation of the Orchid Seedling Cultivation Information System 72
Chapter 5 Conclusions and Future Work 75
5.1 Conclusions 75
5.2 Future Work 76
References 79