[1] Directorate-General of Budget, Accounting and Statistics, Executive Yuan. (2022) Agricultural employment statistics. [Online]. Available: https://statview.coa.gov.tw/aqsys_on/importantArgiGoal_lv3_1_6_2.html, accessed: 2022-10-05.

[2] Council of Agriculture, Executive Yuan. (2022) Agricultural trade data query by product category (COA). [Online]. Available: https://agrstat.coa.gov.tw/sdweb/public/trade/TradeCoa.aspx, accessed: 2022-10-05.

[3] G. Xia, J. Dan, H. Jinyu, H. Jiming, and S. Xiaoyong, "Research on fruit counting of Xanthoceras sorbifolium Bunge based on deep learning," 2022 7th International Conference on Image, Vision and Computing (ICIVC), pp. 790–798, Xi'an, China, Jul. 26–28, 2022.

[4] H. Li, P. Wang, and C. Huang, "Comparison of deep learning methods for detecting and counting sorghum heads in UAV imagery," Remote Sensing, vol. 14, no. 13, p. 3143, 2022.

[5] M. Buzzy, V. Thesma, M. Davoodi, and J. Mohammadpour Velni, "Real-time plant leaf counting using deep object detection networks," Sensors, vol. 20, no. 23, p. 6896, 2020.

[6] R. Heylen, P. Van Mulders, and N. Gallace, "Counting strawberry flowers on drone imagery with a sequential convolutional neural network," 2021 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pp. 4880–4883, Brussels, Belgium, Jul. 12–16, 2021.

[7] 張肇熙, "Deep learning applied to an automated orchid seedling inventory system," Master's thesis, Department of Power Mechanical Engineering, National Tsing Hua University, Jul. 2022.

[8] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.

[9] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Communications of the ACM, vol. 60, no. 6, pp. 84–90, 2017.

[10] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," arXiv preprint arXiv:1311.2524, 2013.

[11] R. Girshick, "Fast R-CNN," arXiv preprint arXiv:1504.08083, 2015.

[12] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," Advances in Neural Information Processing Systems, vol. 28, 2015.

[13] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788, Las Vegas, USA, Jun. 26–Jul. 1, 2016.

[14] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, "SSD: Single shot multibox detector," arXiv preprint arXiv:1512.02325, 2015.

[15] M. Tan, R. Pang, and Q. Le, "EfficientDet: Scalable and efficient object detection," arXiv preprint arXiv:1911.09070, 2019.

[16] J. Redmon and A. Farhadi, "YOLO9000: Better, faster, stronger," arXiv preprint arXiv:1612.08242, 2016.

[17] J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," arXiv preprint arXiv:1804.02767, 2018.

[18] A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, "YOLOv4: Optimal speed and accuracy of object detection," arXiv preprint arXiv:2004.10934, 2020.

[19] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.

[20] D. Zhou and D. Hu, "A robust object tracking algorithm based on SURF," 2013 International Conference on Wireless Communications and Signal Processing, pp. 1–5, Hangzhou, China, Oct. 24–26, 2013.

[21] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: An efficient alternative to SIFT or SURF," 2011 International Conference on Computer Vision, pp. 2564–2571, Barcelona, Spain, Nov. 6–13, 2011.

[22] K. P. Win and Y. Kitjaidure, "Biomedical images stitching using ORB feature based approach," 2018 International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS), vol. 3, pp. 221–225, Bangkok, Thailand, Oct. 21–24, 2018.

[23] S. A. K. Tareen and Z. Saleem, "A comparative analysis of SIFT, SURF, KAZE, AKAZE, ORB, and BRISK," 2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), pp. 1–10, Sukkur, Pakistan, Mar. 3–4, 2018.

[24] R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis. New York: Wiley, 1973.

[25] M. A. Fischler and R. C. Bolles, "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981.

[26] G. Bradski, "The OpenCV library," Dr. Dobb's Journal: Software Tools for the Professional Programmer, vol. 25, no. 11, pp. 120–123, 2000.

[27] J. Redmon. (2013–2016) Darknet: Open source neural networks in C. [Online]. Available: https://pjreddie.com/darknet/, accessed: 2022-10-26.

[28] M. Trajković and M. Hedley, "Fast corner detection," Image and Vision Computing, vol. 16, no. 2, pp. 75–87, 1998.

[29] M. Calonder, V. Lepetit, C. Strecha, and P. Fua, "BRIEF: Binary robust independent elementary features," European Conference on Computer Vision, pp. 778–792, Heraklion, Crete, Greece, Sep. 5–11, 2010.