
Detailed Record

Author (Chinese): 張皓恩
Author (English): Chang, Hao-En
Title (Chinese): 結合物件偵測、取物姿態估測以及夾取力量補償之六軸機械手臂取放系統
Title (English): Integration of Object Detection, Grasp Pose Estimation and Gripping Force Compensation for a Six-DoF Robotic Arm Pick-and-Place System
Advisor (Chinese): 陳榮順
Advisor (English): Chen, Rong-Shun
Committee Members (Chinese): 張禎元、王偉誠
Committee Members (English): Chang, Jen-Yuan; Wang, Wei-Cheng
Degree: Master's
University: National Tsing Hua University (國立清華大學)
Department: Department of Power Mechanical Engineering (動力機械工程學系)
Student ID: 110033627
Year of Publication (ROC era): 112 (2023)
Academic Year of Graduation: 111
Language: Chinese
Number of Pages: 112
Keywords (Chinese): 機械手臂、物件偵測、取物姿態、PointNet++、壓力感測器
Keywords (English): Robotic Arm, Object Detection, Grasp Pose, PointNet++, Force Sensing Resistor
Statistics:
  • Recommendations: 0
  • Views: 124
  • Rating: *****
  • Downloads: 0
  • Bookmarks: 0
Robotic arms have mostly been applied to manufacturing or assembly tasks in industry: driven through space by pre-set parameters, the arm's end-effector then machines, or picks and places, a specific object. In recent years, however, robotic arms have found much broader applications, including integration into daily life. This means that even for a simple pick-and-place task, the arm may need to handle objects of different sizes, shapes, or materials on every cycle. How to recognize and localize objects, estimate a suitable grasp pose, and avoid damaging or dropping the object during grasping is therefore an important research topic.

This study first applies the Mask R-CNN object detection algorithm to recognize and localize the object and obtain its contour, which is combined with the depth camera's depth information and converted into a point cloud. An algorithm then judges, from the geometry of the object's point cloud, whether the object is suitable for suction with the vacuum gripper used in this study. If it is judged unsuitable, the object point cloud is fed into a deep neural network based on PointNet++, trained with an appropriate loss function, to predict the grasp pose end-to-end. In addition, to achieve stable grasping, this study combines eight force sensing resistors into an array; from the contact force feedback, the system determines whether the grasped object is slipping due to insufficient applied force and compensates the gripping force as needed. Pick-and-place experiments were conducted on a six-axis robotic arm, with hand-eye calibration and motion trajectory planning incorporated. Over 100 trials on objects including a wooden ball, an alcohol bottle, a paper cup, a box, and a wooden block, the pick-and-place success rate reached 92%, verifying the practicality of the proposed robotic arm pick-and-place system.
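The abstract's first processing step — back-projecting the detected object mask and the depth image into an object point cloud — can be sketched as follows. This is a minimal NumPy illustration, not the thesis's actual code; the pinhole intrinsics `fx`, `fy`, `cx`, `cy` and the depth scale are assumed placeholders for the depth camera's calibration.

```python
import numpy as np

def mask_to_point_cloud(depth, mask, fx, fy, cx, cy, depth_scale=0.001):
    """Back-project masked depth pixels into 3-D camera coordinates.

    depth : (H, W) uint16 depth image (e.g. millimetres)
    mask  : (H, W) bool instance mask from the object detector
    fx, fy, cx, cy : pinhole camera intrinsics (assumed values)
    """
    v, u = np.nonzero(mask & (depth > 0))    # pixel rows/cols inside the mask
    z = depth[v, u].astype(np.float64) * depth_scale
    x = (u - cx) * z / fx                    # pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)       # (N, 3) point cloud

# Toy example: a flat plane 1 m away, seen through a 2x2 mask.
depth = np.full((4, 4), 1000, dtype=np.uint16)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
pts = mask_to_point_cloud(depth, mask, fx=600.0, fy=600.0, cx=2.0, cy=2.0)
print(pts.shape)  # (4, 3): four masked pixels, all at z = 1.0 m
```

In practice the resulting array can be wrapped in an Open3D `PointCloud` for downstream filtering and for feeding the grasp-pose network.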
The application of robotic arms has traditionally been limited to industrial settings, where they are programmed with predefined parameters for object processing and handling. However, in recent years, there has been a growing trend of integrating robotic arms into daily life scenarios. This trend requires the ability to handle objects with diverse sizes, shapes, and materials. Therefore, it is crucial to address the important research topics of object localization, grasp pose estimation and the secure handling of objects to prevent damage or dropping during the pick-and-place process.

This study first employs the Mask R-CNN object detection algorithm to locate the object and obtain its mask. The mask is then combined with the depth image to generate a point cloud. Subsequently, an algorithm is proposed to determine whether the object is suitable for grasping with a vacuum gripper. For an unsuitable case, the point cloud is fed into a PointNet++ model, which uses geodesic distance as the loss function to predict a grasp pose for the parallel gripper in an end-to-end manner. Additionally, to achieve stable grasping, this study arranges eight FSRs in an array configuration. By analyzing the contact force information, the system detects slippage caused by insufficient applied force and compensates the gripping force accordingly. To enhance the performance of the proposed object pick-and-place system, hand-eye calibration and motion trajectory planning are performed. Finally, the system is tested on a six-DoF robotic arm with objects such as a ball, a bottle, a paper cup, a box, and a wooden block, achieving a success rate of 92% over 100 pick-and-place experiments.
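The geodesic-distance loss mentioned in the abstract measures the rotation angle between a predicted and a ground-truth grasp orientation. A minimal sketch on unit quaternions is shown below — an illustration, not the thesis's implementation; it assumes rotations are stored as unit quaternions in (w, x, y, z) order and handles the quaternion double cover (q and -q encode the same rotation).

```python
import numpy as np

def geodesic_loss(q_pred, q_true):
    """Geodesic angle (radians) between two unit quaternions.

    The absolute value of the dot product accounts for the double
    cover: q and -q represent the same rotation.
    """
    q_pred = q_pred / np.linalg.norm(q_pred)
    q_true = q_true / np.linalg.norm(q_true)
    dot = np.clip(abs(np.dot(q_pred, q_true)), -1.0, 1.0)
    return 2.0 * np.arccos(dot)  # rotation angle between the two poses

identity = np.array([1.0, 0.0, 0.0, 0.0])
quarter = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4), 0.0, 0.0])  # 90 deg about x
print(round(geodesic_loss(identity, quarter), 4))  # 1.5708 (= pi/2)
```

Because this loss is the true angular error between rotations, it avoids the discontinuities that a naive element-wise quaternion difference would introduce during training.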
Abstract (Chinese) i
Abstract (English) ii
Acknowledgements iii
Table of Contents iv
List of Figures vi
List of Tables ix
Chapter 1 Introduction 1
1.1 Preface 1
1.2 Research Motivation and Objectives 2
1.3 Literature Review 4
1.3.1 Object Localization 4
1.3.2 Grasp Pose Estimation 7
1.3.3 Slip Detection 11
1.4 Thesis Organization 13
Chapter 2 Overview of the Robotic Arm Experimental System 15
2.1 Hardware and Components 15
2.2 Software Packages 19
Chapter 3 Object Detection and Grasp Pose System 21
3.1 Object Detection Model 21
3.2 Object Point Cloud Construction 24
3.3 Suction Suitability Algorithm 27
3.4 Grasp Pose Model 31
3.4.1 Quaternions 32
3.4.2 Loss Function Design 35
3.5 Training Dataset Preparation 38
3.5.1 Data Collection Procedure 38
3.5.2 Point Cloud Data Augmentation 41
3.6 Grasp Pose Model Training Results and Discussion 43
3.7 Chapter Summary 48
Chapter 4 Force Compensation System 49
4.1 Force Sensing Resistor Calibration 51
4.2 Slip Detection 58
4.3 Grasp Pull Experiments 60
4.3.1 FSR Array Arrangement 60
4.3.2 Stable Pull Experiment 60
4.3.3 Translational Slip Pull Experiment 62
4.3.4 Rotational Slip Pull Experiment 68
4.4 Force Compensation System Workflow and Experiments 69
4.5 Vacuum Gripper System 77
4.6 Chapter Summary 78
Chapter 5 Grasping System Integration and Experimental Results 79
5.1 Hand-Eye Calibration 79
5.2 Motion Trajectory Planning 85
5.3 Robotic Arm Grasping System Workflow 87
5.4 Grasp Success Rate Experiments 88
5.5 Chapter Summary 96
Chapter 6 Conclusions and Future Work 97
6.1 Conclusions 97
6.2 Future Work 98
References 99
Appendix A FSR Curve Fitting Results 105
[1] Omron Mobile Manipulator. [Online]. Available: https://industrial.omron.eu/en/solutions/product-solutions/omron-mobile-manipulator-solution, accessed: 2022-11-01.
[2] Moxi Mobile Manipulator. [Online]. Available: https://www.diligentrobots.com/moxi, accessed: 2022-11-01.
[3] International Federation of Robotics (IFR). (2022) World Robotics 2022 – Service Robots report, Frankfurt. [Online]. Available: https://ifr.org/ifr-press-releases/, accessed: 2022-11-01.
[4] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” Advances in neural information processing systems, vol. 28, 2015.
[5] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 779–788, Las Vegas, Nevada, June 26 - July 1, 2016.
[6] A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, “YOLOv4: Optimal speed and accuracy of object detection,” arXiv preprint arXiv:2004.10934, 2020.
[7] K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyramid pooling in deep convolutional networks for visual recognition,” IEEE transactions on pattern analysis and machine intelligence, vol. 37, no. 9, pp. 1904–1916, 2015.
[8] S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia, “Path aggregation network for instance segmentation,” Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 8759–8768, Salt Lake City, Utah, June 18 - 22, 2018.
[9] J. Redmon and A. Farhadi, “YOLOv3: An incremental improvement,” arXiv preprint arXiv:1804.02767, 2018.
[10] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” Proceedings of the IEEE international conference on computer vision, pp. 2961–2969, Venice, Italy, October 22 - 29, 2017.
[11] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3431–3440, Boston, USA, June 7 - 15, 2015.
[12] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, “PointNet: Deep learning on point sets for 3D classification and segmentation,” Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 652–660, Honolulu, USA, July 21 - 26, 2017.
[13] C. R. Qi, L. Yi, H. Su, and L. J. Guibas, “PointNet++: Deep hierarchical feature learning on point sets in a metric space,” Advances in neural information processing systems, vol. 30, 2017.
[14] A. Mousavian, C. Eppner, and D. Fox, “6-DOF GraspNet: Variational grasp generation for object manipulation,” Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 2901–2910, Seoul, Korea, October 27 - November 2, 2019.
[15] A. ten Pas, M. Gualtieri, K. Saenko, and R. Platt, “Grasp pose detection in point clouds,” The International Journal of Robotics Research, vol. 36, no. 13-14, pp. 1455–1473, 2017.
[16] J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg, “Dex-Net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics,” arXiv preprint arXiv:1703.09312, 2017.
[17] S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen, “Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection,” The International journal of robotics research, vol. 37, no. 4-5, pp. 421–436, 2018.
[18] P. Schmidt, N. Vahrenkamp, M. Wächter, and T. Asfour, “Grasping of unknown objects using deep convolutional neural networks based on depth images,” 2018 IEEE international conference on robotics and automation (ICRA), pp. 6831–6838, Brisbane, Australia, May 21 - 25, 2018.
[19] A. Kasper, Z. Xue, and R. Dillmann, “The kit object models database: An object model database for object recognition, localization and manipulation in service robotics,” The International Journal of Robotics Research, vol. 31, no. 8, pp. 927–934, 2012.
[20] B. Calli, A. Singh, A. Walsman, S. Srinivasa, P. Abbeel, and A. M. Dollar, “The ycb object and model set: Towards common benchmarks for manipulation research,” 2015 international conference on advanced robotics (ICAR), pp. 510–517, Seattle, USA, May 26 - 30, 2015.
[21] D. Yang, T. Tosun, B. Eisner, V. Isler, and D. Lee, “Robotic grasping through combined image-based grasp proposal and 3D reconstruction,” 2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 6350–6356, Xi'an, China, May 30 - June 5, 2021.
[22] C.-H. Wang and P.-C. Lin, “Q-PointNet: Intelligent stacked-objects grasping using an RGB-D sensor and a dexterous hand,” 2020 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), pp. 601–606, Boston, USA, July 6 - 10, 2020.
[23] Y. Cheng, C. Su, Y. Jia, and N. Xi, “Data correlation approach for slippage detection in robotic manipulations using tactile sensor array,” 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2717–2722, Hamburg, Germany, September 28 - October 2, 2015.
[24] G. Tian, J. Zhou, and B. Gu, “Slipping detection and control in gripping fruits and vegetables for agricultural robot,” International Journal of Agricultural and Biological Engineering, vol. 11, no. 4, pp. 45–51, 2018.
[25] H. Zhou, J. Xiao, H. Kang, X. Wang, W. Au, and C. Chen, “Learning- based slip detection for robotic fruit grasping and manipulation under leaf interference,” Sensors, vol. 22, no. 15, p. 5483, 2022.
[26] L. Roberts, G. Singhal, and R. Kaliki, “Slip detection and grip adjustment using optical tracking in prosthetic hands,” 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 2929–2932, Boston, USA, August 30 - September 3, 2011.
[27] W. Yuan, S. Dong, and E. H. Adelson, “GelSight: High-resolution robot tactile sensors for estimating geometry and force,” Sensors, vol. 17, no. 12, p. 2762, 2017.
[28] J. Li, S. Dong, and E. Adelson, “Slip detection with combined tactile and visual information,” 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 7772–7777, Brisbane, Australia, May 21 - 25, 2018.
[29] Q.-Y. Zhou, J. Park, and V. Koltun, “Open3D: A modern library for 3D data processing,” arXiv:1801.09847, 2018.
[30] A. Dutta and A. Zisserman, “The VIA annotation software for images, audio and video,” Proceedings of the 27th ACM International Conference on Multimedia, ser. MM ’19, Nice, France, October 21 - 25, 2019. [Online]. Available: https://doi.org/10.1145/3343031.3350535
[31] T. Lin, M. Maire, S. J. Belongie, L. D. Bourdev, R. B. Girshick, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft COCO: Common objects in context,” CoRR, vol. abs/1405.0312, 2014.
[32] W. R. Hamilton, “II. On quaternions; or on a new system of imaginaries in algebra,” The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, vol. 25, no. 163, pp. 10–13, 1844.
[33] S. Mahendran, H. Ali, and R. Vidal, “3D pose regression using convolutional neural networks,” Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 2174–2182, Venice, Italy, October 22 - 29, 2017.
[34] S. S. M. Salehi, S. Khan, D. Erdogmus, and A. Gholipour, “Real-time deep pose estimation with geodesic loss for image-to-template rigid registration,” IEEE transactions on medical imaging, vol. 38, no. 2, pp. 470–481, 2018.
[35] papabravo. Rack & Pinion Robotic Gripper Jaw. [Online]. Available: https://www.thingiverse.com/thing:2661755, accessed: 2023-04-05.
[36] Interlink Electronics. FSR 400 Data Sheet. [Online]. Available: https://cdn.sparkfun.com/datasheets/Sensors/ForceFlex/2010-10-26-DataSheet-FSR400-Layout2.pdf, accessed: 2023-05-31.
[37] X. Liu, G. Chai, H. Qu, and N. Lan, “A sensory feedback system for prosthetic hand based on evoked tactile sensation,” 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 2493–2496, Milan, Italy, August 25 - 29, 2015.
[38] C. Gentile, F. Cordella, C. R. Rodrigues, and L. Zollo, “Touch-and-slippage detection algorithm for prosthetic hands,” Mechatronics, vol. 70, p. 102402, 2020.
[39] F. Leone, C. Gentile, A. L. Ciancio, E. Gruppioni, A. Davalli, R. Sacchetti, E. Guglielmelli, and L. Zollo, “Simultaneous sEMG classification of hand/wrist gestures and forces,” Frontiers in neurorobotics, vol. 13, p. 42, 2019.
[40] R. A. Romeo, F. Cordella, L. Zollo, D. Formica, P. Saccomandi, E. Schena, G. Carpino, A. Davalli, R. Sacchetti, and E. Guglielmelli, “Development and preliminary testing of an instrumented object for force analysis during grasping,” 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 6720–6723, Milan, Italy, August 25 - 29, 2015.
[41] J. Flórez and A. Velasquez, “Calibration of force sensing resistors (FSR) for static and dynamic applications,” 2010 IEEE ANDESCON, pp. 1–6, 2010.
[42] E. C. Swanson, E. J. Weathersby, J. C. Cagle, and J. E. Sanders, “Evaluation of force sensing resistors for the measurement of interface pressures in lower limb prosthetics,” Journal of Biomechanical Engineering, vol. 141, no. 10, 2019.
[43] CFSensor. XGZP6847 Pressure Sensor Module. [Online]. Available: https://www.sgbotic.com/products/datasheets/sensors/02976-datasheet.pdf, accessed: 2023-06-16.
[44] IFL-CAMP. easy handeye. [Online]. Available: https://github.com/IFL-CAMP/easy_handeye, accessed: 2023-03-15.
[45] R. Y. Tsai and R. K. Lenz, “A new technique for fully autonomous and efficient 3D robotics hand/eye calibration,” IEEE Transactions on robotics and automation, vol. 5, no. 3, pp. 345–358, 1989.
[46] I. Ali, O. Suominen, A. Gotchev, and E. R. Morales, “Methods for simultaneous robot-world-hand–eye calibration: A comparative study,” Sensors, vol. 19, no. 12, p. 2837, 2019.
[47] 江宗錡, “Research on hand-eye calibration of a six-axis articulated robotic arm,” Master's thesis, Department of Electrical Engineering, National Cheng Kung University, June 2014.
 
 
 
 