
Detailed Record

Author (Chinese): 蘇立珩
Author (English): Su, Li-Heng
Title (Chinese): 基於機器學習與影像之機械手臂抓取
Title (English): Robot Arm Grasping Based on Machine Learning and Images
Advisor (Chinese): 葉廷仁
Advisor (English): Yeh, Ting-Jen
Committee Members (Chinese): 顏炳郎、陳榮順
Committee Members (English): Yen, Ping-Lang; Chen, Rong-Shun
Degree: Master's
Institution: National Tsing Hua University
Department: Department of Power Mechanical Engineering
Student ID: 105033535
Publication Year (ROC): 107 (2018)
Graduation Academic Year: 106
Language: Chinese
Pages: 58
Keywords (Chinese): 機器學習、深度Q網絡、機械手臂、欠致動自適應性夾爪
Keywords (English): machine learning, Deep Q Network, robotic arm, underactuated adaptive gripper
Statistics:
  • Recommendations: 0
  • Views: 278
  • Rating: *****
  • Downloads: 82
  • Bookmarks: 0
This study applies a 7-DOF robotic arm, combined with vision and machine learning, to grasping. Traditionally, multiple sets of equations are needed to describe the dynamics and model of a grasp before suitable gripping points can be found; this study instead uses machine learning to find the best gripping points, eliminating the gap between mathematical simulation and practice. First, machine learning identifies objects and locates the target. Second, the system models the centroid and posture of the target in three-dimensional space. Finally, the target's three-dimensional information is fed to reinforcement learning, which learns the arm's best gripping position and posture. The machine learning in this study comprises three parts: object recognition, centroid and posture estimation, and an optimal grasp-point system. The first two parts use a Convolutional Neural Network as the learning framework, while the optimal grasp-point system uses a Deep Q Network to determine the grasping strategy. For the hardware, the system uses a 6-DOF robotic arm plus a 1-DOF underactuated adaptive gripper, which automatically switches between parallel and angled jaw configurations to grasp objects of various shapes flexibly.
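The grasp-selection idea in the abstract — a Deep Q Network scoring candidate grasps from the target's pose — can be illustrated with a toy sketch. This is not the thesis code: the 3-D pose state, the four candidate grasp actions, the reward, and the linear stand-in for the deep Q-network are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 4   # e.g. four discretized gripper approach poses (assumed)
STATE_DIM = 3   # toy object pose: (x, y, yaw)
GAMMA = 0.9     # discount factor in the Q-learning target
ALPHA = 0.1     # learning rate

# Linear Q-function Q(s, a) = W[a] . s, standing in for the deep network.
W = np.zeros((N_ACTIONS, STATE_DIM))

def q_values(state):
    return W @ state

def epsilon_greedy(state, eps=0.2):
    """Explore with probability eps, otherwise pick the best-scoring grasp."""
    if rng.random() < eps:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(state)))

def dqn_update(state, action, reward, next_state, done):
    """One TD step toward the DQN target r + gamma * max_a' Q(s', a')."""
    target = reward if done else reward + GAMMA * np.max(q_values(next_state))
    td_error = target - q_values(state)[action]
    W[action] += ALPHA * td_error * state  # gradient step for the linear Q

# Toy episodes: pretend action 2 is the only grasp pose that succeeds.
state = np.array([0.2, -0.1, 0.5])
for _ in range(500):
    a = epsilon_greedy(state)
    reward = 1.0 if a == 2 else 0.0
    dqn_update(state, a, reward, state, done=True)

best = int(np.argmax(q_values(state)))  # learned grasp choice for this pose
```

After enough episodes the policy settles on the rewarded grasp pose; the full method additionally uses experience replay and a deep network over image-derived features, which this sketch omits.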
Abstract (Chinese) ---- i
Abstract (English) ---- ii
Acknowledgements ---- iii
Table of Contents ---- iv
List of Figures ---- vii
List of Tables ---- x
Chapter 1 Introduction ---- 1
1.1 Motivation and Objectives ---- 1
1.2 Literature Review ---- 3
1.3 Thesis Overview ---- 6
Chapter 2 Hardware ---- 8
2.1 Robotic Arm ---- 8
2.2 Adaptive Gripper ---- 10
2.2.1 Angled Gripper ---- 11
2.2.2 Parallel Gripper ---- 12
2.2.3 Grasp Control of the Adaptive Gripper ---- 18
Chapter 3 Kinematics ---- 21
3.1 D-H Parameter Method ---- 21
3.2 Forward Kinematics ---- 23
3.3 Inverse Kinematics ---- 24
3.3.1 Inverse Kinematics of Joint 1 ---- 24
3.3.2 Inverse Kinematics of Joint 2 ---- 26
3.3.3 Inverse Kinematics of Joint 3 ---- 27
3.3.4 Inverse Kinematics of Joint 4 ---- 29
3.3.5 Inverse Kinematics of Joint 5 ---- 30
3.3.6 Inverse Kinematics of Joint 6 ---- 31
3.4 Simulation Results ---- 32
3.4.1 Forward Kinematics Simulation Results ---- 32
3.4.2 Inverse Kinematics Simulation Results ---- 34
3.5 Motion Control of the Robotic Arm ---- 35
Chapter 4 Machine Learning ---- 36
4.1 Introduction to Machine Learning ---- 36
4.1.1 Supervised Learning ---- 37
4.1.2 Reinforcement Learning ---- 37
4.1.3 Convolutional Neural Networks ---- 39
4.1.4 Optimizers ---- 41
4.1.5 Batch Normalization ---- 44
4.1.6 Object Recognition and Bounding-Box Selection ---- 46
4.1.7 Centroid and Posture Estimation ---- 48
4.1.8 Optimal Grasp-Point System ---- 51
Chapter 5 Conclusions and Future Work ---- 55
5.1 Conclusions ---- 55
5.2 Future Work ---- 56
References ---- 57
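Chapter 3 of the outline derives the arm's kinematics via the D-H parameter method. A minimal forward-kinematics sketch of that approach, assuming a hypothetical two-link planar D-H table rather than the thesis's actual six-joint parameters:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard D-H link transform: Rot_z(theta) Trans_z(d) Trans_x(a) Rot_x(alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    """Chain the per-joint transforms; returns the end-effector pose T_0^n."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_rows:
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Hypothetical 2-link planar arm, unit link lengths, joints at +90° and -90°.
rows = [(np.pi / 2, 0.0, 1.0, 0.0),
        (-np.pi / 2, 0.0, 1.0, 0.0)]
T = forward_kinematics(rows)
position = T[:3, 3]  # end-effector position (1, 1, 0) for these angles
```

Inverse kinematics, treated joint by joint in sections 3.3.1–3.3.6, solves the reverse problem of recovering the joint angles from a desired pose T.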