Author (Chinese): 陳瑋凡
Author (English): Chen, Wei-Fan
Thesis Title (Chinese): 自適應夾爪設計、控制及整合深度學習之智慧抓取應用
Thesis Title (English): Design and Control of an Adaptive Gripper with Application to Intelligent Grasping Using Deep Learning
Advisor (Chinese): 葉廷仁
Advisor (English): Yeh, Ting-Jen
Committee Members (Chinese): 顏炳郎, 劉承賢
Committee Members (English): Yan, Bing-Lang; Liu, Cheng-Hsien
Degree: Master's
Institution: National Tsing Hua University
Department: Department of Power Mechanical Engineering
Student ID: 107033535
Year of Publication (ROC calendar): 109 (2020)
Graduation Academic Year: 108
Language: Chinese
Number of Pages: 80
Keywords (Chinese): 自適應夾爪、欠制動、肌腱驅動、深度學習、形心估測、姿態估測
Keywords (English): Adaptive gripper, Under-actuated, Tendon-driven, Deep learning, Centroid estimation, Orientation estimation
Abstract (Chinese): This study develops an under-actuated adaptive gripper for performing intelligent grasping tasks. Depending on the geometric relationship between the gripper and the object at the time of grasping, it can perform either a parallel grasp or an enveloping grasp. To maximize the gripper's adaptability, grasping kinematic models are established and optimization is carried out to find the best design parameters. In addition, proximity and force sensors are mounted on the finger surfaces, allowing the gripper to grasp fragile objects quickly and safely. The gripper is mounted on a six-degree-of-freedom robot arm and serves as the end effector for intelligent grasping tasks. Intelligent grasping is based on three deep learning neural networks that take RGBD images as input and are used respectively to detect objects, estimate the object's centroid position, and estimate its orientation. Finally, experiments verify that the gripper and the neural networks designed in this study can successfully grasp various randomly placed objects.
Abstract (English): This thesis develops an under-actuated adaptive gripper for intelligent grasping. The gripper can perform either a parallel or an enveloping grasp depending on the geometry of the object to be grasped. To maximize the adaptability of the gripper, kinematic models for grasping are established and optimizations are performed on these models to find the optimal design parameters. The gripper is equipped with proximity and force sensors, which allow it to grasp delicate objects quickly and safely. The gripper is attached to a 6-DOF robot arm as an end effector to perform intelligent grasping. The intelligent grasping is based on three deep learning networks that take RGBD images as inputs; the three networks are used respectively to detect the object, estimate its centroid location, and determine its orientation. Experiments verify that the developed gripper and neural networks can successfully grasp various randomly placed objects.
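To make the data flow described in the abstract concrete, the following is a minimal Python sketch rather than the author's implementation: the function names, the constant return values, and the dummy RGBD frame are hypothetical stand-ins for the three trained networks, and only the pipeline structure of detection, centroid estimation, orientation estimation, and grasp-target composition is illustrated.

# Minimal sketch of the three-stage RGBD grasping pipeline described in the
# abstract. The three estimator functions are hypothetical placeholders for
# the trained networks; only the data flow is shown.
import numpy as np

def detect_object(rgbd):
    # Placeholder for the object-detection network.
    # Returns a bounding box (x_min, y_min, x_max, y_max) in pixel coordinates.
    return (100, 80, 220, 200)

def estimate_centroid(rgbd, box):
    # Placeholder for the centroid-estimation network.
    # Returns the object centroid (x, y, z) in the camera frame, in meters.
    return np.array([0.05, -0.02, 0.40])

def estimate_orientation(rgbd, box):
    # Placeholder for the orientation-estimation network.
    # Returns a unit quaternion (w, x, y, z) describing the object's orientation.
    q = np.array([0.92, 0.0, 0.0, 0.39])
    return q / np.linalg.norm(q)

def plan_grasp(rgbd):
    # Chain the three networks into a single grasp target for the arm.
    box = detect_object(rgbd)
    position = estimate_centroid(rgbd, box)
    orientation = estimate_orientation(rgbd, box)
    return {"position": position, "orientation": orientation}

if __name__ == "__main__":
    # Dummy RGBD frame: 480x640 pixels with 4 channels (R, G, B, depth).
    frame = np.zeros((480, 640, 4), dtype=np.float32)
    grasp = plan_grasp(frame)
    print("grasp position (m):", grasp["position"])
    print("grasp orientation (w, x, y, z):", grasp["orientation"])

In the actual system each placeholder would be replaced by inference on the corresponding trained network, and the estimated centroid would be transformed from the camera frame into the robot arm's base frame before the 6-DOF arm and gripper execute the grasp.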
Abstract (Chinese) i
Abstract (English) ii
Acknowledgements iii
Table of Contents iv
List of Figures vii
List of Tables xi
Chapter 1 Introduction 1
1.1 Research Motivation and Objectives 1
1.2 Literature Review 4
1.3 Thesis Overview 9
Chapter 2 Hardware Overview 10
2.1 Adaptive Gripper 10
2.2 Robot Arm 11
2.3 RealSense Camera 13
2.4 Camera Actuation Mechanism 14
Chapter 3 Adaptive Gripper Design 15
3.1 Gripper Architecture 15
3.2 Posture Constraints 20
3.3 Parallel Closing Principle 22
3.4 Adaptive Enveloping Principle 24
3.5 Grasp Stability 26
3.6 Waypoint Parameter Design 30
3.6.1 Problem Description 30
3.6.2 Genetic Algorithm 33
3.6.3 Optimization Results 35
3.7 Pulley Parameter Design 36
Chapter 4 Grasp Sensing and Control 41
4.1 Hardware Configuration 41
4.2 Grasp Control 46
Chapter 5 Computer Vision 51
5.1 Experimental Setup 52
5.2 Dataset Construction 54
5.3 Object Detection Model 57
5.4 Depth Estimation Model 60
5.5 Orientation Estimation Model 65
Chapter 6 Conclusions and Future Work 74
6.1 Conclusions 74
6.2 Future Work 76
References 78
