[1] [Online]. Available: https://news.ltn.com.tw/news/life/breakingnews/2413111 [Accessed 30 November 2019]
[2] [Online]. Available: https://www.chanto-air.com [Accessed 30 November 2019]
[3] Mnyusiwalla, H., Vulliez, P., Gazeau, J. P., & Zeghloul, S. (2015). A new dexterous hand based on bio-inspired finger design for inside-hand manipulation. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 46(6), 809-817.
[4] Samadikhoshkho, Z., Zareinia, K., & Janabi-Sharifi, F. (2019, May). A brief review on robotic grippers classifications. In 2019 IEEE Canadian Conference of Electrical and Computer Engineering (CCECE) (pp. 1-4). IEEE.
[5] Ciocarlie, M., Hicks, F. M., Holmberg, R., Hawke, J., Schlicht, M., Gee, J., ... & Bahadur, R. (2014). The Velo gripper: A versatile single-actuator design for enveloping, parallel and fingertip grasps. The International Journal of Robotics Research, 33(5), 753-767.
[6] Dong, H., Asadi, E., Qiu, C., Dai, J., & Chen, I. M. (2018). Geometric design optimization of an under-actuated tendon-driven robotic gripper. Robotics and Computer-Integrated Manufacturing, 50, 80-89.
[7] Gao, B., Yang, S., Jin, H., Hu, Y., Yang, X., & Zhang, J. (2016, December). Design and analysis of underactuated robotic gripper with adaptive fingers for objects grasping tasks. In 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO) (pp. 987-992). IEEE.
[8] Biagiotti, L., Melchiorri, C., & Vassura, G. (2001, June). Control of a three-DOF robotic gripper for space applications. In Symposium on Artificial Intelligence, Robotics and Automation in Space (ISAIRAS '01), Montreal, Canada (pp. 18-22).
[9] Hasegawa, H., Mizoguchi, Y., Tadakuma, K., Ming, A., Ishikawa, M., & Shimojo, M. (2010, May). Development of intelligent robot hand using proximity, contact and slip sensing. In 2010 IEEE International Conference on Robotics and Automation (pp. 777-784). IEEE.
[10] Koyama, K., Murakami, K., Senoo, T., Shimojo, M., & Ishikawa, M. (2019). High-speed, small-deformation catching of soft objects based on active vision and proximity sensing. IEEE Robotics and Automation Letters, 4(2), 578-585.
[11] Koyama, K., Shimojo, M., Senoo, T., & Ishikawa, M. (2018). High-speed high-precision proximity sensor for detection of tilt, distance, and contact. IEEE Robotics and Automation Letters, 3(4), 3224-3231.
[12] Jain, S., & Argall, B. (2016, May). Grasp detection for assistive robotic manipulation. In 2016 IEEE International Conference on Robotics and Automation (ICRA) (pp. 2015-2021). IEEE.
[13] Saxena, A., Driemeyer, J., & Ng, A. Y. (2008). Robotic grasping of novel objects using vision. The International Journal of Robotics Research, 27(2), 157-173.
[14] He, K., Gkioxari, G., Dollár, P., & Girshick, R. (2017). Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (pp. 2961-2969).
[15] Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 779-788).
[16] Yu, J., Weng, K., Liang, G., & Xie, G. (2013, December). A vision-based robotic grasping system using deep learning for 3D object recognition and pose estimation. In 2013 IEEE International Conference on Robotics and Biomimetics (ROBIO) (pp. 1175-1180). IEEE.
[17] Do, T. T., Pham, T., Cai, M., & Reid, I. (2018). Real-time monocular object instance 6D pose estimation. In British Machine Vision Conference (BMVC) (Vol. 1, No. 2, p. 6).
[18] Mahendran, S., Ali, H., & Vidal, R. (2017). 3D pose regression using convolutional neural networks. In Proceedings of the IEEE International Conference on Computer Vision Workshops (pp. 2174-2182).
[19] Cheng, H., & Meng, M. Q. H. (2018, December). A grasp pose detection scheme with an end-to-end CNN regression approach. In 2018 IEEE International Conference on Robotics and Biomimetics (ROBIO) (pp. 544-549). IEEE.
[20] Nandi, G. C., Agarwal, P., Gupta, P., & Singh, A. (2018, June). Deep learning based intelligent robot grasping strategy. In 2018 IEEE 14th International Conference on Control and Automation (ICCA) (pp. 1064-1069). IEEE.
[21] 吳宜儒. (2019). Intelligent robotic arm grasping based on deep learning and stereo vision (in Chinese). Master's thesis, Department of Power Mechanical Engineering, National Tsing Hua University, 1-57.
[22] [Online]. Available: http://www.ni.com/example/12557/en/ [Accessed 15 July 2019]
[23] [Online]. Available: https://www.intel.com.tw/content/www/tw/zh/architecture-and-technology/realsense-overview.html [Accessed 15 July 2019]
[24] [Online]. Available: https://en.wikipedia.org/wiki/Genetic_algorithm [Accessed 23 July 2020]
[25] Russell, S., & Norvig, P. (2002). Artificial intelligence: A modern approach.
[26] [Online]. Available: https://en.wikipedia.org/wiki/Computer_vision [Accessed 8 June 2020]
[27] [Online]. Available: https://github.com/tzutalin/labelImg [Accessed 8 June 2020]
[28] [Online]. Available: https://www.coursera.org/learn/convolutional-neural-networks?specialization=deep-learning [Accessed 8 June 2020]
[29] Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
[30] [Online]. Available: https://en.wikipedia.org/wiki/Quaternion [Accessed 22 July 2020]
[31] [Online]. Available: http://www.ece.northwestern.edu/local-apps/matlabhelp/toolbox/images/corr2.html [Accessed 6 July 2020]