[1] Omron Mobile Manipulator. [Online]. Available: https://industrial.omron.eu/en/solutions/product-solutions/omron-mobile-manipulator-solution, accessed: 2022-11-01.
[2] Moxi Mobile Manipulator. [Online]. Available: https://www.diligentrobots.com/moxi, accessed: 2022-11-01.
[3] International Federation of Robotics, Frankfurt, "World Robotics 2022 – Service Robots report," 2022. [Online]. Available: https://ifr.org/ifr-press-releases/, accessed: 2022-11-01.
[4] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," Advances in Neural Information Processing Systems, vol. 28, 2015.
[5] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788, Las Vegas, Nevada, June 26 – July 1, 2016.
[6] A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, "YOLOv4: Optimal speed and accuracy of object detection," arXiv preprint arXiv:2004.10934, 2020.
[7] K. He, X. Zhang, S. Ren, and J. Sun, "Spatial pyramid pooling in deep convolutional networks for visual recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 9, pp. 1904–1916, 2015.
[8] S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia, "Path aggregation network for instance segmentation," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8759–8768, Salt Lake City, Utah, June 18–22, 2018.
[9] J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," arXiv preprint arXiv:1804.02767, 2018.
[10] K. He, G. Gkioxari, P. Dollár, and R. Girshick, "Mask R-CNN," Proceedings of the IEEE International Conference on Computer Vision, pp. 2961–2969, Venice, Italy, October 22–29, 2017.
[11] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440, Boston, USA, June 7–15, 2015.
[12] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, "PointNet: Deep learning on point sets for 3D classification and segmentation," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 652–660, Honolulu, USA, July 21–26, 2017.
[13] C. R. Qi, L. Yi, H. Su, and L. J. Guibas, "PointNet++: Deep hierarchical feature learning on point sets in a metric space," Advances in Neural Information Processing Systems, vol. 30, 2017.
[14] A. Mousavian, C. Eppner, and D. Fox, "6-DOF GraspNet: Variational grasp generation for object manipulation," Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 2901–2910, Seoul, Korea, October 27 – November 2, 2019.
[15] A. ten Pas, M. Gualtieri, K. Saenko, and R. Platt, "Grasp pose detection in point clouds," The International Journal of Robotics Research, vol. 36, no. 13–14, pp. 1455–1473, 2017.
[16] J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg, "Dex-Net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics," arXiv preprint arXiv:1703.09312, 2017.
[17] S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen, "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection," The International Journal of Robotics Research, vol. 37, no. 4–5, pp. 421–436, 2018.
[18] P. Schmidt, N. Vahrenkamp, M. Wächter, and T. Asfour, "Grasping of unknown objects using deep convolutional neural networks based on depth images," 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 6831–6838, Brisbane, Australia, May 21–25, 2018.
[19] A. Kasper, Z. Xue, and R. Dillmann, "The KIT object models database: An object model database for object recognition, localization and manipulation in service robotics," The International Journal of Robotics Research, vol. 31, no. 8, pp. 927–934, 2012.
[20] B. Calli, A. Singh, A. Walsman, S. Srinivasa, P. Abbeel, and A. M. Dollar, "The YCB object and model set: Towards common benchmarks for manipulation research," 2015 International Conference on Advanced Robotics (ICAR), pp. 510–517, Seattle, USA, May 26–30, 2015.
[21] D. Yang, T. Tosun, B. Eisner, V. Isler, and D. Lee, "Robotic grasping through combined image-based grasp proposal and 3D reconstruction," 2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 6350–6356, Xi'an, China, May 30 – June 5, 2021.
[22] C.-H. Wang and P.-C. Lin, "Q-PointNet: Intelligent stacked-objects grasping using an RGBD sensor and a dexterous hand," 2020 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), pp. 601–606, Boston, USA, July 6–10, 2020.
[23] Y. Cheng, C. Su, Y. Jia, and N. Xi, "Data correlation approach for slippage detection in robotic manipulations using tactile sensor array," 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2717–2722, Hamburg, Germany, September 28 – October 2, 2015.
[24] G. Tian, J. Zhou, and B. Gu, "Slipping detection and control in gripping fruits and vegetables for agricultural robot," International Journal of Agricultural and Biological Engineering, vol. 11, no. 4, pp. 45–51, 2018.
[25] H. Zhou, J. Xiao, H. Kang, X. Wang, W. Au, and C. Chen, "Learning-based slip detection for robotic fruit grasping and manipulation under leaf interference," Sensors, vol. 22, no. 15, p. 5483, 2022.
[26] L. Roberts, G. Singhal, and R. Kaliki, "Slip detection and grip adjustment using optical tracking in prosthetic hands," 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 2929–2932, Boston, USA, August 30 – September 3, 2011.
[27] W. Yuan, S. Dong, and E. H. Adelson, "GelSight: High-resolution robot tactile sensors for estimating geometry and force," Sensors, vol. 17, no. 12, p. 2762, 2017.
[28] J. Li, S. Dong, and E. Adelson, "Slip detection with combined tactile and visual information," 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 7772–7777, Brisbane, Australia, May 21–25, 2018.
[29] Q.-Y. Zhou, J. Park, and V. Koltun, "Open3D: A modern library for 3D data processing," arXiv preprint arXiv:1801.09847, 2018.
[30] A. Dutta and A. Zisserman, "The VIA annotation software for images, audio and video," Proceedings of the 27th ACM International Conference on Multimedia, ser. MM '19, Nice, France, October 21–25, 2019. [Online]. Available: https://doi.org/10.1145/3343031.3350535
[31] T. Lin, M. Maire, S. J. Belongie, L. D. Bourdev, R. B. Girshick, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, "Microsoft COCO: Common objects in context," CoRR, vol. abs/1405.0312, 2014.
[32] W. R. Hamilton, "II. On quaternions; or on a new system of imaginaries in algebra," The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, vol. 25, no. 163, pp. 10–13, 1844.
[33] S. Mahendran, H. Ali, and R. Vidal, "3D pose regression using convolutional neural networks," Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 2174–2182, Venice, Italy, October 22–29, 2017.
[34] S. S. M. Salehi, S. Khan, D. Erdogmus, and A. Gholipour, "Real-time deep pose estimation with geodesic loss for image-to-template rigid registration," IEEE Transactions on Medical Imaging, vol. 38, no. 2, pp. 470–481, 2018.
[35] papabravo. Rack & Pinion Robotic Gripper Jaw. [Online]. Available: https://www.thingiverse.com/thing:2661755, accessed: 2023-04-05.
[36] Interlink Electronics. FSR 400 Data Sheet. [Online]. Available: https://cdn.sparkfun.com/datasheets/Sensors/ForceFlex/2010-10-26-DataSheet-FSR400-Layout2.pdf, accessed: 2023-05-31.
[37] X. Liu, G. Chai, H. Qu, and N. Lan, "A sensory feedback system for prosthetic hand based on evoked tactile sensation," 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 2493–2496, Milan, Italy, August 25–29, 2015.
[38] C. Gentile, F. Cordella, C. R. Rodrigues, and L. Zollo, "Touch-and-slippage detection algorithm for prosthetic hands," Mechatronics, vol. 70, p. 102402, 2020.
[39] F. Leone, C. Gentile, A. L. Ciancio, E. Gruppioni, A. Davalli, R. Sacchetti, E. Guglielmelli, and L. Zollo, "Simultaneous sEMG classification of hand/wrist gestures and forces," Frontiers in Neurorobotics, vol. 13, p. 42, 2019.
[40] R. A. Romeo, F. Cordella, L. Zollo, D. Formica, P. Saccomandi, E. Schena, G. Carpino, A. Davalli, R. Sacchetti, and E. Guglielmelli, "Development and preliminary testing of an instrumented object for force analysis during grasping," 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 6720–6723, Milan, Italy, August 25–29, 2015.
[41] J. Flórez and A. Velasquez, "Calibration of force sensing resistors (FSR) for static and dynamic applications," 2010 IEEE ANDESCON, pp. 1–6, 2010.
[42] E. C. Swanson, E. J. Weathersby, J. C. Cagle, and J. E. Sanders, "Evaluation of force sensing resistors for the measurement of interface pressures in lower limb prosthetics," Journal of Biomechanical Engineering, vol. 141, no. 10, 2019.
[43] CFSensor. XGZP6847 Pressure Sensor Module. [Online]. Available: https://www.sgbotic.com/products/datasheets/sensors/02976-datasheet.pdf, accessed: 2023-06-16.
[44] IFL-CAMP. easy_handeye. [Online]. Available: https://github.com/IFL-CAMP/easy_handeye, accessed: 2023-03-15.
[45] R. Y. Tsai, R. K. Lenz et al., "A new technique for fully autonomous and efficient 3D robotics hand/eye calibration," IEEE Transactions on Robotics and Automation, vol. 5, no. 3, pp. 345–358, 1989.
[46] I. Ali, O. Suominen, A. Gotchev, and E. R. Morales, "Methods for simultaneous robot-world-hand–eye calibration: A comparative study," Sensors, vol. 19, no. 12, p. 2837, 2019.
[47] 江宗錡, "Hand-eye calibration for a six-axis articulated robot manipulator" (in Chinese), Master's thesis, Department of Electrical Engineering, National Cheng Kung University, June 2014.