
Detailed Record

Author (Chinese): 鄭曉鴻
Author (English): Cheng, Hsiao-Hung
Title (Chinese): 基於深度學習方式之三維迭代式最近點定位誤差之估計
Title (English): Localization Errors Estimation of 3D Iterative Closest Point Based on Deep Learning Method
Advisor (Chinese): 王培仁
Advisor (English): Wang, Pei-Jen
Committee Members (Chinese): 張國文、劉晉良、陳鴻文
Committee Members (English): Chang, Kuo-Wen; Liu, Jinn-Liang; Chen, Hung-Wen
Degree: Master's
University: National Tsing Hua University
Department: Department of Power Mechanical Engineering
Student ID: 109033538
Year of Publication (ROC calendar): 111 (2022)
Graduation Academic Year: 111
Language: Chinese
Number of Pages: 66
Keywords (Chinese): 定位技術、光達定位、迭代式最近點、誤差估計
Keywords (English): Localization; Light Detection and Ranging; Iterative Closest Point Algorithm; Error Estimation
Abstract (Chinese, translated):
When building unmanned autonomous navigation for mobile robots or self-driving vehicles, the localization capability of the autonomous controller is the core enabling technology. Practical experience shows that localization accuracy degrades sharply under harsh environmental conditions; for example, GNSS loses its signal inside tunnels, and Lidar can fail amid dense urban traffic. Sensor fusion is therefore required to combine information from multiple sensors and strengthen localization under different environmental conditions. Since Lidar is a well-established basic position sensor, taking Lidar point clouds and Iterative Closest Point (ICP) localization data as the raw inputs, and then estimating the error of the Lidar localization results, can remedy the localization blind spots that occur when Lidar fails.
This thesis adopts the FlowNet3D scene-flow framework to extract features of the scenes in which the localization algorithm operates, and thereby estimate the localization error. Because the error of the initial guess is implicit in the dataset, the prediction model constructed in this thesis is expected to estimate the error contributed by two key factors, the initial guess and the scene features. The estimates are evaluated with the NNE index and compared with results reported in the related literature, confirming that the proposed architecture is indeed superior in some scenes. The whole comparison process can be visualized, presenting predicted error data for a single frame and for every frame, to verify the practical feasibility of the proposed method.
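As a concrete illustration of the pipeline the abstract describes, the sketch below samples an ICP localization error against a known ground-truth pose. It is a minimal sketch assuming Open3D's point-to-point ICP; the file names, noise scale, and initial-guess perturbation are hypothetical stand-ins, not taken from the thesis.

```python
# A minimal sketch (not the thesis code): sample an ICP localization error
# against a known ground-truth pose, using Open3D's point-to-point ICP.
# File names, the noise scale, and the perturbation scheme are illustrative.
import numpy as np
import open3d as o3d

def icp_error_sample(source, target, T_true, T_init, max_dist=1.0):
    """Align source to target with ICP; return the translation part of the
    error transform between the ICP estimate and the ground truth T_true."""
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_dist, T_init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    T_err = np.linalg.inv(T_true) @ result.transformation
    return T_err[:3, 3]

if __name__ == "__main__":
    src = o3d.io.read_point_cloud("scan_k.pcd")    # hypothetical scan pair
    tgt = o3d.io.read_point_cloud("scan_k1.pcd")
    T_true = np.eye(4)                             # ground-truth relative pose
    # Perturb the initial guess to probe its effect on the converged error,
    # mirroring the "initial guess" factor studied in the thesis.
    T_init = np.eye(4)
    T_init[:3, 3] = np.random.normal(scale=0.5, size=3)
    print("translation error [m]:", icp_error_sample(src, tgt, T_true, T_init))
```

Repeating this over many perturbed initial guesses yields the empirical error distribution that a learned predictor, such as the FlowNet3D-based model described above, is trained to estimate.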
Abstract (English):
For mobile robots and autonomous vehicles, the automatic navigation function in the route controller is one of the most important technologies. However, localization accuracy decreases as the driving environment becomes more severe while demanding higher precision. Guided vehicles that lose GPS signals in tunnels, or that are surrounded by other vehicles in urban areas, generally run into localization failures. It is therefore necessary to employ sensor fusion in the localization system. Since Lidar is common in positioning and localization today, estimating the covariance of Lidar localization is the objective of this thesis.
In this thesis, an adaptation of the FlowNet3D model captures scene flow in 3D point clouds; it extracts environment features and learns autonomously under the uncertainty introduced by the initial guess. Predictions of the error vectors of the ICP localization results are then compared with the real cases. Finally, the study employs the NNE index to evaluate the predicted results in various environments, with visualization of the predicted error range for verification of the experimental results.
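On the evaluation side, two of the metrics listed in Chapter 3, relative entropy and Mahalanobis distance, have standard closed forms for Gaussian error models; the sketch below computes both on synthetic placeholder data. The thesis's exact NNE definition is not reproduced here.

```python
# Standard Gaussian forms of two metrics named in Chapter 3: relative
# entropy (KL divergence) and Mahalanobis distance. Synthetic placeholder
# data only; the thesis's exact NNE definition is not reproduced here.
import numpy as np

def kl_gaussian(mu0, cov0, mu1, cov1):
    """Relative entropy D_KL(N(mu0, cov0) || N(mu1, cov1))."""
    k = mu0.shape[0]
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def mahalanobis(x, mu, cov):
    """Mahalanobis distance of an error sample x under N(mu, cov)."""
    d = x - mu
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# Compare an empirical ICP translation-error distribution against a
# model-predicted one (both illustrative).
errors = np.random.normal(scale=0.1, size=(500, 3))
mu_emp, cov_emp = errors.mean(axis=0), np.cov(errors.T)
mu_pred, cov_pred = np.zeros(3), 0.01 * np.eye(3)
print("relative entropy:", kl_gaussian(mu_emp, cov_emp, mu_pred, cov_pred))
print("Mahalanobis (first sample):", mahalanobis(errors[0], mu_pred, cov_pred))
```

Applied per frame, metrics of this kind quantify how well a predicted error distribution matches the empirical ICP errors.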
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Symbols
Chapter 1 Introduction
1-1 Research Background
1-2 Research Motivation and Objectives
1-3 Literature Review
1-3-1 Iterative Closest Point Algorithm
1-3-2 Iterative Closest Point Error Estimation
1-3-3 Scene Flow
1-3-4 Deep Learning on Point Clouds
Chapter 2 Fundamental Theory
2-1 Preface
2-2 Pose and Pose Uncertainty
2-3 Theory of the Iterative Closest Point Method
2-3-1 Computation Procedure
2-3-2 Sources of Error in the Results
2-3-3 Data Initialization
2-3-4 Relative Position Estimation
2-4 Iterative Closest Point Error Estimation
2-4-1 Points-as-Landmarks Method
2-4-2 Black-Box Method
2-4-3 Brute-Force Method
2-4-4 Data-Driven Method
2-5 FlowNet3D Architecture
2-5-1 Hierarchical Point Cloud Feature Learning
2-5-2 Point Mixture Module with the Scene Embedding Layer
2-5-3 Scene Flow Extraction
2-5-4 Neural Network Architecture
Chapter 3 Experimental Methods and Metrics
3-1 Preface
3-2 Point Cloud Datasets
3-3 Iterative Closest Point Configuration and Testing
3-4 FlowNet3D Configuration
3-4-1 Architecture and Adjustments
3-4-2 Training and Test Sets
3-5 Evaluation Metrics
3-5-1 Normalized Positive Error (NNE)
3-5-2 Relative Entropy
3-5-3 Mahalanobis Distance
Chapter 4 Experimental Results and Analysis
4-1 Preface
4-2 Iterative Closest Point Error Distribution
4-2-1 Effect of the Initial Guess
4-2-2 Effect of the Degree of Environmental Structure
4-3 Iterative Closest Point Error Prediction Results and Analysis
4-3-1 Effect of the Initial-Guess Factor
4-3-2 Overall Environment Prediction Results
4-3-3 Environmental Factors
4-4 Discussion of Experimental Results
4-4-1 Initial-Guess Experiments
4-4-2 Experiments in Various Environments
4-4-3 Experimental Conclusions
4-5 Visualization-Based Verification
4-5-1 Prediction Results for a Single Frame
4-5-2 Prediction Results for Every Frame
Chapter 5 Conclusions and Future Work
5-1 Conclusions
5-2 Future Work
References
[1] X. Liu, C. R. Qi, and L. J. Guibas (2019), “FlowNet3D: Learning Scene Flow in 3D Point Clouds,” Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR), pp. 529–537.
[2] F. Pomerleau, M. Liu, F. Colas and R. Siegwart (2012), “Challenging Data Sets for Point Cloud Registration Algorithms,” Int. J. Robotics Research, Vol. 31, No. 14, pp. 1705–1711.
[3] O. Bengtsson and A. J. Baerveldt (2003), “Robot Localization Based on Scan-matching — Estimating the Covariance Matrix for the IDC Algorithm,” Robotics and Autonomous Systems, Vol. 44, pp. 29–40.
[4] O. Bengtsson (2006), “Robust Self-Localization of Mobile Robots in Dynamic Environments Using Scan Matching Algorithms,” Ph.D. thesis, Dept. of Computer Science and Engineering, Chalmers Univ. of Tech., Göteborg, Sweden, ISBN 91-7291-744-X.
[5] T. M. Iversen, A. G. Buch, and D. Kraft (2017), “Prediction of ICP Pose Uncertainties Using Monte Carlo Simulation with Synthetic Depth Images,” In IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), pp. 4640–4647, IEEE.
[6] A. Censi (2007), “An Accurate Closed-form Estimate of ICP’s Covariance,” In Proc. of IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 3167–3172, IEEE.
[7] M. Brossard, S. Bonnabel, and A. Barrau (2020), “A New Approach to 3D ICP Covariance Estimation,” IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 744–751.
[8] S. M. Prakhya, L. Bingbing, Y. Rui, and W. Lin (2015), “A Closed-form Estimate of 3D ICP Covariance,” In 14th IAPR Int. Conf. on Machine Vision Applications (MVA), pp. 526–529, IEEE.
[9] N. Gelfand, L. Ikemoto, S. Rusinkiewicz, and M. Levoy (2003), “Geometrically Stable Sampling for the ICP Algorithm,” In 4th Int. Conf. on 3-D Digital Imaging and Modeling. Proc., pp. 260–267, IEEE.
[10] S. Bonnabel, M. Barczyk, and F. Goulette (2016), “On the Covariance of ICP-based Scan-matching Techniques,” American Control Conference (ACC), pp. 5498–5503, IEEE.
[11] D. Landry, F. Pomerleau, and P. Giguere (2019), “CELLO-3D: Estimating the Covariance of ICP in the Real World,” Int. Conf. on Robotics and Automation (ICRA), pp. 8190–8196, IEEE.
[12] A. D. Maio and S. Lacroix (2022), “Deep Bayesian ICP Covariance Estimation,” arXiv preprint arXiv:2202.11607.
[13] Ł. Marchel, C. Specht, and M. Specht (2020), “Testing the Accuracy of the Modified ICP Algorithm with Multimodal Weighting Factors,” Energies, 13(22), 5939.
[14] J. Nieto, T. Bailey, and E. Nebot (2006), “Scan-SLAM: Combining EKF-SLAM and Scan Correlation,” In Springer Tracts in Advanced Robotics; Springer: Berlin/Heidelberg, Germany; pp. 167–178.
[15] J. Zhang and S. Singh (2014), “LOAM: Lidar Odometry and Mapping in Real-time,” In Proc. of the Robotics: Science and Systems X Conf.
[16] P. Besl and N. D. McKay (1992), “A Method for Registration of 3-D Shapes,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 14, pp. 239–256.
[17] E. Ezra, M. Sharir, and A. Efrat (2006), “On the ICP Algorithm,” In Proc. of the Twenty-Second Annual Symposium on Computational Geometry (SCG '06), ACM Press: New York, NY, USA.
[18] S. Baek and Y. Gil (2019), “Human Pose Estimation Using Articulated ICP,” In Proc. of the 2nd Int. Conf. on Control and Robot Technology.
[19] J. Yang, H. Li, D. Campbell, and Y. Jia (2016), “Go-ICP: A Globally Optimal Solution to 3D ICP Point-set Registration,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 38, pp. 2241–2254; arXiv:1605.03344.
[20] L. Payá, O. R. García, and H. J. Araújo (2022), “Real-Time Lidar Odometry and Mapping with Loop Closure,” Sensors, 22, 4373.
[21] Z. Dong, F. Liang, B. Yang, Y. Xu, Y. Zang, J. Li, Y. Wang, W. Dai, H. Fan, J. Hyyppä, and U. Stilla (2020), “Registration of Large-scale Terrestrial Laser Scanner Point Clouds: A Review and Benchmark,” ISPRS J. Photogramm. Remote Sens. 163, 327–342.
[22] A. Censi (2008), “An ICP Variant Using a Point-to-Line Metric,” Proc. IEEE International Conference on Robotics and Automation, pp. 19–25.
[23] J. Serafin and G. Grisetti (2015), “NICP: Dense Normal Based Point Cloud Registration,” In Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), pp. 742–749.
[24] J.E. Deschaud (2018), “IMLS-SLAM: Scan-to-model Matching Based on 3D Data,” In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 2480–2485.
[25] S. Vedula, S. Baker, P. Rander, R. Collins, and T. Kanade (1999), “Three-dimensional Scene Flow,” In Proc. of the Int. Conf. on Computer Vision, pages 722–729.
[26] A. Wedel, T. Brox, T. Vaudrey, C. Rabe, U. Franke, and D. Cremers (2010), “Stereoscopic Scene Flow Computation for 3D Motion Understanding,” Int. Journal of Computer Vision, 95(1):29–51.
[27] C. Vogel, K. Schindler, and S. Roth (2011), “3D Scene Flow Estimation with a Rigid Motion Prior,” In Proc. of the Int. Conf. on Computer Vision, pp. 1291–1298.
[28] M. Menze and A. Geiger (2015), “Object Scene Flow for Autonomous Vehicles,” In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 3061–3070.
[29] C. R. Qi, H. Su, K. Mo, and L. J. Guibas (2017), “PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation,” In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 652–660.
[30] C. R. Qi, L. Yi, H. Su, and L. J. Guibas (2017), “PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space,” Advances in Neural Information Processing Systems (NeurIPS), vol. 30.
[31] H. Thomas, C. R. Qi, J.-E. Deschaud, B. Marcotegui, F. Goulette, and L. J. Guibas (2019), “KPConv: Flexible and Deformable Convolution for Point Clouds,” In Proc. of the IEEE/CVF Int. Conf. on Computer Vision (ICCV), pp. 6411–6420.
[32] C. R. Qi, O. Litany, K. He, and L. J. Guibas (2019), “Deep Hough Voting for 3D Object Detection in Point Clouds,” In Proc. of the IEEE/CVF Int. Conf. on Computer Vision, pp. 9277–9286.
[33] Y. Cho, G. Kim, and A. Kim (2019), “DeepLO: Geometry-Aware Deep Lidar Odometry,” arXiv preprint arXiv:1902.10562.
[34] Z. Li and N. Wang (2020), “DMLO: Deep Matching Lidar Odometry,” In IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), pp. 6010–6017, IEEE.
[35] Q. Li, S. Chen, C. Wang, X. Li, C. Wen, M. Cheng, and J. Li (2019), “LO-Net: Deep Real-Time Lidar Odometry,” In Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, pp. 8473–8482.
[36] F. Lu and E. Milios (1997), “Robot Pose Estimation in Unknown Environments by Matching 2D Range Scans,” Journal of Intelligent Robotics Systems, vol. 18, no. 3, pp. 249–275.
[37] S. Pfister, K. Kriechbaum, S. Roumeliotis, and J. Burdick (2002), “Weighted Range Sensor Matching Algorithms for Mobile Robot Displacement Estimation,” In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA).
[38] J. Minguez, F. Lamiraux, and L. Montesano (2006), “Metric-based Scan Matching Algorithms for Mobile Robot Displacement Estimation,” IEEE Transactions on Robotics.
[39] F. Pomerleau, F. Colas, R. Siegwart, and S. Magnenat (2013), “Comparing ICP Variants on Real-World Data Sets,” Autonomous Robots, vol. 34, no. 3, pp. 133–148.
[40] F. Pomerleau, A. Breitenmoser, M. Liu, F. Colas, and R. Siegwart (2012), “Noise Characterization of Depth Sensors for Surface Inspections,” In 2nd Int. Conf. on Applied Robotics for the Power Industry (CARPI), pp. 16–21, IEEE.
[41] R. Dube, A. Gawel, H. Sommer, J. Nieto, R. Siegwart, and C. Cadena (2017), “An Online Multi-robot SLAM System for 3D LiDARs,” In IROS, pp. 1004–1011.
[42] P. Geneva, K. Eckenhoff, Y. Yang, and G. Huang (2018), “LIPS: LiDAR-Inertial 3D Plane SLAM,” In IROS, pp. 123–130.
[43] T. Barfoot and P. Furgale (2014), “Associating Uncertainty With Three-Dimensional Poses for Use in Estimation Problems,” IEEE T-RO, vol. 30, no. 3, pp. 679–693.
[44] A. W. Long, K. C. Wolfe, M. J. Mashner, and G. S. Chirikjian (2013), “The Banana Distribution Is Gaussian: A Localization Study with Exponential Coordinates,” Robotics: Science and Systems VIII, vol. 265.
[45] F. Lu (1995), “Shape Registration Using Optimization for Mobile Robot Navigation,” Ph.D. thesis, Dept. of Computer Science, University of Toronto.
[46] P. Biber and W. Strasser (2003), “The Normal Distributions Transform: A New Approach to Laser Scan Matching,” In Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS).
[47] W. Vega-Brown, A. Bachrach, A. Bry, J. Kelly, and N. Roy (2013), “CELLO: A Fast Algorithm for Covariance Estimation,” In IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 3160–3167.
[48] S. Rusinkiewicz, B. Brown, and M. Kazhdan (2005), “3D Scan Matching and Registration”, ICCV Short Course.
[49] W. Qingshan and Z. Jun (2019), “Point Cloud Registration Algorithm Based on Combination of NDT and PLICP,” In Proc. of the 15th Int. Conf. on Computational Intelligence and Security (CIS), pp. 132–136.
[50] ASL Datasets: Challenging Data Sets for Point Cloud Registration Algorithms, https://projects.asl.ethz.ch/datasets/doku.php?id=laserregistration:laserregistration, accessed 2022.
[51] SE(3) and se(3), https://zhuanlan.zhihu.com/p/88771394, accessed 2022.
[52] nuScenes Tracking Task, https://www.nuscenes.org/tracking?externalData=all&mapData=all&modalities=Any, accessed 2022.