
Detailed Record

Author (Chinese): 陳冠霖
Author (English): Chen, Kuan-Lin
Thesis Title (Chinese): 自駕車之直觀感知演算法
Thesis Title (English): Direct Perception Algorithms for Autonomous Driving
Advisor (Chinese): 劉晉良
Advisor (English): Liu, Jinn-Liang
Committee Members (Chinese): 李金龍; 陳仁純
Committee Members (English): Li, Chin-Lung; Chen, Ren-Chuen
Degree: Master
University: National Tsing Hua University (國立清華大學)
Department: Institute of Computational and Modeling Science
Student ID: 106026504
Year of Publication (ROC calendar): 108 (2019)
Graduation Academic Year: 107
Language: English
Number of Pages: 30
Keywords (Chinese): 自駕車; 直觀感知; 深度學習
Keywords (English): Autonomous Driving; Direct Perception; Deep Learning
Abstract: Based on the direct perception approach to autonomous driving proposed by Chen et al. in Proc. IEEE Int. Conf. Comput. Vis., 2722-2730, 2015 [1], this thesis proposes a more general direct perception framework and vehicle control algorithm. We design a new controller in the TORCS simulator and use it to collect a dataset of images paired with 5 affordance indicators, reduced from the 13 indicators used in [1]. We then use this dataset to train an AlexNet convolutional neural network (CNN) for regression and run self-driving tests in simulation. We also train a GoogLeNet model (a different CNN architecture) and compare it with AlexNet. The training loss of both models converges satisfactorily, and testing shows that the CNN-driven car can successfully drive on tracks unseen by the pre-trained models.
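To make the direct perception idea concrete: a CNN regresses a small set of affordance indicators from the camera image, and a hand-designed controller maps those indicators to driving commands. The record does not list which 5 indicators this thesis uses, so the sketch below assumes two of them are the heading-angle error and the lateral offset from the lane center, following the steering rule of the DeepDriving controller in [1]; the function name, gain, and road width are illustrative only.

```python
def steering_command(angle, to_middle, road_width=8.0, gain=0.5):
    """Map two assumed affordance indicators to a steering command in [-1, 1].

    angle     : heading error between the car and the road tangent (radians)
    to_middle : signed lateral distance from the lane center (meters)
    """
    # Steer to cancel both the heading error and the normalized lateral
    # offset, as in the DeepDriving-style controller of Chen et al. [1].
    raw = gain * (angle - to_middle / road_width)
    # Clamp to the simulator's steering range.
    return max(-1.0, min(1.0, raw))
```

In a closed loop, the CNN's predicted affordances would be fed into such a rule at every frame; the thesis's actual controller (Appendix: Controller Algorithms) extends this to speed control as well.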
Contents
Abstract i
Acknowledgment iii
1 Introduction 1
1.1 Three General Approaches for Autonomous Driving 1
1.2 Direct Perception 2
1.3 DeepDriving Platform 3
2 System Architecture 5
2.1 TORCS 5
2.2 CAFFE 6
2.3 Autonomous Driving 6
3 Controllers 8
3.1 Bernhard's Controller 8
3.2 DeepDriving Controller (13 Indicators) 9
3.3 Our Controller (5 Indicators) 9
4 Parameter Definitions 11
5 Data Generation 14
5.1 Data Collecting Procedure 14
5.2 Key Elements in Data Generation 15
6 CNN Models 17
6.1 AlexNet 17
6.2 GoogLeNet 18
7 Results 20
7.1 Training Results 20
7.2 Autonomous Driving Performance Evaluation 22
7.3 Comparison of MAE between AlexNet and GoogLeNet 24
8 Conclusions 26
Appendix: Controller Algorithms 27
References 30
References

[1] C. Chen, A. Seff, A. Kornhauser, and J. Xiao, "DeepDriving: Learning affordance for direct perception in autonomous driving," Proc. IEEE Int. Conf. Comput. Vis., 2722-2730, 2015.

[2] C. Chen, "Extracting Cognition out of Images for the Purpose of Autonomous Driving," Ph.D. Thesis, Princeton University, USA, 2016.

[3] M. Al-Qizwini et al., "Deep learning algorithm for autonomous driving using GoogLeNet," IEEE Intelligent Vehicles Symposium (IV), 89-96, 2017.

[4] B. Wymann et al., "TORCS: The open racing car simulator," software available at http://torcs.sourceforge.net, 2000.

[5] Wikipedia, "Caffe (software)," available at https://en.wikipedia.org/wiki/Caffe_(software)

[6] Wikipedia, "AlexNet," available at https://en.wikipedia.org/wiki/AlexNet

[7] Wikipedia, "Convolutional neural network," available at https://en.wikipedia.org/wiki/Convolutional_neural_network