
Detailed Record

Author (Chinese): 張屹良
Author (English): Chang, Yi-Liang
Title (Chinese): 針對行車紀錄器影片的半自動標註工具
Title (English): A semi-automatic annotation tool for dashboard camera video labeling base on multi object tracker
Advisor (Chinese): 金仲達
Advisor (English): King, Chung-Ta
Committee members (Chinese): 林嘉文, 朱宏國
Committee members (English): Lin, Chia-Wen; Chu, Hung-Kuo
Degree: Master's
University: National Tsing Hua University (國立清華大學)
Department: Department of Computer Science
Student ID: 104062624
Year of publication (ROC 107): 2018
Academic year of graduation: 106 (2017–2018)
Language: English
Number of pages: 22
Keywords (Chinese): 半自動標註、行車紀錄器影片
Keywords (English): semi-automatic annotation; dashboard camera video
The recent surge in the development of self-driving cars stems mainly from the breakthrough in artificial neural networks, a.k.a. the deep learning technique, for recognizing and tracking objects on the road. To apply deep learning, a large set of properly annotated videos, which serves as the ground truth, is needed to train the neural network. Although it is not difficult to collect a large amount of road video from the dashboard cameras of cars, annotating the videos is very time-consuming and tedious. So far, this task has relied mainly on human annotators.
In this thesis, we study a semi-automatic approach to video annotation, which annotates objects in the video using existing automatic object trackers and then has humans correct any errors. Note that the existing object trackers were themselves likely built with deep learning from some earlier set of annotated videos. Unfortunately, no object tracker can guarantee recognition and tracking as good as a human's; they all make mistakes. In our semi-automatic approach, these errors are corrected by human annotators. The key issue is therefore how to detect the errors made by the trackers and guide the human annotators through correcting them.
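The annotate-then-review loop described above can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the `TrackedBox` record and the `(frame, obj_id)`-keyed correction map are assumed data shapes chosen for clarity.

```python
# Sketch of the semi-automatic loop: an automatic tracker proposes per-frame
# bounding boxes, and a human reviewer overrides the ones flagged as wrong.
# TrackedBox and the correction-map layout are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class TrackedBox:
    frame: int    # frame index in the video
    obj_id: int   # identity assigned by the tracker
    x: float      # top-left corner and box size, in pixels
    y: float
    w: float
    h: float


def apply_corrections(tracks, corrections):
    """Replace tracker boxes with human-corrected ones, keyed by (frame, obj_id)."""
    fixed = []
    for box in tracks:
        key = (box.frame, box.obj_id)
        fixed.append(corrections.get(key, box))  # keep tracker box if uncorrected
    return fixed


# Tracker output: the object apparently teleports in frame 1.
tracks = [TrackedBox(0, 1, 0.0, 0.0, 10.0, 10.0),
          TrackedBox(1, 1, 100.0, 0.0, 10.0, 10.0)]
# The human reviewer supplies a corrected box for that frame only.
corrections = {(1, 1): TrackedBox(1, 1, 12.0, 0.0, 10.0, 10.0)}
fixed = apply_corrections(tracks, corrections)
```

Only the boxes a reviewer actually touches are replaced; everything the tracker got right passes through untouched, which is where the labor saving comes from.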
Specifically, we propose to examine motion features in the video that has been annotated by the object trackers and to check for conditions that may lead to erroneous annotations. The problematic frames are identified and presented to the human annotators to check and correct. We also verify the accuracy of the tool's output based on object location and object lifetime. Overall, the tool saves about 60% of the annotation work while ensuring data accuracy for the recognized objects.
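One plausible motion-feature check of the kind described above (an assumed example, not the thesis's exact rule) is to flag any frame in which an object's bounding-box center jumps farther than a per-frame threshold, since a sudden jump often signals an identity switch or a lost track:

```python
# Flag frames where an object's center moves implausibly far between
# consecutive annotated frames. The threshold max_step (pixels per frame)
# is an assumed tuning parameter, not a value from the thesis.
def flag_jumps(track, max_step=50.0):
    """track: list of (frame, cx, cy) tuples sorted by frame.
    Returns the frame ids whose motion exceeds the threshold."""
    flagged = []
    for (f0, x0, y0), (f1, x1, y1) in zip(track, track[1:]):
        step = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        if step > max_step * (f1 - f0):  # scale threshold by the frame gap
            flagged.append(f1)
    return flagged


# The center jumps 190 px between frames 1 and 2, so frame 2 is flagged.
track = [(0, 0.0, 0.0), (1, 10.0, 0.0), (2, 200.0, 0.0)]
suspect = flag_jumps(track)
```

Frames returned by such a check would be queued for the human annotator, so review effort concentrates on the small fraction of frames the tracker likely got wrong.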
1 Introduction
2 Related Work
2.1 Semi-automatic Annotation Tools
2.2 Multiple Object Tracker
3 Methodology
4 Evaluation
4.1 Accuracy
4.2 Performance
5 Conclusion