
Detailed Record

Author (Chinese): 楊承翰
Author (English): Yang, Cheng-Han
Title (Chinese): 以靜態影像及動態影片為來源之浮現錯覺合成
Title (English): Emergence Illusion Synthesis using Still Images and Dynamic Videos
Advisor (Chinese): 朱宏國
Advisor (English): Chu, Hung-Kuo
Committee (Chinese): 李潤容, 姚智原, 莊永裕
Committee (English): Lee, Ruen-Rone; Yao, Chih-Yuan; Chuang, Yung-Yu
Degree: Master
University: National Tsing Hua University (國立清華大學)
Department: Computer Science (資訊工程學系)
Student ID: 102062702
Publication Year: 2016 (ROC 105)
Graduation Academic Year: 104
Language: Chinese
Pages: 60
Keywords (Chinese): 浮現錯覺影像; 非相片質感渲染; 生物運動; 完形心理學; 電腦識別; 影片追蹤; 驗證碼
Keywords (English): Emergence; Illustration Art; Non-Photorealistic Rendering; Biological Motion; Pattern Recognition; Gestalttheorie; Video Tracking; CAPTCHA
An emerging image is an illusion picture in which, out of a field of visual noise, humans can quickly and automatically perceive and integrate the information of a hidden subject. An emerging video is the dynamic extension of the emerging image: during playback, viewers can recognize the moving object, yet any still frame of that object is nothing but disordered clutter. At the same time, emerging images and videos interfere with computer image recognition and video tracking, so that machines fail to identify them; both phenomena can be analyzed and explained through principles of visual psychology.
In recent years, computer graphics has seen a surge of automated algorithms designed to reproduce illusion art at scale, applying particular illusions to photographs, videos, and animation. This thesis proposes a system that, given user input or images collected by a web crawler, automatically synthesizes emerging images and emerging videos in large quantities. Our experiments show that emerging images have the potential to serve as a new generation of CAPTCHA, and that they can serve as important material for studying Biological Motion, the visual-psychology theory of how observers recognize living movement; such images and videos can also be treated as an illusion style within non-photorealistic rendering.
An emerging image is a visual phenomenon that allows spectators to recognize objects in a seemingly meaningless image by aggregating local cues and perceiving information that is meaningful only as a whole. An emerging video extends emerging images into the temporal domain: while the clip is playing, spectators can distinguish the meaningful object, but in any still frame they see nothing more than disorganized information. Because humans can perceive the comprehensive information in an emerging image or video while machines cannot, emergence is an effective scheme for telling humans apart from machines, and one that can be studied and analyzed through visual psychology.
Recently, in the field of computer graphics, designing automated algorithms that reproduce illusion art and applying them to photographs, videos, and animation has become prevalent. This thesis provides a systematic, automatic way to generate and synthesize emerging images and videos from user input or a web crawler. Moreover, the experiments in this thesis suggest that emerging images have the potential to become a promising, pioneering CAPTCHA system. Emerging videos can also serve as important research material for studying Biological Motion, and as an illusion style of non-photorealistic rendering.
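Several stages of the system sketched above (the image-sampling step and the "Temporal Poisson Disk Sampling" step listed in the contents) rely on scattering sample points with blue-noise spacing, so that no two samples fall closer than a minimum radius. The following is a minimal 2-D sketch of the classic dart-throwing Poisson-disk sampler such a step typically builds on; it is an illustrative reconstruction, not the thesis's implementation, and all function and parameter names are my own:

```python
import math
import random

def poisson_disk_sample(width, height, r, k=30, seed=None):
    """Bridson-style Poisson-disk sampling over a width x height domain.

    Returns a list of (x, y) points with pairwise distance >= r.
    k is the number of candidate darts thrown per active point.
    """
    rng = random.Random(seed)
    cell = r / math.sqrt(2)  # a grid cell this size holds at most one sample
    cols = int(math.ceil(width / cell))
    rows = int(math.ceil(height / cell))
    grid = [[None] * cols for _ in range(rows)]  # acceleration grid

    def grid_pos(p):
        return int(p[1] // cell), int(p[0] // cell)

    def fits(p):
        # Only the 5x5 cell neighborhood can contain a point within r.
        gr, gc = grid_pos(p)
        for i in range(max(gr - 2, 0), min(gr + 3, rows)):
            for j in range(max(gc - 2, 0), min(gc + 3, cols)):
                q = grid[i][j]
                if q is not None and (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2 < r * r:
                    return False
        return True

    first = (rng.uniform(0, width), rng.uniform(0, height))
    points, active = [first], [first]
    gr, gc = grid_pos(first)
    grid[gr][gc] = first

    while active:
        idx = rng.randrange(len(active))
        base = active[idx]
        for _ in range(k):
            # Throw a dart into the annulus [r, 2r) around the base point.
            ang = rng.uniform(0, 2 * math.pi)
            rad = rng.uniform(r, 2 * r)
            p = (base[0] + rad * math.cos(ang), base[1] + rad * math.sin(ang))
            if 0 <= p[0] < width and 0 <= p[1] < height and fits(p):
                points.append(p)
                active.append(p)
                gr, gc = grid_pos(p)
                grid[gr][gc] = p
                break
        else:
            # No candidate fit after k tries: retire this point.
            active.pop(idx)
    return points
```

The thesis's temporal variant would additionally have to keep the minimum-distance property coherent from frame to frame; the spatial sampler above is only the per-frame building block.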
Contents
Abstract (Chinese) i
Abstract (English) ii
Acknowledgements iii
Contents iv
List of Figures vii
List of Tables ix
1 Introduction 1
1.1 Preface 1
1.2 Background 1
1.3 Motivation 5
1.4 Thesis Organization 6
2 Related Work 7
2.1 Optical Illusions Combined with Computer Graphics 7
2.2 Image and Video Object Segmentation 8
2.2.1 Image Segmentation 8
2.2.2 Video Object Segmentation 10
2.3 Video Object Tracking 10
2.4 Non-Photorealistic Rendering on Video 11
2.4.1 Region-Based Warping 11
2.4.2 Texture Synthesis on Line Segments 11
2.5 Gestalt Psychology 12
2.6 Biological Motion 14
3 Emerging Image System Implementation 15
3.1 Preprocessing 16
3.1.1 Image Resizing 16
3.1.2 Image Intensity Analysis 17
3.1.3 Edge Analysis 17
3.1.4 Superpixel Analysis 18
3.1.5 Image Sampling 19
3.2 Emerging Image Rendering 20
3.2.1 Contour Perturbation 20
3.2.2 Emerging Image Synthesis with Superpixels 22
3.2.3 Sample-Point Synthesis with Superpixels 23
3.2.4 Difficulty Level Control and Design: Background Clutter 24
3.3 Difficulty Level Control and Design 25
4 Emerging Video System Implementation 27
4.1 Preprocessing 29
4.2 Tracking Strategy 29
4.3 Line Segment Tracking on Contours 29
4.3.1 Line Segment Tracking in Emerging Videos 30
4.3.2 Tracking from the Last Frame 31
4.3.3 Line Segment Processing 31
4.3.4 Temporal Coherence Evaluation 31
4.4 Temporal Poisson Disk Sampling 34
4.4.1 Sampling Algorithm 34
4.4.2 Optimization Procedure 36
4.4.3 Temporal Poisson Disk Sampling 37
4.5 Superpixel and Background Synthesis for Emerging Videos 38
4.6 Difficulty Control for Emerging Videos 39
4.6.1 Difficulty Parameter Design for Emerging Videos 39
4.6.2 Difficulty Parameter Control for Human-Motion Emerging Videos 40
5 User Studies and Experimental Results 41
5.1 User Studies 41
5.1.1 User Study Results 42
5.2 Computer Vision Recognition 42
5.2.1 Deep-Learning Recognition Tests Trained on Emerging Images 42
5.3 Emerging Video Difficulty Validation 43
5.3.1 Recognition Tests on Still Frames of Emerging Videos 43
5.3.2 Effects of Temporal and Spatial Difficulty on Perceived Recognition Difficulty 43
5.3.3 User Recognition 44
5.3.4 Computer Vision Tracking of Emerging Videos 45
6 Results 46
6.1 Emerging Image Results 46
6.1.1 Emerging Image Result Gallery 47
6.2 Emerging Video Results 50
6.2.1 Emerging Video Result Gallery 50
7 Conclusion 53
7.1 Summary and Contributions 53
7.2 Limitations 54
7.3 Future Work 54
Bibliography 56
(Full text restricted to internal access only.)