[1] Curtis L. Baker and Oliver J. Braddick. The basis of area and dot number effects in random dot motion perception. Vision Research, pages 1253–1259, 1982.
[2] Niloy J. Mitra, Hung-Kuo Chu, Tong-Yee Lee, Lior Wolf, Hezy Yeshurun, and Daniel Cohen-Or. Emerging images. ACM Trans. Graph. (Proc. SIGGRAPH Asia), 28(5):163:1–163:8, 2009.
[3] Craig S. Kaplan and David H. Salesin. Escherization, 2000.
[4] Ran Gal, Olga Sorkine, Tiberiu Popa, Alla Sheffer, and Daniel Cohen-Or. 3D collage: Expressive non-realistic modeling. In Proceedings of NPAR 2007, pages 7–14, 2007.
[5] Jong-Chul Yoon, In-Kwon Lee, and Henry Kang. A hidden-picture puzzles generator, 2008.
[6] Antonio Torralba. Hybrid images. ACM Transactions on Graphics (TOG), 25, 2006.
[7] Ming-Te Chi, Tong-Yee Lee, Yingge Qu, and Tien-Tsin Wong. Self-animating images: Illusory motion using repeated asymmetric patterns. ACM Transactions on Graphics, 27(3), 2008.
[8] Niloy J. Mitra and Mark Pauly. Shadow art. ACM Trans. Graph., 28(5):156:1–156:7, December 2009.
[9] A. M. Treisman and G. Gelade. A feature-integration theory of attention. Cognitive Psychology, 12:97–136, 1980.
[10] Hung-Kuo Chu, Wei-Hsin Hsu, Niloy J. Mitra, Daniel Cohen-Or, Tien-Tsin Wong, and Tong-Yee Lee. Camouflage images. ACM Trans. Graph. (Proc. SIGGRAPH), 29(4):51:1–51:8, 2010.
[11] Carsten Rother, Vladimir Kolmogorov, and Andrew Blake. GrabCut: Interactive foreground extraction using iterated graph cuts. ACM Trans. Graph., pages 309–314, 2004.
[12] M. M. Cheng, N. J. Mitra, X. Huang, P. H. S. Torr, and S. M. Hu. Global contrast based salient region detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(3):569–582, March 2015.
[13] Radhakrishna Achanta, Appu Shaji, Kevin Smith, Aurelien Lucchi, Pascal Fua, and Sabine Süsstrunk. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell., 34(11):2274–2282, November 2012.
[14] M. Grundmann, V. Kwatra, M. Han, and I. Essa. Efficient hierarchical graph-based video segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010.
[15] Jason Chang, Donglai Wei, and John W. Fisher III. A video representation using temporal superpixels, 2013.
[16] Berthold K. P. Horn and Brian G. Schunck. Determining optical flow, 1981.
[17] Anton Andriyenko, Konrad Schindler, and Stefan Roth. Discrete-continuous optimization for multi-target tracking. In CVPR, 2012.
[18] Jerome Revaud, Zaid Harchaoui, and Cordelia Schmid. EpicFlow: Edge-preserving interpolation of correspondences for optical flow, 2015.
[19] Liang Lin, Kun Zeng, Han Lv, Yizhou Wang, Yingqing Xu, and Song-Chun Zhu. Painterly animation using video semantics and feature correspondence. In Proceedings of the 8th International Symposium on Non-Photorealistic Animation and Rendering, NPAR '10, New York, NY, USA. ACM, 2010.
[20] Tinghuai Wang, John Collomosse, David Slatter, Phil Cheatle, and Darryl Greig. Video stylization for digital ambient displays of home movies. In Proceedings of the 8th International Symposium on Non-Photorealistic Animation and Rendering, NPAR '10. ACM, 2010.
[21] Daniel Sýkora, Mirela Ben-Chen, Martin Čadík, Brian Whited, and Maryann Simmons. TexToons: Practical texture mapping for hand-drawn cartoon animations. In Proceedings of the International Symposium on Non-Photorealistic Animation and Rendering, pages 75–83, 2011.
[22] Robert D. Kalnins, Philip L. Davidson, Lee Markosian, and Adam Finkelstein. Coherent stylized silhouettes. ACM Trans. Graph., July 2003.
[23] Pierre Bénard, Forrester Cole, Aleksey Golovinskiy, and Adam Finkelstein. Self-similar texture for coherent line stylization. In NPAR 2010: Proceedings of the 8th International Symposium on Non-Photorealistic Animation and Rendering, June 2010.
[24] Pierre Bénard, Jingwan Lu, Forrester Cole, Adam Finkelstein, and Joëlle Thollot. Active strokes: Coherent line stylization for animated 3D models. In Proceedings of the Symposium on Non-Photorealistic Animation and Rendering, NPAR '12. Eurographics Association, 2012.
[25] Nir Ben-Zvi, Jose Bento, Moshe Mahler, Jessica Hodgins, and Ariel Shamir. Line-drawing video stylization. Computer Graphics Forum, 2015.
[26] S. M. Anstis. Phi movement as a subtraction process. Vision Research, 1970.
[27] S. M. Anstis. The perception of apparent motion. Scientific American, 1986.
[28] Sang-Hun Lee and Randolph Blake. Detection of temporal structure depends on spatial structure. Vision Research, 1999.
[29] Gunnar Johansson. Visual perception of biological motion and a model for its analysis. Perception and Psychophysics, 1973.
[30] Vicki Ahlström, Randolph Blake, and Ulf Ahlström. Perception of biological motion. Perception, 1997.
[31] Emily D. Grossman and Randolph Blake. Perception of coherent motion, biological motion and form-from-motion under dim-light conditions. Vision Research, 1999.
[32] Emily D. Grossman, Lorella Battelli, and Alvaro Pascual-Leone. Repetitive TMS over posterior STS disrupts perception of biological motion. Vision Research, 2005.
[33] Jan Eric Kyprianidis and Jürgen Döllner. Image abstraction by structure adaptive filtering. In Poster at the 6th Symposium on Non-Photorealistic Animation and Rendering (NPAR), 2008.
[34] Robert Bridson. Fast Poisson disk sampling in arbitrary dimensions. In ACM SIGGRAPH 2007 Sketches, SIGGRAPH '07. ACM, 2007. ISBN 978-1-4503-4726-6. doi: 10.1145/1278780.1278807.
[35] Jing Liao, Mark Finch, and Hugues Hoppe. Fast computation of seamless video loops. ACM Trans. Graph., 34(6):197:1–197:10, October 2015.
[36] Dorita H. F. Chang and Nikolaus F. Troje. Characterizing global and local mechanisms in biological motion perception. Journal of Vision, 9(5):8–8, 2009.
[37] Benjamin Watson, Alinda Friedman, and Aaron McGaffey. Measuring and predicting visual fidelity. In SIGGRAPH '01, pages 213–220. ACM, 2001.
[38] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[39] Chao Ma, Jia-Bin Huang, Xiaokang Yang, and Ming-Hsuan Yang. Hierarchical convolutional features for visual tracking. In Proceedings of the IEEE International Conference on Computer Vision, 2015.