[1] W. Di, C. Wah, A. Bhardwaj, R. Piramuthu and N. Sundaresan, “Style Finder: Fine-Grained Clothing Style Recognition and Retrieval”, IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 8-13, 2013.
[2] A. Oliva and A. Torralba, “Modeling the shape of the scene: a holistic representation of the spatial envelope”, International Journal of Computer Vision (IJCV), 42(3):145-175, 2001.
[3] J. Wang, Y. Cheng and R. S. Feris, “Walk and Learn: Facial Attribute Representation Learning from Egocentric Video and Contextual Data”, IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 2295-2304, 2016.
[4] Z. Liu, P. Luo, X. Wang and X. Tang, “Deep Learning Face Attributes in the Wild”, IEEE International Conf. on Computer Vision (ICCV), pp. 3730-3738, 2015.
[5] H. Zhang, S. Liu, C. Zhang, W. Ren, R. Wang and X. Cao, “SketchNet: Sketch Classification with Web Images”, IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 1105-1113, 2016.
[6] Q. Yu, F. Liu, Y.-Z. Song, T. Xiang, T. M. Hospedales and C. C. Loy, “Sketch Me That Shoe”, IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 799-807, 2016.
[7] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville and Y. Bengio, “Generative Adversarial Nets”, Advances in Neural Information Processing Systems (NIPS), 2014.
[8] M. Mirza and S. Osindero, “Conditional Generative Adversarial Nets”, arXiv:1411.1784, 2014.
[9] P. Isola, J.-Y. Zhu, T. Zhou and A. A. Efros, “Image-to-Image Translation with Conditional Adversarial Networks”, IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 5967-5976, 2017.
[10] Y. Choi, M. Choi, M. Kim, J.-W. Ha, S. Kim and J. Choo, “StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation”, IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2018.
[11] C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang and W. Shi, “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network”, IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 105-114, 2017.
[12] S. Iizuka, E. Simo-Serra and H. Ishikawa, “Globally and Locally Consistent Image Completion”, ACM Trans. on Graphics (Proceedings of SIGGRAPH), pp. 107:1-107:14, 2017.
[13] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele and H. Lee, “Generative Adversarial Text to Image Synthesis”, International Conf. on Machine Learning (ICML), 2016.
[14] D. Pathak, P. Krähenbühl, J. Donahue, T. Darrell and A. A. Efros, “Context Encoders: Feature Learning by Inpainting”, IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 2536-2544, 2016.
[15] C. Yang, X. Lu, Z. Lin, E. Shechtman, O. Wang and H. Li, “High-Resolution Image Inpainting using Multi-Scale Neural Patch Synthesis”, IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 4076-4084, 2017.
[16] A. Radford, L. Metz and S. Chintala, “Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks”, arXiv:1511.06434, 2015.
[17] S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift”, arXiv:1502.03167, 2015.
[18] O. Ronneberger, P. Fischer and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation”, Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 234-241, Springer, 2015.
[19] L. Tran, X. Yin and X. Liu, “Disentangled Representation Learning GAN for Pose-Invariant Face Recognition”, IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 1283-1292, 2017.
[20] A. Odena, “Semi-Supervised Learning with Generative Adversarial Networks”, arXiv:1606.01583, 2016.
[21] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford and X. Chen, “Improved Techniques for Training GANs”, Advances in Neural Information Processing Systems (NIPS), 2016.
[22] A. Yu and K. Grauman, “Fine-Grained Visual Comparisons with Local Learning”, IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 192-199, 2014.
[23] A. Yu and K. Grauman, “Semantic Jitter: Dense Supervision for Visual Comparisons via Synthetic Images”, IEEE International Conf. on Computer Vision (ICCV), pp. 5571-5580, 2017.
[24] J. Canny, “A Computational Approach to Edge Detection”, IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI), pp. 679-698, 1986.
[25] D. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization”, International Conf. on Learning Representations (ICLR), 2015.
[26] Q. Yu, Y. Yang, Y.-Z. Song, T. Xiang and T. Hospedales, “Sketch-a-Net that Beats Humans”, British Machine Vision Conference (BMVC), 2015.
[27] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li and F.-F. Li, “ImageNet: A Large-Scale Hierarchical Image Database”, IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 248-255, 2009.
[28] N. Dalal and B. Triggs, “Histograms of Oriented Gradients for Human Detection”, IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 886-893, 2005.
[29] A. Vedaldi and B. Fulkerson, “VLFeat: An Open and Portable Library of Computer Vision Algorithms”, http://www.vlfeat.org/, 2008.