[1] Bansal, A., Nanduri, A., Castillo, C. D., Ranjan, R., and Chellappa, R. UMDFaces: An annotated face dataset for training deep networks. arXiv preprint arXiv:1611.01484v2 (2016).
[2] Bao, J., Chen, D., Wen, F., Li, H., and Hua, G. Towards open-set identity preserving face synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 6713–6722.
[3] Chang, H., Lu, J., Yu, F., and Finkelstein, A. PairedCycleGAN: Asymmetric style transfer for applying and removing makeup. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 40–48.
[4] Chen, S., Liu, Y., Gao, X., and Han, Z. MobileFaceNets: Efficient CNNs for accurate real-time face verification on mobile devices. In Chinese Conference on Biometric Recognition (2018), Springer, pp. 428–438.
[5] Choi, Y., Choi, M., Kim, M., Ha, J.-W., Kim, S., and Choo, J. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 8789–8797.
[6] Deng, J., Guo, J., Xue, N., and Zafeiriou, S. ArcFace: Additive angular margin loss for deep face recognition. arXiv preprint arXiv:1801.07698 (2018).
[7] Gao, R., and Grauman, K. On-demand learning for deep image restoration. In Proceedings of the IEEE International Conference on Computer Vision (2017).
[8] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems (2014), pp. 2672–2680.
[9] Guo, J., Zhu, X., Lei, Z., and Li, S. Z. Face synthesis for eyeglass-robust face recognition. In Chinese Conference on Biometric Recognition (2018), Springer, pp. 275–284.
[10] He, Z., Zuo, W., Kan, M., Shan, S., and Chen, X. Arbitrary facial attribute editing: Only change what you want. arXiv preprint arXiv:1711.10678 (2017).
[11] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems (2017), pp. 6626–6637.
[12] Huang, G. B., Ramesh, M., Berg, T., and Learned-Miller, E. Labeled Faces in the Wild: A database for studying face recognition in unconstrained environments. Tech. Rep. 07-49, University of Massachusetts, Amherst, October 2007.
[13] Iizuka, S., Simo-Serra, E., and Ishikawa, H. Globally and locally consistent image completion. ACM Transactions on Graphics (ToG) 36, 4 (2017), 107.
[14] Isola, P., Zhu, J.-Y., Zhou, T., and Efros, A. A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 1125–1134.
[15] Jo, Y., and Park, J. SC-FEGAN: Face editing generative adversarial network with user's sketch and color. arXiv preprint arXiv:1902.06838 (2019).
[16] Kemelmacher-Shlizerman, I., Seitz, S. M., Miller, D., and Brossard, E. The MegaFace benchmark: 1 million faces for recognition at scale. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 4873–4882.
[17] Kingma, D. P., and Welling, M. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114 (2013).
[18] Lee, C.-H., Liu, Z., Wu, L., and Luo, P. MaskGAN: Towards diverse and interactive facial image manipulation. Technical Report (2019).
[19] Li, M., Zuo, W., and Zhang, D. Deep identity-aware transfer of facial attributes. arXiv preprint arXiv:1610.05586 (2016).
[20] Liu, G., Reda, F. A., Shih, K. J., Wang, T.-C., Tao, A., and Catanzaro, B. Image inpainting for irregular holes using partial convolutions. In Proceedings of the European Conference on Computer Vision (ECCV) (2018), pp. 85–100.
[21] Liu, Z., Luo, P., Wang, X., and Tang, X. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision (2015), pp. 3730–3738.
[22] Mao, X., Li, Q., Xie, H., Lau, R. Y., Wang, Z., and Paul Smolley, S. Least squares generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (2017), pp. 2794–2802.
[23] Pathak, D., Krahenbuhl, P., Donahue, J., Darrell, T., and Efros, A. A. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 2536–2544.
[24] Ronneberger, O., Fischer, P., and Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (2015), Springer, pp. 234–241.
[25] Sangkloy, P., Lu, J., Fang, C., Yu, F., and Hays, J. Scribbler: Controlling deep image synthesis with sketch and color. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 5400–5409.
[26] Shen, W., and Liu, R. Learning residual images for face attribute manipulation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 4030–4038.
[27] Upchurch, P., Gardner, J., Pleiss, G., Pless, R., Snavely, N., Bala, K., and Weinberger, K. Deep feature interpolation for image content changes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 7064–7073.
[28] Wu, C., Liu, C., Shum, H.-Y., Xu, Y.-Q., and Zhang, Z. Automatic eyeglasses removal from face images. IEEE Transactions on Pattern Analysis and Machine Intelligence 26, 3 (2004), 322–336.
[29] Xiao, T., Hong, J., and Ma, J. ELEGANT: Exchanging latent encodings with GAN for transferring multiple face attributes. In Proceedings of the European Conference on Computer Vision (ECCV) (2018), pp. 168–184.
[30] Yang, C., Lu, X., Lin, Z., Shechtman, E., Wang, O., and Li, H. High-resolution image inpainting using multi-scale neural patch synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 6721–6729.
[31] Yu, C., Wang, J., Peng, C., Gao, C., Yu, G., and Sang, N. BiSeNet: Bilateral segmentation network for real-time semantic segmentation. In Proceedings of the European Conference on Computer Vision (ECCV) (2018), pp. 325–341.
[32] Yu, J., Lin, Z., Yang, J., Shen, X., Lu, X., and Huang, T. S. Free-form image inpainting with gated convolution. arXiv preprint arXiv:1806.03589 (2018).
[33] Yu, J., Lin, Z., Yang, J., Shen, X., Lu, X., and Huang, T. S. Generative image inpainting with contextual attention. arXiv preprint arXiv:1801.07892 (2018).
[34] Zhang, G., Kan, M., Shan, S., and Chen, X. Generative adversarial network with spatial attention for face attribute editing. In Proceedings of the European Conference on Computer Vision (ECCV) (2018), pp. 417–432.
[35] Zhang, L., Ji, Y., Lin, X., and Liu, C. Style transfer for anime sketches with enhanced residual U-Net and auxiliary classifier GAN. In 2017 4th IAPR Asian Conference on Pattern Recognition (ACPR) (2017), IEEE, pp. 506–511.
[36] Zhang, R., Zhu, J.-Y., Isola, P., Geng, X., Lin, A. S., Yu, T., and Efros, A. A. Real-time user-guided image colorization with learned deep priors. arXiv preprint arXiv:1705.02999 (2017).
[37] Zhu, J.-Y., Park, T., Isola, P., and Efros, A. A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (2017), pp. 2223–2232.