[1] G. Little, S. Krishna, J. Black, and S. Panchanathan, “A methodology for evaluating robustness of face recognition algorithms with respect to variations in pose angle and illumination angle,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP ’05), vol. 2, pp. ii–89, IEEE, 2005.
[2] F. Schroff, D. Kalenichenko, and J. Philbin, “Facenet: A unified embedding for face recognition and clustering,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 815–823, 2015.
[3] Y. Wen, K. Zhang, Z. Li, and Y. Qiao, “A discriminative feature learning approach for deep face recognition,” in European Conference on Computer Vision, pp. 499–515, Springer, 2016.
[4] W. Liu, Y. Wen, Z. Yu, and M. Yang, “Large-margin softmax loss for convolutional neural networks,” in ICML, vol. 2, p. 7, 2016.
[5] W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, and L. Song, “Sphereface: Deep hypersphere embedding for face recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 212–220, 2017.
[6] H. Wang, Y. Wang, Z. Zhou, X. Ji, D. Gong, J. Zhou, Z. Li, and W. Liu, “Cosface: Large margin cosine loss for deep face recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5265–5274, 2018.
[7] J. Deng, J. Guo, N. Xue, and S. Zafeiriou, “Arcface: Additive angular margin loss for deep face recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4690–4699, 2019.
[8] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, “Mobilenetv2: Inverted residuals and linear bottlenecks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4510–4520, 2018.
[9] M. Mehdipour Ghazi and H. Kemal Ekenel, “A comprehensive analysis of deep learning based representation for face recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 34–41, 2016.
[10] W. W. Bledsoe, “The model method in facial recognition,” Panoramic Research Inc., Palo Alto, CA, Rep. PR1, vol. 15, no. 47, p. 2, 1966.
[11] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, “Deepface: Closing the gap to human-level performance in face verification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1701–1708, 2014.
[12] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
[13] M. Z. Alom, T. M. Taha, C. Yakopcic, S. Westberg, P. Sidike, M. S. Nasrin, B. C. Van Esesn, A. A. S. Awwal, and V. K. Asari, “The history began from alexnet: A comprehensive survey on deep learning approaches,” arXiv preprint arXiv:1803.01164, 2018.
[14] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
[15] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
[16] K. He, X. Zhang, S. Ren, and J. Sun, “Identity mappings in deep residual networks,” in European Conference on Computer Vision, pp. 630–645, Springer, 2016.
[17] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He, “Aggregated residual transformations for deep neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1492–1500, 2017.
[18] H. Zhang, C. Wu, Z. Zhang, Y. Zhu, H. Lin, Z. Zhang, Y. Sun, T. He, J. Mueller, R. Manmatha, et al., “Resnest: Split-attention networks,” arXiv preprint arXiv:2004.08955, 2020.
[19] A. K. Jain, P. Flynn, and A. A. Ross, Handbook of Biometrics. Springer Science & Business Media, 2007.
[20] P. J. Grother, M. L. Ngan, and G. W. Quinn, “Face in video evaluation (FIVE): Face recognition of non-cooperative subjects,” tech. rep., National Institute of Standards and Technology, 2017.
[21] T. Berg and P. Belhumeur, “Poof: Part-based one-vs.-one features for fine-grained categorization, face verification, and attribute estimation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 955–962, 2013.
[22] J. Krause, T. Gebru, J. Deng, L.-J. Li, and L. Fei-Fei, “Learning features and parts for fine-grained recognition,” in 2014 22nd International Conference on Pattern Recognition, pp. 26–33, IEEE, 2014.
[23] Y. Sun, Y. Chen, X. Wang, and X. Tang, “Deep learning face representation by joint identification-verification,” in Advances in Neural Information Processing Systems, pp. 1988–1996, 2014.
[24] Y. Sun, X. Wang, and X. Tang, “Deep learning face representation from predicting 10,000 classes,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1891–1898, 2014.
[25] Y. Sun, X. Wang, and X. Tang, “Deeply learned face representations are sparse, selective, and robust,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2892–2900, 2015.
[26] Y. Sun, D. Liang, X. Wang, and X. Tang, “Deepid3: Face recognition with very deep neural networks,” arXiv preprint arXiv:1502.00873, 2015.
[27] O. M. Parkhi, A. Vedaldi, and A. Zisserman, “Deep face recognition,” in British Machine Vision Conference, 2015.
[28] W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld, “Face recognition: A literature survey,” ACM Computing Surveys (CSUR), vol. 35, no. 4, pp. 399–458, 2003.
[29] S. Sengupta, J.-C. Chen, C. Castillo, V. M. Patel, R. Chellappa, and D. W. Jacobs, “Frontal to profile face verification in the wild,” in 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1–9, IEEE, 2016.
[30] H. Li, Z. Lin, X. Shen, J. Brandt, and G. Hua, “A convolutional neural network cascade for face detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5325–5334, 2015.
[31] R. Ranjan, V. M. Patel, and R. Chellappa, “Hyperface: A deep multi-task learning framework for face detection, landmark localization, pose estimation, and gender recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 1, pp. 121–135, 2017.
[32] H. Sadeghi, A.-A. Raie, and M.-R. Mohammadi, “Facial expression recognition using geometric normalization and appearance representation,” in 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP), pp. 159–163, IEEE, 2013.
[33] X. Zhu, X. Liu, Z. Lei, and S. Z. Li, “Face alignment in full pose range: A 3d total solution,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 1, pp. 78–92, 2017.
[34] X. Jin and X. Tan, “Face alignment in-the-wild: A survey,” Computer Vision and Image Understanding, vol. 162, pp. 1–22, 2017.
[35] Y. Wu and Q. Ji, “Facial landmark detection: A literature survey,” International Journal of Computer Vision, vol. 127, no. 2, pp. 115–142, 2019.
[36] T.-Y. Yang, Y.-T. Chen, Y.-Y. Lin, and Y.-Y. Chuang, “Fsa-net: Learning fine-grained structure aggregation for head pose estimation from a single image,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1087–1096, 2019.
[37] M. Wang and W. Deng, “Deep face recognition: A survey,” arXiv preprint arXiv:1804.06655, 2018.
[38] M. Gunther, S. Cruz, E. M. Rudd, and T. E. Boult, “Toward open-set face recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 71–80, 2017.
[39] T. Ahonen, A. Hadid, and M. Pietikäinen, “Face recognition with local binary patterns,” in European Conference on Computer Vision, pp. 469–481, Springer, 2004.
[40] C. Geng and X. Jiang, “Face recognition using sift features,” in 2009 16th IEEE International Conference on Image Processing (ICIP), pp. 3313–3316, IEEE, 2009.
[41] O. Déniz, G. Bueno, J. Salido, and F. De la Torre, “Face recognition using histograms of oriented gradients,” Pattern Recognition Letters, vol. 32, no. 12, pp. 1598–1603, 2011.
[42] D. S. Trigueros, L. Meng, and M. Hartnett, “Face recognition: From traditional to deep learning methods,” arXiv preprint arXiv:1811.00116, 2018.
[43] J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7132–7141, 2018.
[44] Y. Guo, L. Zhang, Y. Hu, X. He, and J. Gao, “Ms-celeb-1m: A dataset and benchmark for large-scale face recognition,” in European Conference on Computer Vision, pp. 87–102, Springer, 2016.
[45] V. Blanz and T. Vetter, “A morphable model for the synthesis of 3d faces,” in Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, pp. 187–194, 1999.
[46] T. Hassner, S. Harel, E. Paz, and R. Enbar, “Effective face frontalization in unconstrained images,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4295–4304, 2015.
[47] X. Zhu, Z. Lei, J. Yan, D. Yi, and S. Z. Li, “High-fidelity pose and expression normalization for face recognition in the wild,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 787–796, 2015.
[48] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
[49] R. Huang, S. Zhang, T. Li, and R. He, “Beyond face rotation: Global and local perception gan for photorealistic and identity preserving frontal view synthesis,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 2439–2448, 2017.
[50] X. Yin, X. Yu, K. Sohn, X. Liu, and M. Chandraker, “Towards large-pose face frontalization in the wild,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 3990–3999, 2017.
[51] L. Perez and J. Wang, “The effectiveness of data augmentation in image classification using deep learning,” arXiv preprint arXiv:1712.04621, 2017.
[52] C. Huang, Y. Li, C. C. Loy, and X. Tang, “Learning deep representation for imbalanced classification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5375–5384, 2016.
[53] H. Liu, X. Zhu, Z. Lei, and S. Z. Li, “Adaptiveface: Adaptive margin and sampling for face recognition,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11947–11956, 2019.
[54] J. Zhao, L. Xiong, J. Karlekar, J. Li, F. Zhao, Z. Wang, S. Pranata, S. Shen, S. Yan, and J. Feng, “Dual-agent gans for photorealistic and identity preserving profile face synthesis,” in NIPS, vol. 2, p. 3, 2017.
[55] Y. Shen, P. Luo, J. Yan, X. Wang, and X. Tang, “Faceid-gan: Learning a symmetry three-player gan for identity-preserving face synthesis,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 821–830, 2018.
[56] J. Deng, S. Cheng, N. Xue, Y. Zhou, and S. Zafeiriou, “Uv-gan: Adversarial facial uv map completion for pose-invariant face recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7093–7102, 2018.
[57] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015.
[58] L. Tran, X. Yin, and X. Liu, “Disentangled representation learning gan for pose-invariant face recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1415–1424, 2017.
[59] J. Zhao, Y. Cheng, Y. Xu, L. Xiong, J. Li, F. Zhao, K. Jayashree, S. Pranata, S. Shen, J. Xing, et al., “Towards pose invariant face recognition in the wild,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2207–2216, 2018.
[60] K. Cao, Y. Rong, C. Li, X. Tang, and C. C. Loy, “Pose-robust face recognition via deep residual equivariant mapping,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5187–5196, 2018.
[61] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
[62] J. Park, S. Woo, J. Lee, and I. S. Kweon, “BAM: Bottleneck attention module,” in British Machine Vision Conference 2018, BMVC 2018, Newcastle, UK, September 3–6, 2018, p. 147, BMVA Press, 2018.
[63] K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 1026–1034, 2015.
[64] D. Han, J. Kim, and J. Kim, “Deep pyramidal residual networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5927–5935, 2017.
[65] F. Chollet, “Xception: Deep learning with depthwise separable convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1251–1258, 2017.
[66] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “Mobilenets: Efficient convolutional neural networks for mobile vision applications,” arXiv preprint arXiv:1704.04861, 2017.
[67] X. Zhang, X. Zhou, M. Lin, and J. Sun, “Shufflenet: An extremely efficient convolutional neural network for mobile devices,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6848–6856, 2018.
[68] S. Woo, J. Park, J.-Y. Lee, and I. S. Kweon, “Cbam: Convolutional block attention module,” in Proceedings of the European Conference on Computer Vision (ECCV), pp. 3–19, 2018.
[69] G. B. Huang, M. Mattar, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: A database for studying face recognition in unconstrained environments,” in Workshop on Faces in ‘Real-Life’ Images: Detection, Alignment, and Recognition, 2008.
[70] S. Moschoglou, A. Papaioannou, C. Sagonas, J. Deng, I. Kotsia, and S. Zafeiriou, “Agedb: The first manually collected, in-the-wild age database,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 51–59, 2017.
[71] T. Zheng and W. Deng, “Cross-pose lfw: A database for studying cross-pose face recognition in unconstrained environments,” Beijing University of Posts and Telecommunications, Tech. Rep. 18-01, 2018.
[72] T. Zheng, W. Deng, and J. Hu, “Cross-age lfw: A database for studying cross-age face recognition in unconstrained environments,” arXiv preprint arXiv:1708.08197, 2017.
[73] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al., “Imagenet large scale visual recognition challenge,” International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015.
[74] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, “Automatic differentiation in pytorch,” in NIPS Workshop, 2017.
[75] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, “Pytorch: An imperative style, high-performance deep learning library,” in Advances in Neural Information Processing Systems 32 (H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. Fox, and R. Garnett, eds.), pp. 8024–8035, Curran Associates, Inc., 2019.
[76] Q. Cao, L. Shen, W. Xie, O. M. Parkhi, and A. Zisserman, “Vggface2: A dataset for recognising faces across pose and age,” in 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), pp. 67–74, IEEE, 2018.