
Detailed Record

Author (Chinese): 倪滙渝
Author (English): Ni, Hui-Yu
Title (Chinese): 朝向多樣化的活體特徵表示和跨域人臉反偽造的域擴展
Title (English): Towards Diverse Liveness Feature Representation and Domain Expansion for Cross-Domain Face Anti-Spoofing
Advisor (Chinese): 許秋婷
Advisor (English): Hsu, Chiou-Ting
Committee members (Chinese): 王聖智、林彥宇
Committee members (English): Wang, Sheng-Jyh; Lin, Yen-Yu
Degree: Master's
Institution: National Tsing Hua University
Department: Department of Computer Science
Student ID: 110062511
Publication year (ROC calendar): 112 (2023)
Graduation academic year: 111
Language: English
Pages: 31
Keywords (Chinese): 人臉防偽造、域泛化、特徵解耦學習、仿射特徵轉換、對抗學習
Keywords (English): face anti-spoofing, domain generalization, disentangled feature learning, affine feature transformation, adversarial learning
Face anti-spoofing (FAS) aims to strengthen the security of facial identity authentication by distinguishing live faces from spoof ones.
Although disentangled feature learning has achieved much success in FAS, the representation capacity of the disentangled feature space remains limited and does not extend beyond the training domains. In this thesis, we propose to further augment the disentangled liveness and domain features with a two-fold goal.
Our first goal is to enrich the diversity of liveness features so as to encompass a wide range of facial presentation attacks. The second goal is to expand the domain features toward unseen domains for better generalization. To reach these goals, we develop a Disentangled Feature Augmentation Network (DFANet) with two feature augmentation strategies: Affine Feature Transformation (AFT) and Adversarial Domain Learning (ADL).
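The abstract names AFT only at a high level. As a rough illustration, feature-wise affine augmentation is commonly realized by scaling and shifting each feature channel around the identity transform. A minimal numpy sketch under that assumption (the function name and sigma values are hypothetical, not the thesis' actual parameterization):

```python
import numpy as np

def affine_feature_transform(feat, sigma_gamma=0.3, sigma_beta=0.3, rng=None):
    # Channel-wise affine perturbation: feat' = gamma * feat + beta, with
    # gamma and beta sampled per channel around the identity transform
    # (gamma = 1, beta = 0). sigma_* control the perturbation strength.
    rng = np.random.default_rng() if rng is None else rng
    c = feat.shape[0]                               # feat: (C, H, W)
    gamma = 1.0 + sigma_gamma * rng.standard_normal((c, 1, 1))
    beta = sigma_beta * rng.standard_normal((c, 1, 1))
    return gamma * feat + beta
```

With both sigmas set to zero the transform reduces to the identity, so the augmentation strength can be annealed smoothly.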
Extensive experiments on four FAS benchmark datasets show that the proposed DFANet outperforms previous methods under most cross-domain testing protocols.
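Likewise, the abstract leaves ADL unspecified. Adversarial domain learning is often implemented with a domain classifier whose gradient is reversed before reaching the feature encoder, pushing features toward domain-invariance. A toy numpy sketch of that assumed mechanism (all names, shapes, and the weight lam are illustrative):

```python
import numpy as np

def softmax(z):
    # Row-wise softmax over domain logits.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def grl_backward(grad_feat, lam=1.0):
    # Gradient reversal: identity in the forward pass, -lam * grad backward.
    return -lam * grad_feat

rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8))    # encoder features for 4 samples
w = rng.standard_normal((8, 3))       # linear domain classifier, 3 source domains
y = np.array([0, 1, 2, 0])            # domain labels

p = softmax(feat @ w)                 # predicted domain probabilities
grad_logits = p.copy()
grad_logits[np.arange(4), y] -= 1.0   # d(cross-entropy)/d(logits)
grad_feat = grad_logits @ w.T         # gradient flowing back to the features
encoder_grad = grl_backward(grad_feat, lam=0.5)  # reversed update for encoder
```

The classifier itself is trained to predict the source domain, while the encoder receives the negated gradient, so the two play the usual adversarial min-max game.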
Abstract (Chinese) i
Abstract ii
Acknowledgements

1 Introduction 1
2 Related Work 4
2.1 Face Anti-spoofing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
2.2 Learning-based Data Augmentation . . . . . . . . . . . . . . . . . . . . . . .5
3 Method 7
3.1 Feature Disentanglement and Reconstruction . . . . . . . . . . . . . . . . . .7
3.2 Augmentation of LS Features by Affine Feature Transformation . . . . . . . .9
3.3 Augmentation of Domain Features by Adversarial Domain Learning . . . . . .11
3.4 Model Training of DFANet . . . . . . . . . . . . . . . . . . . . . . . . . . .13
3.5 Live/Spoof Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . .13
4 Experiments 14
4.1 Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .14
4.1.1 OULU-NPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15
4.1.2 MSU-MFSD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15
4.1.3 CASIA-MFSD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15
4.1.4 Replay-Attack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15
4.1.5 SiW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .16
4.1.6 3DMAD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .16
4.1.7 HKBU-MARs V1+ . . . . . . . . . . . . . . . . . . . . . . . . . . . .16
4.1.8 CASIA-3DMask . . . . . . . . . . . . . . . . . . . . . . . . . . . . .18
4.2 Evaluation Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .19
4.3 Implementation Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .19
4.4 Ablation Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .19
4.4.1 Different Combinations of Augmented LS and Domain Features . . . .20
4.4.2 t-SNE Visualization . . . . . . . . . . . . . . . . . . . . . . . . . . .20
4.4.3 Activation Visualization . . . . . . . . . . . . . . . . . . . . . . . . .22
4.5 Experimental Comparisons . . . . . . . . . . . . . . . . . . . . . . . . . . . .24
4.5.1 Cross-Domain Testing . . . . . . . . . . . . . . . . . . . . . . . . . .24
4.5.2 Cross-Domain Testing with Limited Source Domains . . . . . . . . . .25
4.5.3 Cross-Domain Testing on “Unseen” Attack Types . . . . . . . . . . .26

5 Conclusion 27
References 28