
Detailed Record

Author (Chinese): 陳禹叡
Author (English): Chen, Yu-Jui
Thesis Title (Chinese): 學習去概括: 用於領域均化的類感知對抗式學習
Thesis Title (English): Learning to Generalize: Class-Aware Adversarial Learning for Domain Generalization
Advisor (Chinese): 許秋婷
Advisor (English): Hsu, Chiou-Ting
Committee Members (Chinese): 簡仁宗, 陳煥宗
Committee Members (English): Chien, Jen-Tzung; Chen, Hwann-Tzong
Degree: Master's
University: National Tsing Hua University
Department: Department of Computer Science
Student ID: 107062556
Publication Year: 109 (ROC calendar; 2020 CE)
Graduation Academic Year: 109
Language: English
Number of Pages: 30
Keywords (Chinese): 對抗式學習, 類感知, 元學習, 領域泛化
Keywords (English): adversarial learning, class-aware, meta-learning, domain generalization
Domain generalization aims to learn a common feature representation across multiple source domains and to generalize it to any unseen target domain. In this thesis, we focus on domain generalization for image classification and propose a novel domain generalization framework built on two cooperative ideas. First, to minimize the domain discrepancy across the multiple source domains while also enhancing class-discriminability on the unseen target domain, we adopt the well-known adversarial learning paradigm together with a novel source-learnt prior constraint. Because the prior constraint is learnt from the source domains, it fully captures the relationships between classes. Second, to further strengthen the generalization capability, we adopt meta-learning and use it to simulate the domain shift between the source domains and the unknown target domain. We first divide the source data into virtual training and virtual testing sets, and then perform style randomization on the synthesized testing domain to enlarge the domain shift. The meta-learning training procedure ensures that our model performs well on both the virtual training and testing domains, which helps the model generalize to the actual target domain. Experimental results on the PACS and VLCS datasets show that our proposed method clearly outperforms previous domain generalization methods.
Domain generalization aims to learn a common feature representation that generalizes to any unseen target domain by using data from multiple source domains. In this thesis, we focus on domain generalization for image classification and propose a novel domain generalization framework with two cooperative ideas. First, to minimize the domain discrepancy across multiple source domains as well as enhance class-discriminability on the unseen target domain, we adopt the prominent adversarial learning paradigm together with a novel source-learnt prior constraint. The prior constraint is learnt from the source data and therefore fully characterizes the between-class relationships. Second, to further improve the generalization capability, we turn to meta-learning, which we use to simulate the domain shift between the source domains and the unknown target domain. We first divide the source data into virtual training and virtual testing sets; then we perform style randomization on the synthesized testing domain to enlarge the domain shift. The meta-learning training procedure ensures that our model performs well on both the virtual training and testing domains, and thereby helps the model generalize to the actual target domain. Experimental results on the PACS and VLCS datasets show that our proposed method significantly outperforms previous domain generalization approaches.
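The episodic procedure described in the abstract (split the source domains into a virtual training and a virtual testing set, widen the shift on the virtual test side with style randomization, and update the model so it performs well on both) can be sketched as a first-order meta-learning loop. The following is a toy illustration on a linear model, not the thesis's actual implementation; all names (`meta_step`, `style_randomize`, the learning rates, the crude feature-rescaling stand-in for style randomization) are assumptions made for this sketch.

```python
import random

random.seed(0)
DIM = 3
W_TRUE = [1.0, -2.0, 0.5]  # ground-truth weights shared by all domains

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def grad(w, X, y):
    """Gradient of 0.5 * mean squared error of a linear model."""
    n = len(y)
    errs = [dot(w, x) - t for x, t in zip(X, y)]
    return [sum(e * x[j] for e, x in zip(errs, X)) / n for j in range(DIM)]

def style_randomize(X, strength=0.3):
    """Crude stand-in for style randomization: randomly rescale
    per-feature statistics to enlarge the virtual domain shift."""
    scale = [1.0 + strength * random.gauss(0, 1) for _ in range(DIM)]
    return [[s * v for s, v in zip(scale, x)] for x in X]

def meta_step(w, domains, inner_lr=0.05, outer_lr=0.05):
    """One episode: hold out one source domain as the virtual test set,
    take an inner step on the virtual training domains, then combine
    train and (style-randomized) test gradients for the outer update."""
    doms = domains[:]
    random.shuffle(doms)
    train, test = doms[:-1], doms[-1:]
    g_train = [0.0] * DIM
    for X, y in train:
        g = grad(w, X, y)
        g_train = [a + b / len(train) for a, b in zip(g_train, g)]
    w_inner = [a - inner_lr * b for a, b in zip(w, g_train)]  # adapted params
    g_test = [0.0] * DIM
    for X, y in test:
        g = grad(w_inner, style_randomize(X), y)
        g_test = [a + b / len(test) for a, b in zip(g_test, g)]
    return [a - outer_lr * (b + c) for a, b, c in zip(w, g_train, g_test)]

def make_domain(n=64):
    """Synthetic source domain: same W_TRUE, domain-specific feature scale."""
    sigma = 1.0 + random.random()
    X = [[random.gauss(0, sigma) for _ in range(DIM)] for _ in range(n)]
    y = [dot(W_TRUE, x) + 0.1 * random.gauss(0, 1) for x in X]
    return X, y

domains = [make_domain() for _ in range(3)]
w = [0.0] * DIM
for _ in range(300):
    w = meta_step(w, domains)
dist = sum((a - b) ** 2 for a, b in zip(w, W_TRUE)) ** 0.5
print(round(dist, 2))  # distance to the shared ground-truth weights
```

Because the outer update sums the virtual-train gradient with the virtual-test gradient evaluated at the adapted parameters, the model is pushed toward weights that work both before and after the simulated shift, which is the intuition behind the meta-learning component.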
Acknowledgements
摘要 (Abstract in Chinese)
Abstract
1 Introduction
2 Related Work
2.1 Domain Generalization
2.2 Adversarial Learning
2.3 Meta-learning
2.4 Data Augmentation
3 Proposed Method
3.1 Problem Statement
3.2 Motivations
3.3 CAAL Model
3.4 CAAL-MLDA Model
4 Experiments
4.1 Cross-domain Datasets and Settings
4.2 Implementation Details
4.3 Ablation Study
4.4 Comparison with Existing Methods
4.4.1 Classification on PACS Dataset
4.4.2 Classification on VLCS Dataset
4.5 Visualizations
5 Conclusion
References
 
 
 
 