
Detailed Record

Author (Chinese): 張震宇
Author (English): Jhang, Jhen-Yu
Title (Chinese): 逐類別基於對抗式學習之半監督式學習
Title (English): Class-wise GAN-based Semi-Supervised Learning
Advisor (Chinese): 林嘉文
Advisor (English): Lin, Chia-Wen
Committee Members (Chinese): 鄭旭詠、康立威、黃敬群
Committee Members (English): Cheng, Hsu-Yung; Kang, Li-Wei; Huang, Ching-Chun
Degree: Master's
University: 國立清華大學 (National Tsing Hua University)
Department: 電機工程學系 (Department of Electrical Engineering)
Student ID: 105061516
Year of Publication (ROC calendar): 108 (2019)
Academic Year of Graduation: 107
Language: English
Number of Pages: 32
Keywords (Chinese): 半監督學習、產生器、自動編碼器、對抗式網路
Keywords (English): semi-supervised learning; generative models; auto-encoders; generative adversarial networks
Statistics:
  • Recommendations: 0
  • Views: 292
  • Rating: *****
  • Downloads: 17
  • Bookmarks: 0
Abstract (Chinese):
With the development of deep learning, most successful results use supervised learning to extract features from fully labeled data; this approach has matured and can be applied to data mining in many areas. Because of resource constraints, training deep models with limited labeled data has become a challenge. When the amount of available labeled data is limited, semi-supervised learning is a well-known means of improving model quality through large amounts of unlabeled data.
Generative adversarial networks (GANs) are good at generating images, and the developments of recent years have made their results increasingly usable. Adversarial training has been shown to improve classifier performance in semi-supervised learning. However, in existing methods the generator trains slowly, so other kinds of methods cannot be incorporated into the training. Moreover, because the generator's input is noise, the generated results are completely uncontrollable. As a result, the images that can be fed in to help train the classifier are essentially noise images, and the improvement to classifier performance is very limited.
In this thesis, we propose a class-wise GAN-based semi-supervised learning algorithm. First, to increase its extensibility, we adjust the GAN architecture; second, to improve the quality of the generated images, we use class-wise generators. Experimental results show our advantage over previous methods.
Abstract (English):
Deep learning has developed rapidly in recent years. Most of its successes are built on supervised learning methods, which require large amounts of labeled data, so training models with only limited labeled data remains a challenge. Semi-supervised learning is a well-known technique for boosting model performance with large amounts of unlabeled data when the available labeled data are limited.
Generative adversarial networks (GANs) are well suited to generating images, and recent advances in GANs have made their outputs increasingly realistic. Adversarial training has been shown to improve classifier performance in semi-supervised learning. However, existing methods cannot take advantage of other kinds of techniques because the generator's training step is too slow. Moreover, because the generator's inputs are random vectors, its outputs are completely uncontrollable. As a result, the images the classifier receives from the generator are essentially noise, and the resulting performance gain is limited.
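For background, the existing GAN-based semi-supervised methods referred to above typically train the classifier over the K real classes plus an extra (K+1)-th "fake" class, with a loss of roughly the following standard form (stated here only as context; the notation $K$, $p_{\text{model}}$, and $G$ is introduced for this sketch and is not taken from the thesis):

$$L_{\text{sup}} = -\,\mathbb{E}_{(x,y)\sim p_{\text{data}}}\big[\log p_{\text{model}}(y \mid x,\ y \le K)\big]$$
$$L_{\text{unsup}} = -\,\mathbb{E}_{x\sim p_{\text{data}}}\big[\log\big(1 - p_{\text{model}}(y = K{+}1 \mid x)\big)\big] \;-\; \mathbb{E}_{x\sim G}\big[\log p_{\text{model}}(y = K{+}1 \mid x)\big]$$

The classifier minimizes $L_{\text{sup}} + L_{\text{unsup}}$, so unlabeled real images only need to be pushed away from the fake class, while the generator is trained adversarially against it.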
In this thesis, we propose a class-wise GAN-based algorithm for semi-supervised learning. First, to make the framework extensible within semi-supervised learning, we adjust the GAN architecture. Second, to improve the quality of the generated images, we use class-wise generators. The experiments demonstrate the advantage of our proposed method over previous works.
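As an illustration of the architecture named in the table of contents below (a weight-shared encoder feeding class-wise decoders), here is a minimal PyTorch sketch; the module names, layer sizes, and the z_dim / num_classes values are assumptions made for this example, not the thesis's actual implementation.

import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    # Encoder whose weights are shared across all classes (cf. Section 3.2).
    def __init__(self, z_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1),    # 32x32 -> 16x16
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 16x16 -> 8x8
            nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, z_dim),
        )

    def forward(self, x):
        return self.net(x)  # one latent code per image, shared by every decoder

class ClassWiseDecoders(nn.Module):
    # One decoder per class, so each class has its own generator head (cf. Section 3.3).
    def __init__(self, num_classes=10, z_dim=128):
        super().__init__()
        self.decoders = nn.ModuleList([
            nn.Sequential(
                nn.Linear(z_dim, 128 * 8 * 8),
                nn.Unflatten(1, (128, 8, 8)),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8x8 -> 16x16
                nn.ReLU(),
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),    # 16x16 -> 32x32
                nn.Tanh(),
            )
            for _ in range(num_classes)
        ])

    def forward(self, z, class_idx):
        return self.decoders[class_idx](z)  # images generated for the requested class

# Usage on CIFAR-10-sized (3x32x32) images:
encoder = SharedEncoder()
decoders = ClassWiseDecoders(num_classes=10)
x = torch.randn(4, 3, 32, 32)   # a batch of (possibly unlabeled) images
z = encoder(x)                  # shared latent representation
fake_class3 = decoders(z, 3)    # class-conditional generation for class 3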
Abstract (Chinese) ii
Abstract iii
Contents iv
Chapter 1 Introduction 6
Chapter 2 Related Work 8
2.1 Teacher-student method 8
2.2 GAN-based method 8
Chapter 3 Proposed method 10
3.1 Overview 10
3.2 Weight-shared Encoder 11
3.3 Class-wise Decoders 11
3.4 Extended Techniques 12
3.5 Loss Functions 13
Chapter 4 Experiments and Discussion 18
4.1 Datasets 18
4.2 Implementation Detail 19
4.3 Performance Evaluation 20
4.4 CIFAR-10 Stability 20
4.5 Visualization 21
Chapter 5 Conclusion 30
References 31