
Detailed Record

Author (Chinese): 鄭謙
Author (English): Jeng, Cheng
Title (Chinese): 運用矩陣因子分解與對偶關聯網絡之主動式學習
Title (English): Active Learning Using Dual Association Networks and Matrix Factorization
Advisor (Chinese): 蘇豐文
Advisor (English): Soo, Von-Wun
Committee Members (Chinese): 陳朝欽, 陳宜欣
Committee Members (English): Chen, Chaur-Chin; Chen, Yi-Shin
Degree: Master's
University: National Tsing Hua University
Department: Department of Computer Science
Student ID: 104062585
Publication Year (ROC): 107 (2018)
Graduation Academic Year: 106
Language: English
Number of Pages: 43
Keywords (Chinese): 主動式學習, 關聯性學習, 關聯網絡, 矩陣因子分解
Keywords (English): Active Learning, Association Learning, Association Network, Non-negative Matrix Factorization
Abstract (Chinese):
Despite its progress, artificial intelligence is still far from simulating human thinking, and one of the major obstacles is common sense reasoning. The human ability to build links between objects or concepts through everyday experience remains beyond the reach of machine learning. Association learning attempts to simulate this process, which is regarded as one of the fundamental abilities underlying common sense reasoning. Even with a model, however, training data in this domain usually has to be labelled by hand, which makes data collection costly. The main goal of this thesis is to use active learning to exploit such information more efficiently and to reduce the amount of data required to train an association network model.
Our proposed method builds on an association network based on non-negative matrix factorization, extending the original model, which learned only positive associations, into a dual network of positive and negative associations. This change allows the model to distinguish "unknown" from "confirmed to have no association (negative association)", two cases that were previously treated as identical. It not only benefits plain association learning but also greatly increases the amount of information gained from newly added data during the active learning stage.
Experimental results show that, at the same recall, the modified method improves precision by 2% over the original model in association learning. Active learning widens this gap further. Among the four selection strategies tested, random (no active learning, as the control), conflict, uncertainty, and hybrid (considering conflict and uncertainty together), all three proposed strategies outperform the control, and in both domains the hybrid strategy achieves up to twice the precision gain of random selection.
Abstract (English):
Artificial intelligence has come a long way, but how close are we really to creating a machine that thinks and acts like a human? Common sense reasoning is one of the major obstacles: it is difficult for a computer to establish the kinds of associations that the human mind forms intuitively and implicitly. Association learning builds a model that simulates the acquisition of associations from examples, an ability often believed to be one of the foundations of human common sense. However, labelled association data is still scarce and usually has to be annotated manually, which makes it costly. Our research investigates active learning for common sense association reasoning, asking how to limit the amount of data needed to train an association network model effectively.
We propose a dual network with positive and negative associations, based on a bipartite NMF association model. We extend the previously all-positive network model to include a negative network, which makes it possible to distinguish an unknown association pair from a pair that is verified to have no association. This distinction is crucial to effective active learning, since it allows the model to extract more information from the answers provided by consulting an oracle.
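The record itself contains no code; the following is a minimal sketch, assuming a simple dual factorization with scikit-learn's NMF, of how a positive and a negative association matrix could each be factorized and then combined for prediction. The toy matrices `pos` and `neg`, the rank, and the threshold `theta` are illustrative assumptions, not the thesis's exact bipartite formulation.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Hypothetical toy data: rows = concepts (e.g. goals), columns = candidate items (e.g. actions).
# 1 in `pos` marks a verified positive association,
# 1 in `neg` marks a pair verified to have no association,
# 0 in both means the pair is still unknown.
pos = rng.integers(0, 2, size=(20, 15)).astype(float)
neg = ((pos == 0) & (rng.random((20, 15)) < 0.3)).astype(float)

# Two independent low-rank factorizations, one per network.
pos_model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
neg_model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
pos_hat = pos_model.fit_transform(pos) @ pos_model.components_  # reconstructed positive scores
neg_hat = neg_model.fit_transform(neg) @ neg_model.components_  # reconstructed negative scores

# Predict an association only when the positive score clearly dominates the negative one;
# theta is a tunable threshold (compare the theta value studied in Section 3.2.1).
theta = 0.5
predicted = (pos_hat - neg_hat) > theta
```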
We show that the extended model yields a 2% improvement over the previous bipartite model before active learning is applied, using the same evaluation methods as the previous work. This improvement can be further amplified by active learning. We tested four selection strategies: random (as the baseline), conflict, uncertainty, and hybrid, and found that each of the three proposed strategies achieved a larger precision gain than the baseline, with the hybrid strategy reaching up to twice the precision improvement of random selection.
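The precise definitions of the conflict, uncertainty, and hybrid scores are not reproduced in this record; as a rough illustration only, the sketch below shows one plausible way such selection strategies could be implemented on top of the dual model's reconstructed score matrices. The function names, the sigmoid-based uncertainty measure, and the additive hybrid combination are all assumptions for illustration.

```python
import numpy as np

def uncertainty_scores(pos_hat, neg_hat):
    # Most uncertain = the two reconstructed scores are nearly tied,
    # i.e. the pair sits close to the decision boundary.
    p = 1.0 / (1.0 + np.exp(-(pos_hat - neg_hat)))  # squash the score gap into (0, 1)
    return -np.abs(p - 0.5)                         # higher = more uncertain

def conflict_scores(pos_hat, neg_hat):
    # Most conflicting = both networks give the same pair a high score.
    return np.minimum(pos_hat, neg_hat)

def select_queries(pos_hat, neg_hat, unknown_mask, k=10, strategy="hybrid", rng=None):
    """Pick k still-unknown pairs to send to the oracle for labelling."""
    if rng is None:
        rng = np.random.default_rng()
    if strategy == "random":            # baseline: no active learning signal
        scores = rng.random(pos_hat.shape)
    elif strategy == "uncertainty":
        scores = uncertainty_scores(pos_hat, neg_hat)
    elif strategy == "conflict":
        scores = conflict_scores(pos_hat, neg_hat)
    else:                               # "hybrid": consider both signals together
        scores = uncertainty_scores(pos_hat, neg_hat) + conflict_scores(pos_hat, neg_hat)
    scores = np.where(unknown_mask, scores, -np.inf)  # never query already-labelled pairs
    top = np.argsort(scores, axis=None)[::-1][:k]
    return np.unravel_index(top, scores.shape)        # (row indices, column indices)
```

In a full active learning loop, the selected pairs would be labelled by the oracle, written back into the positive or negative matrix accordingly, and both factorizations refit before the next selection round.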
Table of Contents:
Abstract (Chinese) -I
Abstract -II
Table of Contents -III
List of Figures -V
List of Tables -VI
1 Introduction -1
2 Methodology -4
2.1 Framework -4
2.2 Objectives -5
2.2.1 Association Learning Objectives -5
2.2.2 Active Learning Objectives -7
2.3 Methods -8
2.3.1 Association Learning Methods -8
2.3.2 Active Learning Methods -9
3 Experiments -15
3.1 Data Set -15
3.2 Association Learning Experiments -17
3.2.1 θ Value -17
3.2.2 Evaluation against previous method -18
3.3 Active Learning Experiments -19
3.3.1 Selection Strategy -19
3.3.2 Sampling Size -22
3.4 Goal-Action Domain -24
4 Discussion, Conclusion & Future work -26
4.1 Discussion -26
4.2 Conclusion -27
4.3 Future Work -28
References -29
Appendix A Example of positive goal-action predictions -a