Detailed Record

Author (Chinese): 艾庫山
Author (English): Khurshed Ali
Title (Chinese): 利用學習方法解決社群網絡的競爭影響力最大化
Title (English): Learning-based Approaches to Tackle Competitive Influence Maximization on Social Networks
Advisors (Chinese): 王志宇, 陳宜欣
Advisors (English): Wang, Chih-Yu; Chen, Yi-Shin
Committee Members (Chinese): 帥宏翰, 李政德, 韓永楷, 鍾偉和
Committee Members (English): Shuai, Hong-Han; Li, Cheng-Te; Hon, Wing-Kai; Chung, Wei-Ho
Degree: Doctoral
Institution: National Tsing Hua University
Department: Institute of Information Systems and Applications
Student ID: 103162868
Publication Year: 110 (ROC calendar; 2021)
Graduation Academic Year: 109 (2020-2021)
Language: English
Pages: 142
Keywords (Chinese): 影響力最大化、競爭性影響力最大化、社交網絡、強化學習、深度強化學習、遷移學習
Keywords (English): Influence Maximization, Competitive Influence Maximization, Social Networks, Reinforcement Learning, Deep Reinforcement Learning, Transfer Learning
Abstract:
Companies, as well as users, widely use social networks for various purposes. Users share their daily activities and opinions through posts, tweets, and blogs, and ask for recommendations on places to visit, mobile phones to buy, and so on. Companies, in turn, leverage social networks to promote their products, upcoming events, and more. Such massive adoption by users and companies has attracted researchers to study social networks and share new analytical insights. One of the prominent research topics in social network analysis is "Influence Maximization" (IM), which has attracted ample research publications.

The influence maximization (IM) problem is to identify the key influential users in a social network who can propagate information through the network, in the hope that a large number of users will buy the product. Competitive influence maximization (CIM) is a more realistic and natural extension of the IM problem in which multiple parties propagate similar products in the social network, each aiming to maximize its own profit. The CIM problem has been addressed by extending traditional IM-based models, by leveraging game theory, and by reinforcement learning (RL) based models. However, existing models assume relatively simple scenarios in which users' relations and influences remain constant over time and parties are concerned only with eventual market share, with no deadline or other restrictions imposed on their campaigns. Existing studies therefore do not address the time-constrained and temporal aspects of the social network. Besides, existing RL models for the CIM problem must be retrained from scratch after even a slight change in the social network, which costs substantial computational resources and training time. In addition to training time, RL models accumulate knowledge through (state, action) pairs, so the state-action space grows to a sizeable extent when the network gets large. Moreover, existing RL models assume that the complete network topology is visible for the model to train on and to find the optimal strategy, which is impractical when the social network data is not entirely known or visible and the network must be explored to find key influential users for information propagation.
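To make the IM setting above concrete, here is a minimal Python sketch of the classic greedy seed selection under the Independent Cascade model (Kempe et al., 2003), with spread estimated by Monte Carlo simulation. It is illustrative only, not the dissertation's method; the function names, the toy graph, the propagation probability prob=0.1, and the budget are assumptions invented for the example.

    import random

    def simulate_ic(graph, seeds, prob=0.1, trials=300):
        """Estimate the expected spread of `seeds` by Monte Carlo simulation."""
        total = 0
        for _ in range(trials):
            active, frontier = set(seeds), list(seeds)
            while frontier:
                node = frontier.pop()
                for neighbor in graph.get(node, []):
                    # Each newly activated node gets one chance per neighbor.
                    if neighbor not in active and random.random() < prob:
                        active.add(neighbor)
                        frontier.append(neighbor)
            total += len(active)
        return total / trials

    def greedy_im(graph, budget, prob=0.1):
        """Select `budget` seeds, each maximizing the marginal spread gain."""
        seeds = []
        for _ in range(budget):
            base = simulate_ic(graph, seeds, prob)
            gains = {v: simulate_ic(graph, seeds + [v], prob) - base
                     for v in graph if v not in seeds}
            seeds.append(max(gains, key=gains.get))
        return seeds

    # Toy directed graph as an adjacency list (illustrative only).
    toy = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
    print(greedy_im(toy, budget=2))

Even at this toy scale the nested simulations dominate the running time, which hints at the scalability pressure that pushes the dissertation toward learning-based and community-based strategies.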

To address the aforementioned issues, in this dissertation we propose a general reinforcement learning-based framework, termed Seed-Combination and Seed-Selection (SCSS), to tackle time-constrained competitive influence maximization on a social network. The SCSS framework employs a novel nested Q-learning (NSQ) algorithm to find an optimal policy, consisting of when to select influential users and how to select them, given the temporal aspects and each party's budget and deadline constraints, while competing against a competitor's known or unknown strategies. Further, we leverage transfer learning methods in RL to tackle competitive influence maximization while reducing RL models' training time on new target social networks. Our proposed transfer learning methods reuse knowledge previously learned on a different network to find an optimal policy on the new target network, considering the competition and the temporal aspects of the network. Besides, we propose a deep reinforcement learning (DRL) based model to overcome the state-action space growth of RL models when the network gets large; the proposed model employs the community structure of the social network to find the best strategy for information propagation. Finally, we propose a DRL-based model to tackle competitive influence maximization on unknown social networks, which learns a policy consisting of when to explore the network and when to select key influential users from the explored part of it. In addition, to boost the training efficiency of DRL models on unknown social networks, we employ transfer learning in deep reinforcement learning for the competitive influence maximization problem in this setting.
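As a rough illustration of the two-level decision structure the SCSS framework is described as learning (how much budget to commit each round, then which users to pick), the sketch below nests a tabular seed-selection Q-learner inside a tabular seed-combination Q-learner. Everything concrete here is a stand-in assumption: the ToyCIMEnv environment, its coarse (round, remaining-budget) state, the unit reward per claimed node, the random competitor, and the learning constants are invented for the example and do not reproduce the dissertation's NSQ algorithm, diffusion model, or reward design.

    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPS = 0.2, 0.9, 0.2   # illustrative learning constants

    class ToyCIMEnv:
        """Hypothetical time-constrained CIM toy: per round, the agent first
        picks how many seeds to spend, then which free nodes to claim; a
        random competitor claims one node at the end of each round."""
        def __init__(self, n_nodes=12, budget=4, deadline=3):
            self.n, self.budget, self.deadline = n_nodes, budget, deadline
        def reset(self):
            self.free, self.left, self.t = set(range(self.n)), self.budget, 0
            return self.state()
        def state(self):                  # coarse state: (round, budget left)
            return (self.t, self.left)
        def allocations(self):            # outer actions: 0..remaining budget
            return list(range(self.left + 1))
        def candidates(self):             # inner actions: still-free nodes
            return sorted(self.free)
        def select(self, node):           # our party claims one node
            self.free.discard(node)
            self.left -= 1
            return self.state(), 1.0      # unit reward per influenced node
        def end_round(self):              # competitor takes a random free node
            if self.free:
                self.free.discard(random.choice(sorted(self.free)))
            self.t += 1
        def done(self):
            return self.t >= self.deadline or self.left == 0

    def eps_greedy(q, state, actions):
        if random.random() < EPS:
            return random.choice(actions)
        return max(actions, key=lambda a: q[(state, a)])

    def train(episodes=2000):
        env, q_outer, q_inner = ToyCIMEnv(), defaultdict(float), defaultdict(float)
        for _ in range(episodes):
            s = env.reset()
            while not env.done():
                s0 = s
                alloc = eps_greedy(q_outer, s0, env.allocations())  # seed-combination
                gained = 0.0
                for _ in range(alloc):                              # seed-selection
                    if not env.candidates():
                        break
                    node = eps_greedy(q_inner, s, env.candidates())
                    s2, r = env.select(node)
                    nxt = max((q_inner[(s2, a)] for a in env.candidates()), default=0.0)
                    q_inner[(s, node)] += ALPHA * (r + GAMMA * nxt - q_inner[(s, node)])
                    s, gained = s2, gained + r
                env.end_round()
                s2 = env.state()
                nxt = max((q_outer[(s2, a)] for a in env.allocations()), default=0.0)
                q_outer[(s0, alloc)] += ALPHA * (gained + GAMMA * nxt - q_outer[(s0, alloc)])
                s = s2
        return q_outer, q_inner

    q_outer, q_inner = train()

Read against this sketch, one simple form of the transfer-learning contribution is a warm start: on a new target network, q_outer and q_inner (or, in the DRL chapters, network weights) are initialized from what was learned on a source network instead of from zero, which is what cuts the retraining cost the abstract points out.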
Table of Contents:
Acknowledgements . . . . . . . . . . . . . . . . . . iii
摘要 (Abstract in Chinese) . . . . . . . . . . . . . . . . . v
Abstract. . . . . . . . . . . . . . . . . . vii
1 Introduction. . . . . . . . . . . . . 1
1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.4 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.5 Dissertation Organization . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2 Literature Review. . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2 Influence Maximization on Social Networks . . . . . . . . . . . . . . . . 9
2.2.1 Time-Constrained Influence Maximization . . . . . . . . . . . .
2.2.2 Influence Maximization on Unknown Social Networks . . . . . . 12
2.3 Competitive Influence Maximization in Social Networks . . . . . . . . . 13
2.3.1 Reinforcement Learning . . . . . . . . . . . . . . . . . . . . . . 15
2.3.2 Transfer Learning in Reinforcement Learning . . . . . . . . . . . 17
2.3.3 Deep Reinforcement Learning . . . . . . . . . . . . . . . . . . . 18
3 Nested Q-Learning Method to Tackle Time-Constrained Competitive Influence Maximization. . . . . . . . . . . . . . . . . . . . . . . . . 21
3.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.2 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.3 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.3.1 Influence Maximization Model . . . . . . . . . . . . . . . . . . . 26
3.3.2 Reinforcement Learning . . . . . . . . . . . . . . . . . . . . . . 28
3.3.3 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.4 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.4.1 Seed-Combination and Seed-Selection Framework . . . . . . . . 31
3.4.2 Nested Q-learning (NSQ) for SCSS framework . . . . . . . . . . 35
3.4.3 SCSS Training Scenarios . . . . . . . . . . . . . . . . . . . . . . 39
3.4.4 Complexity Analysis . . . . . . . . . . . . . . . . . . . . . . . . 40
3.5 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.5.1 Experimental Setup . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.5.2 Comparison Methods . . . . . . . . . . . . . . . . . . . . . . . . 41
3.5.3 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . 42
3.5.4 Training Performance . . . . . . . . . . . . . . . . . . . . . . . 49
3.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4 Leveraging Transfer Learning in Reinforcement Learning to Tackle Competitive Influence Maximization . . . . . . . . . . . . . 51
4.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.2 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.3 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.3.1 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.4 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.4.1 Transfer Learning in Reinforcement Learning . . . . . . . . . . . 58
4.4.2 Reinforcement Learning for Competitive Influence Maximization. . . . . . 60
4.4.3 Transfer Learning Integration for CIM . . . . . . . . . . . . . . . 64
4.4.4 Complexity Analysis . . . . . . . . . . . . . . . . . . . . . . . . 66
4.5 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.5.1 Experimental Setup . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.5.2 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . 71
4.5.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
4.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
5 Deep Reinforcement Learning-based Approach to Tackle Competitive Influence Maximization. . . . . . . . . . . . . 85
5.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
5.2 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
5.3 Problem formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
5.4 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.4.1 Reinforcement Learning and Deep Q Networks . . . . . . . . . . 90
5.4.2 Community Detection . . . . . . . . . . . . . . . . . . . . . . . 91
5.5 Proposed Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
5.5.1 Framework Structure . . . . . . . . . . . . . . . . . . . . . . . . 93
5.5.2 Training Setup of DRIM . . . . . . . . . . . . . . . . . . . . . . 95
5.5.3 Deep Reinforcement Learning Framework and Value Function Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.5.4 Estimation Models of Combinatorial Strategy for Budget Allocation.. . . . . . . . . . . . . . 97
5.6 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
5.6.1 Experimental Setup . . . . . . . . . . . . . . . . . . . . . . . . . 99
5.6.2 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . 101
5.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
6 Transfer Learning in Deep Reinforcement Learning to Tackle CIM on Unknown Social Networks. . . . . . . 105
6.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
6.2 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
6.3 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.4 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
6.4.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
6.4.2 Proposed Framework . . . . . . . . . . . . . . . . . . . . . . . . 111
6.4.3 Transfer Learning in DRL for CIM . . . . . . . . . . . . . . . . 115
6.5 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
6.5.1 Experimental Setup . . . . . . . . . . . . . . . . . . . . . . . . . 115
6.5.2 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . 119
6.5.3 Training Time Efficiency . . . . . . . . . . . . . . . . . . . . . . 122
6.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
7 Conclusion and Future Work . . . . . . . . . . . . . . . . . . . . 125
7.1 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
7.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Appendix Publications. . . . . . . . . . . . . . . . . . . . . . . . . .141
