
Detailed Record

Author (Chinese): 金藜軒
Author (English): Chin, Li-Hsuan
Thesis Title (Chinese): 利用同理心策略提升觀點洞察與人際溝通
Thesis Title (English): PRICELESS: Perspective Retrieval and Interpersonal Communication Enhancement Leveraging Empathy Strategies
Advisor (Chinese): 陳宜欣
Advisor (English): Chen, Yi-Shin
Committee Members (Chinese): 彭文志; 高宏宇
Committee Members (English): Peng, Wen-Chih; Kao, Hung-Yu
Degree: Master's
Institution: National Tsing Hua University
Department: Institute of Information Systems and Applications
Student ID: 109065701
Publication Year (ROC calendar): 112
Graduation Academic Year: 112
Language: English
Pages: 43
Keywords (Chinese): 自然語言處理; 同理心; 知識圖譜
Keywords (English): NLP; Empathy; Knowledge Graph
Abstract (Chinese, translated): Empathy is central to human interaction and essential for understanding and guiding peaceful communication between people. This study bridges the empathy gap common in existing large language models. By mining human emotions and perspectives in depth, it proposes an empathy-enriched knowledge graph. This advance strengthens the capabilities of large language models such as ChatGPT, enabling them to better predict the diverse perspectives people form when receiving others' words. The knowledge graph provides a detailed mapping of interpersonal relationships and contextual cues, serving as a communication aid that helps foster more empathetic machine responses. Comprehensive quantitative and qualitative evaluations demonstrate the value of this approach, highlighting its potential to improve the quality of interpersonal communication.
Abstract (English): Empathy, central to human interactions, is pivotal for understanding and navigating interpersonal communications. This research bridges the current empathy gap in Large Language Models (LLMs) by introducing an Empathy-enriched Knowledge Graph. By tapping into the depth of human emotions and perspectives, this advancement augments the proficiency of LLMs like ChatGPT, equipping them to better align with the diverse perspectives tied to quotations. The knowledge graph provides a detailed mapping of relationship dynamics and contextual cues, serving as a tool for fostering more empathetic machine responses. Comprehensive quantitative and qualitative evaluations underscore the significance of this approach, highlighting its potential to amplify the quality of human interactions.
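The abstract describes a knowledge graph that maps relationship dynamics and contextual cues and uses them to condition an LLM's prediction of how a listener may receive a quotation. A minimal sketch of that idea follows; the node and edge schema, field names, and prompt wording here are illustrative assumptions, not the thesis's actual design.

```python
# Hypothetical sketch of an empathy-enriched knowledge graph: edges record a
# relationship between speaker and listener plus contextual cues, and matching
# entries are rendered into text that could prime an LLM such as ChatGPT.
# The schema below is an assumption for illustration, not the thesis's schema.
from dataclasses import dataclass, field


@dataclass
class EmpathyEdge:
    source: str                                        # speaker
    target: str                                        # listener
    relationship: str                                  # e.g. "parent-child"
    context: str                                       # situational cue
    perspectives: list = field(default_factory=list)   # likely listener readings


class EmpathyKG:
    def __init__(self):
        self.edges = []

    def add(self, edge: EmpathyEdge):
        self.edges.append(edge)

    def prompt_for(self, quotation: str, source: str, target: str) -> str:
        """Render graph entries for this speaker/listener pair as LLM context."""
        cues = [e for e in self.edges
                if e.source == source and e.target == target]
        lines = [f'Quotation: "{quotation}"']
        for e in cues:
            lines.append(f"Relationship: {e.relationship}; context: {e.context}")
            for p in e.perspectives:
                lines.append(f"Possible perspective of {e.target}: {p}")
        lines.append("Predict how the listener may interpret this quotation.")
        return "\n".join(lines)


kg = EmpathyKG()
kg.add(EmpathyEdge("mother", "teen", "parent-child", "after a failed exam",
                   ["feels criticized", "hears concern"]))
print(kg.prompt_for("You should study harder.", "mother", "teen"))
```

In a full system, the rendered context would be prepended to the LLM request so that generation is grounded in the listener's plausible perspectives rather than the literal quotation alone.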
Chapter 1: Introduction
Chapter 2: Related Work
Chapter 3: Methodology
Chapter 4: Experiment & Results
Chapter 5: Conclusions and Future Work
Appendices
