
Detailed Record

Author (Chinese): 李瑋芹
Author (English): Lee, Wei-Chin
Title (Chinese): Improve&Explain: Using Generative AI to Level Up Essays and Provide Feedback
Title (English): Improve&Explain: Leveling Up Essays and Providing Informative Feedback Based on Generative AI
Advisors (Chinese): 張俊盛, 蕭若綺
Advisors (English): Chang, Jason S.; Hsiao, Jo-Chi
Committee Members (Chinese): 張智星, 鍾曉芳
Committee Members (English): Jang, Jyh-Shing; Chung, Siaw-Fong
Degree: Master's
University: National Tsing Hua University
Department: Institute of Information Systems and Applications
Student ID: 111065522
Publication Year (ROC calendar): 113 (2024)
Graduation Academic Year: 112
Language: English
Pages: 54
Keywords (Chinese): English essay improvement, feedback comment generation, large language model, retrieval-augmented generation
Keywords (English): Essay Level Up, Feedback Comment Generation, Large Language Model, Retrieval Augmented Generation
Abstract (Chinese): This thesis proposes a method that uses generative AI to improve student essays and provide feedback comments. In our study, we analyze the collocations and potential miscollocations in a user-submitted essay. The method involves identifying collocations and miscollocations, retrieving collocations suited to the student's language proficiency, and generating an improved essay with feedback comments using a powerful large language model (LLM) and retrieval-augmented generation (RAG). We present a prototype writing tool, Improve&Explain, which applies graded collocations from a dictionary to obtain an improved essay and feedback suggestions better matched to the student's proficiency level. Evaluation on essays written by English learners shows that our system achieves better results than existing writing tools.
We introduce a method for generating an improved essay and informative feedback comments for a given essay, targeted at a specified proficiency level. In our approach, the collocations and potential miscollocations in the given essay are identified, relevant collocations for the target level are retrieved, and an improved essay with feedback comments is generated using a robust large language model (LLM) with retrieval-augmented generation (RAG). We present a prototype writing tool, Improve&Explain, that applies the graded collocations in a dictionary to produce the improved essay and a feedback comment table. Evaluation on essays written by English learners shows that the method significantly outperforms existing writing tools.
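The pipeline the abstract describes (identify collocations and miscollocations, retrieve level-appropriate alternatives from a graded dictionary, and assemble a retrieval-augmented prompt for an LLM) can be illustrated with a minimal sketch. The toy dictionary, the level labels, and the naive bigram matcher below are illustrative stand-ins, not the thesis's actual implementation.

```python
# Toy graded collocation dictionary: (word, word) -> suggestions by level.
# A real system would use a full collocation dictionary with CEFR grades.
GRADED_COLLOCATIONS = {
    ("make", "decision"): {"B2": "reach a decision", "C1": "arrive at a decision"},
    ("do", "mistake"): {"B1": "make a mistake"},  # miscollocation -> correction
}

def find_collocations(essay):
    """Naive bigram scan standing in for a parser-based collocation extractor."""
    words = essay.lower().replace(".", "").split()
    return [(a, b) for a, b in zip(words, words[1:]) if (a, b) in GRADED_COLLOCATIONS]

def retrieve_suggestions(pairs, target_level):
    """Retrieve alternatives graded at the learner's target level."""
    out = {}
    for pair in pairs:
        graded = GRADED_COLLOCATIONS[pair]
        if target_level in graded:
            out[pair] = graded[target_level]
    return out

def build_rag_prompt(essay, suggestions):
    """Assemble the retrieval-augmented prompt that would be sent to the LLM."""
    lines = ["- replace '{}' with '{}'".format(" ".join(k), v)
             for k, v in suggestions.items()]
    return ("Rewrite the essay using these collocation suggestions, "
            "then explain each change:\n" + "\n".join(lines) +
            "\nEssay: " + essay)

essay = "I did not want to do mistake."
pairs = find_collocations(essay)
prompt = build_rag_prompt(essay, retrieve_suggestions(pairs, "B1"))
```

The LLM call itself is omitted; in this sketch the retrieved suggestions are injected into the prompt, which is the essential RAG step.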
Abstract i
摘要 ii
致謝 iii
Contents iv
List of Figures vi
List of Tables vii
1 Introduction 1
2 Related Work 5
3 Methodology 8
3.1 Problem Statement 9
3.2 Generate Level-up Essays 10
3.3 Generate Feedback Comment Table 15
3.4 Run-Time Essay Improving and Explaining 17
4 Experiment 19
4.1 Datasets and Toolkits 19
4.2 Databases 21
4.3 GEC Prompts 23
4.4 Word Level Up Prompts 23
4.5 Feedback Comment Prompts 24
4.6 Evaluation Method 25
5 Evaluation Results 30
5.1 Results from the Grammarly Evaluation 30
5.2 Results from the Human Evaluation 32
6 Conclusion and Future Work 35
Appendix 37
Reference 51