Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
Shabnam Behzad, Amir Zeldes, and Nathan Schneider. Sentence-level feedback generation for English language learners: Does data augmentation help? arXiv preprint arXiv:2212.08999, 2022.
Andrey Bout, Alexander Podolskiy, Sergey Nikolenko, and Irina Piontkovskaya. Efficient grammatical error correction via multi-task training and optimized training schedule. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 5800–5816, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.355. URL https://aclanthology.org/2023.emnlp-main.355.
Steven Coyne, Diana Galvan-Sosa, Keisuke Sakaguchi, and Kentaro Inui. Developing a typology for language learning feedback. In Proceedings of the 29th Annual Conference of the Association for Natural Language Processing, Okinawa, Japan, 2023.
Elozino Egonmwan and Yllias Chali. Transformer and seq2seq model for paraphrase generation. In Alexandra Birch, Andrew Finch, Hiroaki Hayashi, Ioannis Konstas, Thang Luong, Graham Neubig, Yusuke Oda, and Katsuhito Sudoh, editors, Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 249–255, Hong Kong, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-5627. URL https://aclanthology.org/D19-5627.
John Hattie and Helen Timperley. The power of feedback. Review of Educational Research, 77(1):81–112, 2007.
Matthew Honnibal, Ines Montani, Sofie Van Landeghem, and Adriane Boyd. spaCy: Industrial-strength Natural Language Processing in Python, 2020. URL https://doi.org/10.5281/zenodo.1212303.
Gautier Izacard and Edouard Grave. Leveraging passage retrieval with generative models for open domain question answering. arXiv preprint arXiv:2007.01282, 2020.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33:9459–9474, 2020.
Huayang Li, Yixuan Su, Deng Cai, Yan Wang, and Lemao Liu. A survey on retrieval-augmented text generation. arXiv preprint arXiv:2202.01110, 2022.
Dekang Lin. Dependency-Based Evaluation of Minipar, pages 317–329. Springer Netherlands, Dordrecht, 2003. ISBN 978-94-010-0201-1. doi: 10.1007/978-94-010-0201-1_18. URL https://doi.org/10.1007/978-94-010-0201-1_18.
Anne Li-E Liu, David Wible, and Nai-Lung Tsao. Automated suggestions for miscollocations. In Proceedings of the Fourth Workshop on Innovative Use of NLP for Building Educational Applications, pages 47–50, 2009.
Edna Holland Mory. Feedback research revisited. In Handbook of research on educational communications and technology, pages 738–776. Routledge, 2013.
Ryo Nagata. Toward a task of feedback comment generation for writing learning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3206–3215, 2019.
Ryo Nagata, Masato Hagiwara, Kazuaki Hanawa, Masato Mita, Artem Chernodub, and Olena Nahorna. Shared task on feedback comment generation for language learners. In Proceedings of the 14th International Conference on Natural Language Generation, pages 320–324, 2021.
Yixiao Song, Kalpesh Krishna, Rajesh Bhatt, Kevin Gimpel, and Mohit Iyyer. GEE! Grammar error explanation with large language models. arXiv preprint arXiv:2311.09517, 2023.
Sam Witteveen and Martin Andrews. Paraphrasing with large language models. In Alexandra Birch, Andrew Finch, Hiroaki Hayashi, Ioannis Konstas, Thang Luong, Graham Neubig, Yusuke Oda, and Katsuhito Sudoh, editors, Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 215–220, Hong Kong, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-5623. URL https://aclanthology.org/D19-5623.
Jingheng Ye, Yinghui Li, Qingyu Zhou, Yangning Li, Shirong Ma, Hai-Tao Zheng, and Ying Shen. CLEME: Debiasing multi-reference evaluation for grammatical error correction. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6174–6189, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.378. URL https://aclanthology.org/2023.emnlp-main.378.