[1] Vasile Rus, Zhiqiang Cai, and Art Graesser. Question generation: Example of a multi-year evaluation campaign. In Online Proceedings of the 1st Question Generation Workshop, 2008.
[2] Megha Mishra, Vishnu Kumar Mishra, and H. R. Sharma. Question classification using semantic, syntactic and lexical features. International Journal of Web & Semantic Technology, 4(3):39, 2013.
[3] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, 2016.
[4] Anselm Rothe, Brenden M. Lake, and Todd M. Gureckis. Question asking as program generation. In Advances in Neural Information Processing Systems (NIPS), pages 1046–1055, 2017.
[5] Michael Heilman and Noah A. Smith. Good question! Statistical ranking for question generation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 609–617, 2010.
[6] Yansen Wang, Chenyi Liu, Minlie Huang, and Liqiang Nie. Learning to ask questions in open-domain conversational systems with typed decoders. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 2193–2203, 2018.
[7] Diana Lea and Jennifer Bradbery. Oxford Advanced Learner's Dictionary, 10th edition. Oxford University Press, 2020.
[8] Rahul Sharnagat. Named entity recognition: A literature survey. Center for Indian Language Technology, pages 1–27, 2014.
[9] Ralph Grishman and Beth M. Sundheim. Message Understanding Conference-6: A brief history. In COLING 1996 Volume 1: The 16th International Conference on Computational Linguistics, 1996.
[10] Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. The Stanford CoreNLP natural language processing toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60, 2014.
[11] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[12] Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana, June 2018. Association for Computational Linguistics.
[13] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020.
[14] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[15] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, 2014.
[16] Sneha Chaudhari, Varun Mithal, Gungor Polatkan, and Rohan Ramanath. An attentive survey of attention models. ACM Transactions on Intelligent Systems and Technology (TIST), 12(5):1–32, 2021.
[17] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. Advances in Neural Information Processing Systems, 27, 2014.
[18] Ruslan Mitkov et al. Computer-aided generation of multiple-choice tests. In Proceedings of the HLT-NAACL 03 Workshop on Building Educational Applications Using Natural Language Processing, pages 17–22, 2003.
[19] George A. Miller. WordNet: An Electronic Lexical Database. MIT Press, 1998.
[20] Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. Neural question generation from text: A preliminary study. In National CCF Conference on Natural Language Processing and Chinese Computing, pages 662–671. Springer, 2017.
[21] Xinya Du, Junru Shao, and Claire Cardie. Learning to ask: Neural question generation for reading comprehension. arXiv preprint arXiv:1705.00106, 2017.
[22] Yanghoon Kim, Hwanhee Lee, Joongbo Shin, and Kyomin Jung. Improving neural question generation using answer separation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6602–6609, 2019.
[23] Xiyao Ma, Qile Zhu, Yanlin Zhou, and Xiaolin Li. Improving question generation with sentence-level semantic matching and answer position inferring. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8464–8471, 2020.
[24] Yuichi Sasazawa, Sho Takase, and Naoaki Okazaki. Neural question generation using interrogative phrases. In Proceedings of the 12th International Conference on Natural Language Generation, pages 106–111, 2019.
[25] Wenjie Zhou, Minghua Zhang, and Yunfang Wu. Question-type driven question generation. arXiv preprint arXiv:1909.00140, 2019.
[26] Yao Zhao, Xiaochuan Ni, Yuanyuan Ding, and Qifa Ke. Paragraph-level neural question generation with maxout pointer and gated self-attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3901–3910, 2018.
[27] Yu Chen, Lingfei Wu, and Mohammed J. Zaki. Reinforcement learning based graph-to-sequence model for natural question generation. arXiv preprint arXiv:1908.04942, 2019.
[28] Deepak Gupta, Kaheer Suleman, Mahmoud Adada, Andrew McNamara, and Justin Harris. Improving neural question generation using world knowledge. arXiv preprint arXiv:1909.03716, 2019.
[29] Xin Jia, Hao Wang, Dawei Yin, and Yunfang Wu. Enhancing question generation with commonsense knowledge. In China National Conference on Chinese Computational Linguistics, pages 145–160. Springer, 2021.
[30] Robyn Speer, Joshua Chin, and Catherine Havasi. ConceptNet 5.5: An open multilingual graph of general knowledge. In Thirty-First AAAI Conference on Artificial Intelligence, 2017.
[31] Xin Jia, Wenjie Zhou, Xu Sun, and Yunfang Wu. How to ask good questions? Try to leverage paraphrases. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6130–6140, 2020.
[32] Hao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. Commonsense knowledge aware conversation generation with graph attention. In IJCAI, pages 4623–4629, 2018.
[33] Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. Advances in Neural Information Processing Systems, 26, 2013.
[34] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[35] Qingbao Huang, Mingyi Fu, Linzhang Mo, Yi Cai, Jingyun Xu, Pijian Li, Qing Li, and Ho-fung Leung. Entity guided question generation with contextual structure and sequence information capturing. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 13064–13072, 2021.
[36] Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025, 2015.
[37] Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. Incorporating copying mechanism in sequence-to-sequence learning. arXiv preprint arXiv:1603.06393, 2016.
[38] Luu Anh Tuan, Darsh Shah, and Regina Barzilay. Capturing greater context for question generation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9065–9072, 2020.
[39] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, 2002.
[40] Michael Denkowski and Alon Lavie. Meteor Universal: Language specific translation evaluation for any target language. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 376–380, 2014.
[41] Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, 2004.
[42] Xu Chen and Jungang Xu. An answer driven model for paragraph-level question generation. In 2021 International Joint Conference on Neural Networks (IJCNN), pages 1–7. IEEE, 2021.