[1] Yiming Cui, Wanxiang Che, Shijin Wang, and Ting Liu. LERT: A linguistically-motivated pre-trained language model, 2022.
[2] Matthew Honnibal and Ines Montani. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear, 2017.
[3] Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64–77, 2020.
[4] Allison Lahnala, Charles Welch, David Jurgens, and Lucie Flek. A critical reflection and forward perspective on empathy and natural language processing. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2139–2158, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics.
[5] Qian Liu, Yihong Chen, Bei Chen, Jian-Guang Lou, Zixuan Chen, Bin Zhou, and Dongmei Zhang. You impress me: Dialogue generation via mutual persona perception. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1417–1427, Online, July 2020. Association for Computational Linguistics.
[6] Leland McInnes, John Healy, and Steve Astels. hdbscan: Hierarchical density based clustering.
[7] OpenAI. GPT-4 technical report. ArXiv, abs/2303.08774, 2023.
[8] Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. Towards empathetic open-domain conversation models: A new benchmark and dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5370–5381, Florence, Italy, July 2019. Association for Computational Linguistics.
[9] Nils Reimers and Iryna Gurevych. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. CoRR, abs/1908.10084, 2019.
[10] Mandana Saebi, Ernest Pusateri, Aaksha Meghawat, and Christophe Van Gysel. A discriminative entity-aware language model for virtual assistants. arXiv preprint arXiv:2106.11292, 2021.
[11] Robyn Speer, Joshua Chin, and Catherine Havasi. ConceptNet 5.5: An open multilingual graph of general knowledge. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1), Feb. 2017.
[12] Xiao Wang, Nian Liu, Hui Han, and Chuan Shi. Self-supervised heterogeneous graph neural network with co-contrastive learning. CoRR, abs/2105.09111, 2021.
[13] Pei Zhou, Karthik Gopalakrishnan, Behnam Hedayatnia, Seokhwan Kim, Jay Pujara, Xiang Ren, Yang Liu, and Dilek Hakkani-Tur. Think before you speak: Explicitly generating implicit commonsense knowledge for response generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1237–1252, Dublin, Ireland, May 2022. Association for Computational Linguistics.