[1] Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Songhao Piao, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. UniLMv2: Pseudo-masked language models for unified language model pre-training, 2020.
[2] Federico Barrios, Federico López, Luis Argerich, and Rosa Wachenchauzer. Variations of the similarity function of TextRank for automated summarization, 2016.
[3] Yen-Chun Chen and Mohit Bansal. Fast abstractive summarization with reinforce-selected sentence rewriting, 2018.
[4] James Clarke and Mirella Lapata. Discourse constraints for document compression. Computational Linguistics, 36(3):411–441, 2010.
[5] Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. A discourse-aware attention model for abstractive summarization of long documents, 2018.
[6] T. A. Cohn and M. Lapata. Sentence compression as tree transduction. Journal of Artificial Intelligence Research, 34:637–674, Apr 2009.
[7] Dorottya Demszky, Kelvin Guu, and Percy Liang. Transforming question answering datasets into natural language inference datasets, 2018.
[8] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding, 2019.
[9] Yue Dong, Andrei Romascanu, and Jackie C. K. Cheung. HipoRank: Incorporating hierarchical and positional information into graph-based unsupervised long document extractive summarization, 2020.
[10] Esin Durmus, He He, and Mona Diab. FEQA: A question answering evaluation framework for faithfulness assessment in abstractive summarization. CoRR, abs/2005.03754, 2020.
[11] G. Erkan and D. R. Radev. LexRank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22:457–479, Dec 2004.
[12] Sebastian Gehrmann, Yuntian Deng, and Alexander M. Rush. Bottom-up abstractive summarization, 2018.
[13] Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. Advances in neural information processing systems, 28:1693–1701, 2015.
[14] Arne Holst. Total data volume worldwide 2010–2025.
[15] Matthew Honnibal and Ines Montani. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks, and incremental parsing. To appear, 2017.
[16] Wan-Ting Hsu, Chieh-Kai Lin, Ming-Ying Lee, Kerui Min, Jing Tang, and Min Sun. A unified model for extractive and abstractive summarization using inconsistency loss, 2018.
[17] Philippe Laban, Andrew Hsi, John Canny, and Marti A. Hearst. The summary loop: Learning to write abstractive summaries without examples. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5135–5150, Online, July 2020. Association for Computational Linguistics.
[18] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880, Online, July 2020. Association for Computational Linguistics.
[19] Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain, July 2004. Association for Computational Linguistics.
[20] Yang Liu and Mirella Lapata. Text summarization with pretrained encoders, 2019.
[21] Rada Mihalcea and Paul Tarau. TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 404–411, Barcelona, Spain, July 2004. Association for Computational Linguistics.
[22] Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. SummaRuNNer: A recurrent neural network-based sequence model for extractive summarization of documents, 2016.
[23] Ramesh Nallapati, Bowen Zhou, Cicero Nogueira dos Santos, Caglar Gulcehre, and Bing Xiang. Abstractive text summarization using sequence-to-sequence RNNs and beyond, 2016.
[24] Romain Paulus, Caiming Xiong, and Richard Socher. A deep reinforced model for abstractive summarization, 2017.
[25] Weizhen Qi, Yu Yan, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, and Ming Zhou. ProphetNet: Predicting future n-gram for sequence-to-sequence pre-training. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2401–2410, Online, November 2020. Association for Computational Linguistics.
[26] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. 2018.
[27] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI blog,1(8):9, 2019.
[28] Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. Self-critical sequence training for image captioning, 2017.
[29] Alexander M. Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization, 2015.
[30] Thomas Scialom, Sylvain Lamprier, Benjamin Piwowarski, and Jacopo Staiano. Answers unite! Unsupervised metrics for reinforced summarization models, 2019.
[31] Abigail See, Peter J. Liu, and Christopher D. Manning. Get to the point: Summarization with pointer-generator networks, 2017.
[32] Khushboo S. Thakkar, R. V. Dharaskar, and M. B. Chandak. Graph-based algorithms for text summarization. In 2010 3rd International Conference on Emerging Trends in Engineering and Technology, pages 516–519, 2010.
[33] Alex Wang, Kyunghyun Cho, and Mike Lewis. Asking and answering questions to evaluate the factual consistency of summaries, 2020.
[34] Dongling Xiao, Han Zhang, Yukun Li, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. ERNIE-GEN: An enhanced multi-flow pre-training and fine-tuning framework for natural language generation, 2020.
[35] Christopher C. Yang and Fu Lee Wang. Hierarchical summarization of large documents. Journal of the American Society for Information Science and Technology, 59(6):887–902, 2008.
[36] Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. PEGASUS: Pre-training with extracted gap sentences for abstractive summarization, 2020.
[37] Hao Zheng and Mirella Lapata. Sentence centrality revisited for unsupervised summarization, 2019.
[38] Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, and Xuanjing Huang. Extractive summarization as text matching, 2020.
[39] Chenguang Zhu, William Hinthorn, Ruochen Xu, Qingkai Zeng, Michael Zeng, Xuedong Huang, and Meng Jiang. Enhancing factual consistency of abstractive summarization, 2021.