[1] Gabriele Barbieri, François Pachet, Pierre Roy, and Mirko Degli Esposti. Markov constraints for generating lyrics with style. In ECAI, volume 242, pages 115–120, 2012.

[2] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993–1022, 2003.

[3] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020.

[4] Yihao Chen and Alexander Lerch. Melody-conditioned lyrics generation with SeqGANs. In 2020 IEEE International Symposium on Multimedia (ISM), pages 189–196. IEEE, 2020.

[5] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.

[6] Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical neural story generation. arXiv preprint arXiv:1805.04833, 2018.

[7] Luciano Floridi and Massimo Chiriatti. GPT-3: Its nature, scope, limits, and consequences. Minds and Machines, 30(4):681–694, 2020.

[8] Jing He, Ming Zhou, and Long Jiang. Generating Chinese classical poems with statistical machine translation models. In Twenty-Sixth AAAI Conference on Artificial Intelligence, 2012.

[9] Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751, 2019.

[10] Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. CTRL: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858, 2019.

[11] Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain, July 2004. Association for Computational Linguistics.
[12] Eric Malmi, Pyry Takala, Hannu Toivonen, Tapani Raiko, and Aristides Gionis. DopeLearning: A computational approach to rap lyrics generation. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 195–204, 2016.

[13] Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. Abstractive text summarization using sequence-to-sequence RNNs and beyond. arXiv preprint arXiv:1602.06023, 2016.

[14] Hugo Gonçalo Oliveira, Raquel Hervás, Alberto Díaz, and Pablo Gervás. Adapting a generic platform for poetry generation to produce Spanish poems. In ICCC, pages 63–71, 2014.

[15] Hugo R Gonçalo Oliveira, F Amílcar Cardoso, and Francisco C Pereira. Tra-la-lyrics: An approach to generate text based on rhythm. In Proceedings of the 4th International Joint Workshop on Computational Creativity. A. Cardoso and G. Wiggins, 2007.

[16] Peter Potash, Alexey Romanov, and Anna Rumshisky. GhostWriter: Using an LSTM for automatic rap lyric generation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1919–1924, 2015.

[17] Nils Reimers and Iryna Gurevych. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. arXiv preprint arXiv:1908.10084, 2019.

[18] Gerard Salton and Christopher Buckley. Term-weighting approaches in automatic text retrieval. Information Processing & Management, 24(5):513–523, 1988.

[19] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.

[20] Jesse Vig. A multiscale visualization of attention in the transformer model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 37–42, Florence, Italy, July 2019. Association for Computational Linguistics.
[21] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3156–3164, 2015.

[22] Zhe Wang, Wei He, Hua Wu, Haiyang Wu, Wei Li, Haifeng Wang, and Enhong Chen. Chinese poetry generation with planning based neural network. arXiv preprint arXiv:1610.09889, 2016.

[23] Kento Watanabe, Yuichiroh Matsubayashi, Satoru Fukayama, Masataka Goto, Kentaro Inui, and Tomoyasu Nakano. A melody-conditioned lyrics language model. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 163–172, New Orleans, Louisiana, June 2018. Association for Computational Linguistics.

[24] Zhen Xu, Bingquan Liu, Baoxun Wang, Cheng-Jie Sun, Xiaolong Wang, Zhuoran Wang, and Chao Qi. Neural response generation via GAN with an approximate embedding layer. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 617–626, 2017.

[25] Xingxing Zhang and Mirella Lapata. Chinese poetry generation with recurrent neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 670–680, Doha, Qatar, October 2014. Association for Computational Linguistics.