[1] Nan Jiang, Sheng Jin, Zhiyao Duan, and Changshui Zhang. RL-Duet: Online music accompaniment generation using deep reinforcement learning, 2020.
[2] Yi Ren, Jinzheng He, Xu Tan, Tao Qin, Zhou Zhao, and Tie-Yan Liu. PopMAG: Pop music accompaniment generation, 2020.
[3] Yi Ren, Jinzheng He, Xu Tan, Tao Qin, Zhou Zhao, and Tie-Yan Liu. PopMAG: Pop music accompaniment generation. In Proceedings of the 28th ACM International Conference on Multimedia (MM '20), pages 1198–1206, October 2020. doi: 10.1145/3394171.3413721.
[4] Andrew McLeod, Rodrigo Schramm, Mark Steedman, and Emmanouil Benetos. Automatic transcription of polyphonic vocal music. Applied Sciences, 7(12), 2017. ISSN 2076-3417. doi: 10.3390/app7121285. URL https://www.mdpi.com/2076-3417/7/12/1285.
[5] Zih-Sing Fu and Li Su. Hierarchical classification networks for singing voice segmentation and transcription. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), 2019.
[6] Qiuqiang Kong, Bochen Li, Xuchen Song, Yuan Wan, and Yuxuan Wang. High-resolution piano transcription with pedals by regressing onset and offset times, 2020.
[7] Ian Simon, Dan Morris, and Sumit Basu. MySong: Automatic accompaniment generation for vocal melodies. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '08, pages 725–734, New York, NY, USA, 2008. Association for Computing Machinery. ISBN 9781605580111. doi: 10.1145/1357054.1357169. URL https://doi.org/10.1145/1357054.1357169.
[8] Li Luo, Peng-Fei Lu, and Zeng-Fu Wang. A real-time accompaniment system based on sung voice recognition. In 2008 19th International Conference on Pattern Recognition, pages 1–4, 2008. doi: 10.1109/ICPR.2008.4761071.
[9] J. Salamon, E. Gómez, D. P. W. Ellis, and G. Richard. Melody extraction from polyphonic music signals: Approaches, applications, and challenges. IEEE Signal Processing Magazine, 31(2):118–134, 2014. doi: 10.1109/MSP.2013.2271648.
[10] E. Gómez, A. Klapuri, and B. Meudic. Melody description and extraction in the context of music content processing. Journal of New Music Research, 32(1):23–40, 2003.
[11] M. Ryynänen and A. Klapuri. Automatic transcription of melody, bass line, and chords in polyphonic music. Computer Music Journal, 32(3):72–86, 2008.
[12] V. Rao and P. Rao. Vocal melody extraction in the presence of pitched accompaniment in polyphonic music. IEEE Transactions on Audio, Speech, and Language Processing, 18(8):2145–2154, 2010. doi: 10.1109/TASL.2010.2042124.
[13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition, 2015.
[14] Jean-Pierre Briot, Gaëtan Hadjeres, and François-David Pachet. Deep learning techniques for music generation – a survey, 2019.
[15] Li Su. Vocal melody extraction using patch-based CNN, 2018.
[16] C.-Y. Liang, L. Su, Y.-H. Yang, and H.-M. Lin. Musical offset detection of pitched instruments: The case of violin. In ISMIR, pages 281–287, 2015.
[17] S. Böck, A. Arzt, F. Krebs, and M. Schedl. Online real-time onset detection with recurrent neural networks, 2012.
[18] Masanori Morise, Fumiya Yokomori, and Kenji Ozawa. WORLD: A vocoder-based high-quality speech synthesis system for real-time applications. IEICE Transactions on Information and Systems, E99.D(7):1877–1884, 2016. doi: 10.1587/transinf.2015EDP7457.
[19] Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks, 2018.
[20] Hang Zhang, Chongruo Wu, Zhongyue Zhang, Yi Zhu, Haibin Lin, Zhi Zhang, Yue Sun, Tong He, Jonas Mueller, R. Manmatha, Mu Li, and Alexander Smola. ResNeSt: Split-attention networks, 2020.
[21] Sebastian Böck, Florian Krebs, and Gerhard Widmer. Joint beat and downbeat tracking with recurrent neural networks. In Proceedings of the 17th International Society for Music Information Retrieval Conference (ISMIR), 2016.
[22] Masanori Morise, Hideki Kawahara, and Haruhiro Katayose. Fast and reliable F0 estimation method based on the period extraction of vocal fold vibration of singing voice and speech. In AES 35th International Conference: Audio for Games, February 2009.
[23] Joaquín Mora, Francisco Gómez, Emilia Gómez, Francisco Javier Borrego, and José Díaz-Báñez. Characterization and melodic similarity of a cappella flamenco cantes. In Proceedings of the 11th International Society for Music Information Retrieval Conference (ISMIR), pages 351–356, 2010.
[24] E. Gómez and J. Bonada. Towards computer-assisted flamenco transcription: An experimental comparison of automatic transcription algorithms as applied to a cappella singing. Computer Music Journal, 37(2):73–90, 2013. doi: 10.1162/COMJ_a_00180.
[25] E. Molina, A. M. Barbancho-Perez, L. J. Tardón, and I. Barbancho-Perez. Evaluation framework for automatic singing transcription. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), 2014.