[1] L. Weng, “From autoencoder to beta-VAE.” http://lilianweng.github.io/lil-log/2018/08/12/from-autoencoder-to-beta-vae.html, 2018.
[2] C. Doersch, “Tutorial on variational autoencoders,” 2016.
[3] D. P. Kingma and M. Welling, “An introduction to variational autoencoders,” Foundations and Trends in Machine Learning, pp. 1–18, 2019.
[4] A. Töscher and M. Jahrer, “The BigChaos solution to the Netflix grand prize,” 2009.
[5] N. Jiang, S. Jin, Z. Duan, and C. Zhang, “RL-Duet: Online music accompaniment generation using deep reinforcement learning,” 2020.
[6] S. I. Mimilakis, E. Cano, J. Abeßer, and G. Schuller, “New sonorities for jazz recordings: Separation and mixing using deep neural networks,” 2016.
[7] R. K. Zaripov, “An algorithmic description of a process of musical composition,” Dokl. Akad. Nauk SSSR, pp. 1283–1286, 1960.
[8] A. Roberts, J. Engel, C. Raffel, C. Hawthorne, and D. Eck, “A hierarchical latent vector model for learning long-term structure in music,” 2018.
[9] A. Karpathy, P. Abbeel, G. Brockman, P. Chen, V. Cheung, R. Duan, I. Goodfellow, D. Kingma, J. Ho, R. Houthooft, T. Salimans, J. Schulman, I. Sutskever, and W. Zaremba, “Generative models.” https://openai.com/blog/generative-models/, 2016.
[10] G. Brunner, A. Konrad, Y. Wang, and R. Wattenhofer, “MIDI-VAE: Modeling dynamics and instrumentation of music with applications to style transfer,” 2018.
[11] S. Dai, Z. Zhang, and G. G. Xia, “Music style transfer: A position paper,” 2018.
[12] C. Raffel, “Learning-based methods for comparing sequences, with applications to audio-to-MIDI alignment and matching,” PhD thesis, 2016.
[13] O. M. Bjørndalen and R. Binkys, “Mido.” https://mido.readthedocs.io/en/latest/index.html, 2013.
[14] E. Theron, “py-midi.” https://pypi.org/project/py-midi/, 2017.
[15] B. McFee, C. Raffel, D. Liang, D. P. Ellis, M. McVicar, E. Battenberg, and O. Nieto, “librosa: Audio and music signal analysis in Python,” 2015.
[16] C. Raffel and D. P. W. Ellis, “Intuitive analysis, creation and manipulation of MIDI data with pretty_midi,” 15th International Conference on Music Information Retrieval Late Breaking and Demo Papers, 2014.
[17] D. M. Huber, The MIDI Manual. Carmel, Indiana: SAMS, 1991.
[18] B. Benward and M. N. Saker, Music: In Theory and Practice. Boston: McGraw-Hill, 2003.
[19] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning internal representations by error propagation,” Nature, pp. 533–536, 1986.
[20] D. H. Ballard, “Modular learning in neural networks,” 1987.
[21] B. G. Tabachnick and L. S. Fidell, Using Multivariate Statistics. Boston: Allyn and Bacon, 2001.
[22] J. N. Amaral, M. Buro, R. Elio, J. Hoover, I. Nikolaidis, M. Salavatipour, L. Stewart, and K. Wong, “About computing science research methodology,”
[23] W. Faria, “MIDI music data extraction using music21 and word2vec on Kaggle.” https://towardsdatascience.com/midi-music-data-extraction-using-music21-and-word2vec-on-kaggle-cb383261cd4e, 2018.
[24] M. Cuthbert, C. Ariza, B. Hogue, and J. W. Oberholtzer, “music21 project.” http://web.mit.edu/music21/, 2006.
[25] D. Eck and J. Schmidhuber, “A first look at music composition using LSTM recurrent neural networks,” 2002.
[26] X. Hou, K. Sun, L. Shen, and G. Qiu, “Deep feature consistent variational autoencoder,” 2016.