[1] 方銘健 (1997). 藝術、音樂情感與意義 [Art, musical emotion, and meaning]. 全音樂譜出版社.
[2] 施詠 (2008). 中國人音樂審美心理概論 [An introduction to the music aesthetic psychology of the Chinese] (上海音樂出版社).
[3] Sak, H., Senior, A. W., & Beaufays, F. (2014). Long short-term memory recurrent neural network architectures for large scale acoustic modeling.
[4] Merriam, A. P., & Merriam, V. (1964). The Anthropology of Music. Northwestern University Press.
[5] Holbrook, M. B., & Schindler, R. M. (1989). Some exploratory findings on the development of musical tastes. Journal of Consumer Research, 16(1), 119-124.
[6] Panksepp, J. (1995). The emotional sources of "chills" induced by music. Music Perception, 13, 171-207.
[7] International Federation of the Phonographic Industry (IFPI). (2020). IFPI issues annual Global Music Report. Retrieved from https://www.ifpi.org/ifpi-issues-annual-global-music-report/
[8] 洪子翔 (2010). 運用樣式關聯性回饋之高效性內涵式音樂檢索 [Efficient content-based music retrieval using pattern relevance feedback] (Thesis). 國立成功大學. Available from Airiti Library database.
[9] Martin, K. D., Scheirer, E. D., & Vercoe, B. L. (1998). Music content analysis through models of audition. In Proceedings of the ACM Multimedia Workshop on Content Processing of Music for Multimedia Applications, Bristol, UK. ACM, New York.
[10] 王鴻文 & 劉志俊 (2010). MP3音樂的聆賞情緒自動分類 [Automatic classification of listening emotions in MP3 music]. Journal of Information Technology and Applications (資訊科技與應用期刊), 4(4), 160-171.
[11] Rolland, P.-Y. (2001). FlExPat: Flexible extraction of sequential patterns. In Proceedings of the 2001 IEEE International Conference on Data Mining. IEEE.
[12] 李宏儒, et al. (2004). 多模式音樂檢索系統 [A multi-modal music retrieval system]. In 第三屆數位典藏技術研討會 [Third Workshop on Digital Archives Technologies], 中央研究院, Taiwan.
[13] 梁敬偉 (Liang, J. W.). 基於不同音樂特徵的音樂檢索方法的效果及效率比較 [Comparing music retrieval methods with different music features]. Retrieved from http://nccuir.lib.nccu.edu.tw/handle/140.119/32664
[14] Yang, Y.-H., Liu, C.-C., & Chen, H. H. (2006). Music emotion classification: A fuzzy approach. In Proceedings of the 14th ACM International Conference on Multimedia (pp. 81-84). Santa Barbara, CA, USA: Association for Computing Machinery.
[15] Hu, X., Downie, J. S., Laurier, C., Bay, M., & Ehmann, A. F. (2008). The 2007 MIREX audio mood classification task: Lessons learned. In Proceedings of the 9th International Conference on Music Information Retrieval.
[16] 陳傳祐 (2012). 基於節拍同步和音色不變量之音色頻譜和交叉遞回圖分析之翻唱歌曲辨識系統 [A cover song identification system based on beat-synchronous, timbre-invariant spectral and cross recurrence plot analysis] (Thesis). 國立臺灣大學. Available from Airiti Library database.
[17] 高茂源 (2007). 以音樂分類偵測音樂節奏之方法 [A method for detecting musical rhythm via music classification] (Master's thesis). 國立中山大學, 高雄市. Retrieved from https://hdl.handle.net/11296/bepc92
[18] Long, J., Shelhamer, E., & Darrell, T. (2015). Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
[19] 周晏如 (2016). 由華語流行歌詞探勘歌詞的特徵樣式 [Mining characteristic patterns from Mandarin pop song lyrics] (Master's thesis). 國立政治大學, 台北市. Retrieved from https://hdl.handle.net/11296/6tgp69
[20] 葉佳慧 (2001). 以音符及節拍為主的音樂檢索系統 [A note- and beat-based music retrieval system] (Master's thesis). 國立清華大學, 新竹市. Retrieved from https://hdl.handle.net/11296/563t6m
[21] 吳尚璠 (2014). 基於MIDI之內涵式鋼琴音樂檢索系統 [A MIDI-based content-based piano music retrieval system] (Master's thesis). 國立交通大學, 新竹市. Retrieved from https://hdl.handle.net/11296/87q8js
[22] 林書宇 (2017). 線上音樂之探索式尋求行為研究 [A study of exploratory seeking behavior in online music] (Master's thesis). 國立臺灣師範大學, 台北市. Retrieved from https://hdl.handle.net/11296/2md2yn
[23] 趙廣絜 (2015). 當音樂產業走入數位時代 [When the music industry enters the digital age]. Retrieved from https://castnet.nctu.edu.tw/index.php/castnet/article/8785?issueID=703
[24] 劉幸茹 (2016). 影響串流音樂購買意願之重要因素-以台灣為例 [Key factors influencing purchase intention for streaming music: The case of Taiwan] (Master's thesis). 國立臺北大學, 新北市. Retrieved from https://hdl.handle.net/11296/dthm32
[25] Leung, S. (2012). Catching the K-pop wave: Globality in the production, distribution, and consumption of South Korean popular music.
[26] 廖淑敏 (2013). 韓國流行音樂在台灣之發展與成功因素分析 [An analysis of the development and success factors of Korean pop music in Taiwan] (Master's thesis). 中國文化大學, 台北市. Retrieved from https://hdl.handle.net/11296/jyyp6f
[27] Kim, D. (2012). Reappropriating desires in neoliberal societies through K-pop. UCLA.
[28] Howard, K., et al. (2006). Korean Pop Music: Riding the Wave. Global Oriental.
[29] Kim, Y. E., et al. (2010). Music emotion recognition: A state of the art review. In Proceedings of ISMIR.
[30] Yang, Y.-H., et al. (2008). A regression approach to music emotion recognition. IEEE Transactions on Audio, Speech, and Language Processing, 16(2), 448-457.
[31] 林芷伊 (2012). 基於多重結構分析聆聽情緒相似度之音樂資訊檢索 [Music information retrieval based on multi-structure analysis of listening-emotion similarity] (Master's thesis). 國立交通大學, 新竹市. Retrieved from https://hdl.handle.net/11296/kk56db
[32] Hoashi, K., Matsumoto, K., & Inoue, N. (2003). Personalization of user profiles for content-based music retrieval based on relevance feedback. In Proceedings of the 11th ACM International Conference on Multimedia.
[33] Typke, R., Wiering, F., & Veltkamp, R. C. (2005). A survey of music information retrieval systems. In Proceedings of the 6th International Conference on Music Information Retrieval. Queen Mary, University of London.
[34] McKay, C. (2004). Automatic genre classification of MIDI recordings (Thesis). McGill University, Canada.
[35] Dixon, S., Gouyon, F., & Widmer, G. (2004). Towards characterisation of music via rhythmic patterns. In Proceedings of ISMIR.
[36] Shih, H.-H., Narayanan, S. S., & Kuo, C.-C. (2001). Automatic main melody extraction from MIDI files with a modified Lempel-Ziv algorithm. In Proceedings of the 2001 International Symposium on Intelligent Multimedia, Video and Speech Processing (ISIMP 2001). IEEE.
[37] Zhao, F., Wu, Y., & Su, J. (2007). Melody extraction method from polyphonic MIDI based on melodic features. Jisuanji Gongcheng / Computer Engineering, 33(2), 165-167.
[38] Cui, B., et al. (2006). Exploring composite acoustic features for efficient music similarity query. In Proceedings of the 14th ACM International Conference on Multimedia.
[39] Tseng, Y. H. (2000). Music indexing and retrieval for digital music libraries. In Proceedings of the Fifth Joint Conference on Information Sciences (JCIS 2000).
[40] 蔡寶德 (2009). 利用機器學習分析數位鋼琴演奏情緒之線上輔助學習系統 [An online assisted-learning system using machine learning to analyze emotion in digital piano performance] (Master's thesis). 國立新竹教育大學, 新竹市. Retrieved from https://hdl.handle.net/11296/un6c5f
[41] Uitdenbogerd, A. L., & Zobel, J. (1998). Manipulation of music for melody matching. In Proceedings of the 6th ACM International Conference on Multimedia.
[42] Ghias, A., et al. (1995). Query by humming: Musical information retrieval in an audio database. In Proceedings of the 3rd ACM International Conference on Multimedia.
[43] Lee, W., & Chen, A. L. (1999). Efficient multifeature index structures for music data retrieval. In Storage and Retrieval for Media Databases 2000. International Society for Optics and Photonics.
[44] Hu, X., Downie, J. S., & Ehmann, A. F. (2009). Lyric text mining in music mood classification. In Proceedings of the 10th International Society for Music Information Retrieval Conference (ISMIR).
[45] Jareanpon, C., et al. (2018). Automatic lyrics classification system using text mining technique. In 2018 International Workshop on Advanced Image Technology (IWAIT). IEEE.
[46] Scholes, P. A. (1960). The Oxford Companion to Music: Self-Indexed and with a Pronouncing Glossary and Over 1,100 Portraits and Pictures. Oxford University Press.
[47] Campbell, M., & Brody, J. (2007). Rock and Roll: An Introduction. Cengage Learning.
[48] Benward, B. (2014). Music in Theory and Practice, Volume 1. McGraw-Hill Higher Education.
[49] Watson, C. (2003). The Everything Songwriting Book: All You Need to Create and Market Hit Songs. Simon and Schuster.
[50] Davidson, M., & Heartwood, K. (1996). Songwriting for Beginners. Alfred Music Publishing.
[51] Kang, H., & Kouh, H. (2013). Music pattern analysis of K-POP. Journal of Digital Convergence, 11(3), 95-100.
[52] Davidson, J., & Heartwood, K. (1996). Songwriting for Beginners: An Easy Beginning Method. Alfred Music.
[53] 何旻璟 (2004). 以主題為基礎的音樂結構性分析 [Theme-based structural analysis of music] (Master's thesis). 國立政治大學, 台北市. Retrieved from https://hdl.handle.net/11296/u8jb3m
[54] Stein, L. (1962). Structure and Style: The Study and Analysis of Musical Forms. Summy-Birchard Company.
[55] 柯文傑 (2004). 利用分段技術擷取出重要的音樂片段 [Extracting important music segments using segmentation techniques] (Master's thesis). 國立成功大學, 台南市. Retrieved from https://hdl.handle.net/11296/y9kddz
[56] 胡勝揚 (2015). 自動化歌曲分段和主副歌辨識 [Automatic song segmentation and verse/chorus identification] (Master's thesis). 國立交通大學, 新竹市. Retrieved from https://hdl.handle.net/11296/968hsq
[57] Widmer, G., & Goebl, W. (2004). Computational models of expressive music performance: The state of the art. Journal of New Music Research, 33(3), 203-216.
[58] Juslin, P. N., & Sloboda, J. (2011). Handbook of Music and Emotion: Theory, Research, Applications. Oxford University Press.
[59] Wieczorkowska, A., et al. (2005). Extracting emotions from music data. In International Symposium on Methodologies for Intelligent Systems. Springer.
[60] Hevner, K. (1937). The affective value of pitch and tempo in music. The American Journal of Psychology, 49(4), 621-630.
[61] Farnsworth, P. R. (1958). The Social Psychology of Music.
[62] Pearce, M. T., & Halpern, A. R. (2015). Age-related patterns in emotions evoked by music. Psychology of Aesthetics, Creativity, and the Arts, 9(3), 248.
[63] Thayer, R. E. (1990). The Biopsychology of Mood and Arousal. Oxford University Press.
[64] Seo, Y.-S., & Huh, J.-H. (2019). Automatic emotion-based music classification for supporting intelligent IoT applications. Electronics, 8(2), 164.
[65] Watson, D., Clark, L. A., & Tellegen, A. (1988). Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology, 54(6), 1063.
[66] Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), 1161.
[67] Banse, R., & Scherer, K. R. (1996). Acoustic profiles in vocal emotion expression. Journal of Personality and Social Psychology, 70(3), 614.
[68] Watson, D., & Tellegen, A. (1985). Toward a consensual structure of mood. Psychological Bulletin, 98(2), 219.
[69] Schimmack, U., & Grob, A. (2000). Dimensional models of core affect: A quantitative comparison by means of structural equation modeling. European Journal of Personality, 14(4), 325-345.
[70] Bigand, E., et al. (2005). Multidimensional scaling of emotional responses to music: The effect of musical expertise and of the duration of the excerpts. Cognition & Emotion, 19(8), 1113-1139.
[71] Zentner, M., & Eerola, T. (2010). Self-report measures and models.
[72] Schlosberg, H. (1954). Three dimensions of emotion. Psychological Review, 61(2), 81.
[73] Greenberg, D. M., et al. (2016). The song is you: Preferences for musical attribute dimensions reflect personality. Social Psychological and Personality Science, 7(6), 597-605.
[74] Fricke, K. R., et al. (2018). Computer-based music feature analysis mirrors human perception and can be used to measure individual music preference. Journal of Research in Personality, 75, 94-102.
[75] 鄺世銘 (2019). 以卷積神經網路預測以Valence & Arousal二維情緒座標表示之音樂情緒 [Predicting music emotion represented in two-dimensional valence-arousal coordinates with convolutional neural networks] (Master's thesis). 輔仁大學, 新北市. Retrieved from https://hdl.handle.net/11296/y9a7t9
[76] 邱慧珊 (2013). 基於生理訊號變化即時偵測音樂誘發情緒研究 [Real-time detection of music-induced emotion based on physiological signal changes] (Master's thesis). 國立交通大學, 新竹市. Retrieved from https://hdl.handle.net/11296/6dz2bu
[77] Fritz, T., et al. (2009). Universal recognition of three basic emotions in music. Current Biology, 19(7), 573-576.
[78] Kennaway, J. (2011). Musical hypnosis: Sound and selfhood from Mesmerism to brainwashing. Social History of Medicine, 25(2), 271-289.
[79] Diserens, C. M. (1926). The Influence of Music on Behavior.
[80] Hodges, D. A. (2009). Bodily responses to music. In The Oxford Handbook of Music Psychology (pp. 121-130).
[81] 蔡振家 (2019). 從內模仿到訊息整合:再探音樂引發情緒的機制 [From inner imitation to information integration: Revisiting the mechanisms of music-induced emotion]. 藝術評論, (37), 1-49.
[82] Juslin, P. N., & Västfjäll, D. (2008). Emotional responses to music: The need to consider underlying mechanisms. Behavioral and Brain Sciences, 31(5), 559.
[83] Juslin, P. N. (2013). From everyday emotions to aesthetic emotions: Towards a unified theory of musical emotions. Physics of Life Reviews, 10(3), 235-266.
[84] Marchionini, G. (2006). Exploratory search: From finding to understanding. Communications of the ACM, 49(4), 41-46.
[85] 蔡光明 (2009). 監督式與非監督式機器學習技術應用於商品評論的文件探勘之研究 [A study of supervised and unsupervised machine learning techniques for document mining of product reviews] (Master's thesis). 國立高雄應用科技大學, 高雄市. Retrieved from https://hdl.handle.net/11296/28t9n7
[86] MacQueen, J. (1967). Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability. Oakland, CA, USA.
[87] Dave, M. A. (2006). Review of "Information Theory, Inference, and Learning Algorithms by David J. C. MacKay", Cambridge University Press, 2003. SIGACT News, 37(4), 34-36.
[88] Haggblade, M., Hong, Y., & Kao, K. (2011). Music genre classification. Department of Computer Science, Stanford University.
[89] 劉寧漢 (2005). 複音音樂資料庫之近似搜尋 [Approximate search in polyphonic music databases] (Doctoral dissertation). 國立清華大學, 新竹市. Retrieved from https://hdl.handle.net/11296/77t6h7
[90] 王皓昱 (2014). 利用Kinect裝置自動產生舞蹈系統 [An automatic dance generation system using the Kinect device] (Master's thesis). 國立中正大學, 嘉義縣. Retrieved from https://hdl.handle.net/11296/92b429
[91] Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B (Methodological), 39(1), 1-22.
[92] Moon, T. K. (1996). The expectation-maximization algorithm. IEEE Signal Processing Magazine, 13(6), 47-60.
[93] Zweig, M. H., & Campbell, G. (1993). Receiver-operating characteristic (ROC) plots: A fundamental evaluation tool in clinical medicine. Clinical Chemistry, 39(4), 561-577.
[94] 楊利英, 覃征, & 張選平 (2005). 分類器模擬算法及其應用 [Classifier simulation algorithms and their applications]. 西安交通大學學報, 39(12), 1311-1314.
[95] Grachten, M., & Widmer, G. (2012). Linear basis models for prediction and analysis of musical expression. Journal of New Music Research, 41(4), 311-322.
[96] Gales, M. J. (1998). Maximum likelihood linear transformations for HMM-based speech recognition. Computer Speech & Language, 12(2), 75-98.
[97] Yang, Y.-H., & Chen, H. H. (2011). Music Emotion Recognition. CRC Press.
[98] Seber, G. A., & Lee, A. J. (2012). Linear Regression Analysis (Vol. 329). John Wiley & Sons.
[99] Choi, J. B., & Maliangkay, R. (2015). K-pop: The International Rise of the Korean Music Industry. Routledge.
[100] Xu, W. W., Park, J.-y., & Park, H. W. (2017). Longitudinal dynamics of the cultural diffusion of Kpop on YouTube. Quality & Quantity, 51(4), 1859-1875.
[101] 沈子涵 (2012). 江南style影響大,有望首次助韓國文化實現年順差 ["Gangnam Style" has a major impact and is expected to help Korean culture achieve its first annual cultural trade surplus].
[102] Hu, X., et al. (2014). A cross-cultural study of mood in K-pop songs. In Proceedings of the 15th International Society for Music Information Retrieval Conference.
[103] 李家恕 (2018). 韓國流行音樂樂曲之全球化創新策略:K-Pop特殊製程之文本分析 [Globalization and innovation strategies of Korean pop songs: A textual analysis of the distinctive K-pop production process] (Master's thesis). 世新大學, 臺北市. Retrieved from https://hdl.handle.net/11296/dmr47p
[104] Mapelli, F., & Lancini, R. (2003). Audio hashing technique for automatic song identification. In International Conference on Information Technology: Research and Education (ITRE 2003). IEEE.
[105] Lancini, R., Mapelli, F., & Pezzano, R. (2004). Audio content identification by using perceptual hashing. In 2004 IEEE International Conference on Multimedia and Expo (ICME). IEEE.
[106] Streich, S., & Herrera, P. (2005). Detrended fluctuation analysis of music signals: Danceability estimation and further semantic characterization. In Proceedings of the AES 118th Convention.
[107] Krause, A. E., North, A. C., & Hewitt, L. Y. (2015). Music-listening in everyday life: Devices and choice. Psychology of Music, 43(2), 155-170.