Author (Chinese): 柯力瑋
Author (English): Ko, Li Wei
Title (Chinese): 開發一以心跳變異率與機器學習為基礎之個人化智慧型音樂選曲系統
Title (English): Developing a Personalized Intelligent Music Selection System Based on Heart Rate Variability and Machine Learning
Advisor (Chinese): 邱銘傳
Advisor (English): Chiu, Ming chuan
Committee Members (Chinese): 張堅琦, 盧俊銘
Degree: Master
Institution: National Tsing Hua University
Department: Department of Industrial Engineering and Engineering Management
Student ID: 102034544
Publication Year (ROC): 104 (2015)
Graduation Academic Year (ROC): 103
Language: English
Number of Pages: 80
Keywords (Chinese): 決策樹分析、穿戴式裝置、機器學習、心跳變異率
Keywords (English): Decision Tree, Wearable Device, Machine Learning, Heart Rate Variability
Abstract (Chinese, translated): Music has played a pivotal role throughout human history. Because it can influence emotion and performance, it has been widely applied to performance enhancement in areas such as sports and healthcare; for example, joggers who listen to suitable music can run longer. Most of the time, however, people choose songs purely by personal preference, which reduces the effect that music could otherwise achieve. Manually picking songs is also quite time-consuming, and finding music that matches one's current mood among so many available songs is even harder. This study therefore builds an intelligent music selection system. The method is based on music information retrieval, which can be used to search for and retrieve music of specific types, and the system is delivered as a mobile application (APP). By monitoring the user's heart rate variability, the system not only selects suitable music for the user to listen to but also keeps the user's emotion in a relaxed and pleasant state. This research is expected to provide effective music patterns for applications in therapeutic music composition and music therapy.
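The abstract above centers on monitoring heart rate variability (HRV) to drive song selection; Appendix E and reference [36] point to the time-domain SDNN index as the measured signal. Below is a minimal Python sketch of that computation, assuming RR intervals in milliseconds from the wearable sensor; the function name and sample values are illustrative, not taken from the thesis.

import numpy as np

def sdnn(rr_intervals_ms):
    """SDNN: standard deviation of normal-to-normal (RR) intervals,
    a standard time-domain HRV index (cf. Malik, 2008 [35])."""
    return float(np.std(rr_intervals_ms, ddof=1))

# Hypothetical excerpt of RR intervals (ms) from a wrist-worn sensor.
rr = np.array([812.0, 790.0, 805.0, 821.0, 798.0, 776.0, 804.0, 819.0])
print(f"SDNN = {sdnn(rr):.1f} ms")  # higher SDNN ~ greater variability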
Abstract (English): Music plays an important role in our daily lives. Since music can affect people's emotions, it has been widely applied to work and sports to enhance performance on particular tasks. For example, workers perform better when listening to music, and joggers run longer distances with suitable music. However, people usually select music according to personal preference, which may reduce this effect. This study therefore aims to establish an intelligent music selection system that provides appropriate songs for users working at home to enhance their performance. An emotional music database was established through data-mining classification. An innovative wearable sensing and computing system was applied to detect heart rate variability, from which users' emotions were predicted and appropriate songs were selected by application software (APP). Machine learning was employed to record user preferences and improve the precision of classification. The system can select appropriate music automatically by tracing the user's heart rate variability. More importantly, experimental validation showed that the system achieves high satisfaction, does not increase mental workload, keeps users pleasant, and improves performance. With the concept of the Internet of Things (IoT) and the implementation of diverse wearable devices, this system could provide further innovative applications for smart factories, smart homes, and health care.
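Per the keywords and abstract, a decision tree maps extracted music features to emotion classes when building the emotional music database. The following is a hedged scikit-learn sketch of that classification step; the feature columns (tempo, mode, mean pitch, loudness) and the training rows are assumptions for illustration, not the thesis's actual feature set or data.

from sklearn.tree import DecisionTreeClassifier
import numpy as np

# Hypothetical feature matrix: one row per song with
# [tempo_bpm, mode (0=minor, 1=major), mean_pitch, loudness_db].
X_train = np.array([
    [128.0, 1, 64.0,  -5.2],
    [ 70.0, 0, 52.0, -18.4],
    [140.0, 1, 70.0,  -4.0],
    [ 60.0, 0, 48.0, -20.1],
])
y_train = ["positive", "calm", "positive", "calm"]  # emotion labels

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)

# Tag a new song; the predicted class would index into the
# emotional music database described in the abstract.
print(clf.predict(np.array([[118.0, 1, 62.0, -6.5]])))

In the described pipeline, user feedback (songs played or skipped) would presumably be appended as new training examples so the classifier adapts to personal preference over time.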
Table of Contents
Abstract IV
Table of Contents V
List of Figures VII
List of Tables IX
1 Introduction 1
2 Literature Review 2
2.1 Music and Emotion 2
2.2 Emotion and Heart Rate Variability 3
2.3 Music Features 6
2.4 Machine Learning 9
2.5 Summary 10
3 Methodology 11
3.1 Phase I: Music Database Establishment 12
3.2 Phase II: Intelligent Music Selection Construction 15
4 Case Study 26
4.1 Phase I: Emotional Music Database Establishment 26
4.2 Phase II: Intelligent Music Selection System Establishment 33
4.3 Discussion 50
5 Conclusion 54
6 References 55
Appendix 59
Appendix A The analyzed result of training data 59
Appendix B The analyzed result of testing data 60
Appendix C The classified result of testing data 74
Appendix D The classified result of positive music 75
Appendix E The experiment results of SDNN signals 76
Appendix F The results of system usability scale questionnaires 77
Appendix G Reliability and validity of questionnaire 77
Appendix H The results of satisfaction questionnaires 78
Appendix I The results of NASA-TLX questionnaires 78
Appendix J The measurement model analysis 79
Appendix K The certification of IRB 82
[1] Mindlab. 2014. Does playing music at work increase productivity? Retrieved April 8, 2015, from http://themindlab.co.uk/
[2] Gray, E. 2013. Music: a therapy for all? Perspectives in public health, 133(1), 14
[3] Satoh, M., Ogawa, J. I., Tokita, T., Nakaguchi, N., Nakao, K., Kida, H., & Tomimoto, H. 2014. The effects of physical exercise with music on cognitive function of elderly people: Mihama-Kiho project. PloS one, 9(4), e95230.
[4] Kenny, D. 2004. Treatment Approaches for Music Performance Anxiety: What works? Music Forum.
[5] Juslin, P. N., & Laukka, P. 2004. Expression, perception, and induction of musical emotions: A review and a questionnaire study of everyday listening. Journal of New Music Research, 33(3), 217-238.
[6] Feng, Y., Zhuang, Y., & Pan, Y. 2003. Popular music retrieval by detecting mood. In Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval (pp. 375-376). ACM.
[7] Lu, L., Liu, D., & Zhang, H. J. 2006. Automatic mood detection and tracking of music audio signals. IEEE Transactions on Audio, Speech, and Language Processing, 14(1), 5-18.
[8] Yang, Y. H., Lin, Y. C., Su, Y. F., & Chen, H. H. 2008. A regression approach to music emotion recognition. IEEE Transactions on Audio, Speech, and Language Processing, 16(2), 448-457.
[9] Yang, Y. H., & Chen, H. H. 2012. Machine recognition of music emotion: A review. ACM Transactions on Intelligent Systems and Technology (TIST), 3(3), 40.
[10] Han, B. J., Rho, S., Jun, S., & Hwang, E. 2010. Music emotion classification and context-based music recommendation. Multimedia Tools and Applications, 47(3), 433-460.
[11] Russell, J. A. 1980. A circumplex model of affect. Journal of personality and social psychology, 39(6), 1161.
[12] Watson, D., Clark, L. A., & Tellegen, A. 1988. Development and validation of brief measures of positive and negative affect: the PANAS scales. Journal of personality and social psychology, 54(6), 1063.
[13] Levenson, Robert W. 2003. Blood, sweat, and fears. Annals of the New York Academy of Sciences 1000.1, 348-366.
[14] Eysenck, H. J., & Eysenck, M. W. 1985. Personality and individual differences: A natural science approach. New York: Plenum.
[15] Duda, R. O., Hart, P. E., & Stork, D. G. 2012. Pattern classification. John Wiley & Sons.
[16] Fu, Z., Lu, G., Ting, K. M., & Zhang, D. 2011. A survey of audio-based music classification and annotation. IEEE Transactions on Multimedia, 13(2), 303-319.
[17] Serra, J., Gómez, E., Herrera, P., & Serra, X. 2008. Chroma binary similarity and local alignment applied to cover song identification. IEEE Transactions on Audio, Speech, and Language Processing, 16(6), 1138-1151.
[18] Yang, Y. H., Lin, Y. C., Su, Y. F., & Chen, H. H. 2008. A regression approach to music emotion recognition. IEEE Transactions on Audio, Speech, and Language Processing, 16(2), 448-457.
[19] Shen, J., Shepherd, J., Cui, B., & Tan, K. L. 2009. A novel framework for efficient automated singer identification in large music databases. ACM Transactions on Information Systems (TOIS), 27(3), 18.
[20] Watson, K. B. 1942. The nature and measurement of musical meanings. Psychological Monographs: General and Applied, 54(2), i-43.
[21] Fairbanks, G., & Pronovost, W. 1939. An experimental study of the pitch characteristics of the voice during the expression of emotion. Communications Monographs, 6(1), 87-104.
[22] Fairbanks, G., & Hoaglin, L. W. 1941. An experimental study of the durational characteristics of the voice during the expression of emotion. Communications Monographs, 8(1), 85-90.
[23] Gabrielsson, A., Lindstrom, E. 2001. The influence of musical structure on emotional expression. In: Juslin, P.N., Sloboda, J.A. (Eds.), Music and Emotion: Theory and Research. Oxford University Press, Oxford, 223–248.
[24] Schellenberg, E. G., Krysciak, A. M., & Campbell, R. J. 2000. Perceiving emotion in melody: Interactive effects of pitch and rhythm. Music Perception, 155-171.
[25] Dalla Bella, S., Peretz, I., Rousseau, L., & Gosselin, N. 2001. A developmental study of the affective value of tempo and mode in music. Cognition, 80, 1-9.
[26] Khalfa, S., Schon, D., Anton, J. L., & Liégeois-Chauvel, C. 2005. Brain regions involved in the recognition of sadness and happiness in music. Neuroreport, 16(18), 1981-1984.
[27] Davitz, J. R. 1964. The Communication of Emotional Meaning. New York: McGraw-Hill.
[28] Fonagy, I., & Magdics, K. 1963. Emotional patterns in intonation and music. Zeitschrift für Phonetik 16(1-3), 293-326.
[29] Michalski, R. S., Carbonell, J. G., & Mitchell, T. M. (Eds.). 2013. Machine learning: An artificial intelligence approach. Springer Science & Business Media.
[30] Kohavi, R., & Provost, F. 1998. Glossary of terms. Machine Learning, 30(2-3), 271-274.
[31] Russell, S., & Norvig, P. 1995. Artificial Intelligence: A Modern Approach. Englewood Cliffs, NJ: Prentice Hall.
[32] Weiss, S. M., & Kulikowski, C. 1991. Computer Systems That Learn: Classification and Prediction Methods from Statistics, Neural Networks, Machine Learning, and Expert Systems. San Francisco, CA: Morgan Kaufmann.
[33] Hand, D. J. 1981. Discrimination and Classification. Chichester, UK: Wiley.
[34] Han, J., & Kamber, M. 2001. Data Mining: Concepts and Techniques. San Francisco, CA: Morgan Kaufmann.
[35] Malik, M. 2008. Standard measurement of heart rate variability. Dynamic electrocardiography, 13-21.
[36] Medicore, SA-3000P Clinical Manual ver. 3.0. Retrieved June 8, 2015, from http://medi-core.com/download/HRV_clinical_manual_ver3.0.pdf.
[37] Mioglobal®. Retrieved June 8, 2015, from https://www.mioglobal.com/Default.aspx.
[38] Samsung®. Retrieved June 28, 2015 from http://www.samsung.com/global/microsite/gear/gearlive_design.html
[39] Brooke, J. 1996. SUS-A quick and dirty usability scale. Usability evaluation in industry, 189(194), 4-7.
[40] Newman, K. 2001. Interrogating SERVQUAL: a critical assessment of service quality measurement in a high street retail bank. International journal of bank marketing, 19(3), 126-139.
[41] Hart, S. G., & Staveland, L. E. 1988. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. Advances in psychology, 52, 139-183.
[42] Hsu, Y-W., Chiu, M-C. & Hwang, S-L. 2014. Investigating the Relationship between Therapeutic Music and Emotion: A Pilot Study on Healthcare Service. In Proceedings of the 21st ISPE Inc. International Conference on Concurrent Engineering, 688-697.
[43] Nunnally, J. C. 1978. Psychometric Theory (2nd ed.). New York: McGraw-Hill.
[44] Fayers, P. M., & Machin, D. 2007. Scores and measurements: validity, reliability, sensitivity. Quality of Life: The Assessment, Analysis and Interpretation of Patient-Reported Outcomes, Second edition, 77-108.
[45] Bangor, A., Kortum, P., & Miller, J. 2009. Determining what individual SUS scores mean: Adding an adjective rating scale. Journal of usability studies, 4(3), 114-123.
[46] Parasuraman, A., Zeithaml, V. A., & Berry, L. L. 1985. A conceptual model of service quality and its implications for future research. Journal of Marketing, 49(4), 41-50.
[47] Bailey, J. E., & Pearson, S. W. 1983. Development of a tool for measuring and analyzing computer user satisfaction. Management science, 29(5), 530-545.
[48] Ives, B., Olson, M. H., & Baroudi, J. J. 1983. The measurement of user information satisfaction. Communications of the ACM, 26(10), 785-793.
[49] McClure, E. B., Pope, K., Hoberman, A. J., Pine, D. S., & Leibenluft, E. 2014. Facial expression recognition in adolescents with mood and anxiety disorders.
(Full text not authorized for public release)