
Detailed Record

Author (Chinese): 林畇劭
Author (English): Lin, Yun-Shao
Thesis Title (Chinese): 開發對談情境中語談者交互過程計算框架
Thesis Title (English): Building Conversation-Oriented Interlocutors Interaction Process Modeling Framework
Advisor (Chinese): 李祈均
Advisor (English): Lee, Chi-Chun
Committee Members (Chinese): 冀泰石、林嘉文、曹昱、洪樂文
Committee Members (English): Chi, Tai-Shih; Lin, Chia-Wen; Tsao, Yu; Hong, Yao-Win
Degree: Doctoral
Institution: National Tsing Hua University (國立清華大學)
Department: Department of Electrical Engineering (電機工程學系)
Student ID: 105061527
Publication Year: 111 (ROC calendar; 2022 CE)
Graduating Academic Year: 110 (2021–2022)
Language: Chinese
Number of Pages: 83
Keywords (Chinese): 人類行為訊號處理、對話、互動建模、表達行為、溝通功能
Keywords (English): Behavioral Signal Processing; Conversation; Interaction Modeling; Expressive Behavior; Communicative Function
Record statistics:
  • Recommendations: 0
  • Views: 312
  • Rating: *****
  • Downloads: 0
  • Bookmarks: 0
Interaction is a crucial way of connecting interpersonal relationships. Among the many diverse interaction scenarios, conversation is one of the most common and natural forms of daily interaction. Through speech and language as the primary channel, complemented by body movements and facial expressions, a wealth of information such as opinions, feelings, and emotions is exchanged under these complex reciprocal behavior patterns. In this dissertation, we attempt to build computational frameworks to comprehensively study two important facets of the conversation process: the interaction patterns of expressive behaviors between interlocutors, and the overall manifestation of communicative functions over the course of the interlocutors' dialogue. Because interactive behavior manifests in a highly heterogeneous way across contexts, we focus on two important interaction scenarios for studying the conversation process. First, we focus on the expressive behaviors of autism subgroups, aiming to automatically distinguish between-subgroup behavioral differences that have previously been difficult to identify directly; second, we focus on the communicative functions of work groups, aiming to automatically predict group interaction outcomes in a computational manner.
In this dissertation, we design computational frameworks that automatically recognize behavior patterns to address these in-domain problems. For expressive behaviors, we propose a Multimodal IM-aBLSTM network: by modeling the temporal progression of behavior and the synchrony of expressive behaviors during interaction, we build a temporal recurrent neural network with an interlocutor-modulated attention mechanism. By learning in greater depth the interaction behavior patterns between children in autism subgroups and the administrator during the ADOS assessment, the network highlights the differences between autism subgroups. For communicative functions, we propose a two-stage computational framework centered on interaction process analysis: in the first stage, a SIPA network combining supervised learning with an autoencoder architecture learns the behavioral manifestation of interlocutors' communicative intents; in the second stage, we predict team scores by integrating the highly abstract communicative intents into a representation of the overall interaction. Overall, across the different in-domain problems, our computational frameworks achieve better recognition results than current state-of-the-art algorithms, and by analyzing the recognition models we can further understand, from different perspectives, the various conversational behavior patterns between interlocutors during interaction.
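To make the interlocutor-modulated attention idea above concrete (one speaker's attention weights are derived from the other interlocutor's synchronous behavior), the following NumPy sketch shows a minimal, hypothetical pooling step. The function name, the single score vector `w`, and all shapes are illustrative assumptions, not the thesis's actual IM-aBLSTM implementation.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def interlocutor_modulated_pooling(h_child, h_psych, w):
    """Attention-pool one speaker's frame-level features, where the
    attention scores come from the OTHER interlocutor's aligned features.
    All names and shapes here are illustrative placeholders."""
    scores = h_psych @ w          # (T,) scores from the interlocutor's behavior
    alpha = softmax(scores)       # attention weights, sum to 1
    return alpha @ h_child        # (D,) weighted summary of the target speaker

rng = np.random.default_rng(0)
T, D = 20, 8
h_child = rng.normal(size=(T, D))   # e.g., BLSTM hidden states of one speaker
h_psych = rng.normal(size=(T, D))   # interlocutor's time-aligned features
w = rng.normal(size=D)
rep = interlocutor_modulated_pooling(h_child, h_psych, w)
print(rep.shape)  # (8,)
```

The key design point captured here is that the pooled representation of one speaker is conditioned on the other speaker's behavior, which is what lets the model surface interaction-level (rather than individual-level) differences.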
Interaction is a crucial means of maintaining interpersonal relationships. Among diverse interaction scenarios, conversation is one of the most prevalent and natural ways for humans to interact with others. Given the crucial role the conversation process plays in our daily life and the complexity of the behaviors manifested during it, we aim to comprehensively study the hybrid forms of the conversation process in this dissertation. Focusing on two important facets, expressive behaviors and communicative functions, we quantitatively study the conversation process by building effective computational frameworks that learn the interaction process between interlocutors.
Specifically, we design two domain-specific computational frameworks that perform automatic prediction tasks to solve the in-domain problems. First, to differentiate the behavioral differences between ASD subgroups, we propose a Multimodal IM-aBLSTM framework that models the progression of expressive behavior between interlocutors. Second, to automatically predict group performance scores, we propose an interaction-process-guided framework for learning representations of communicative functions in complex small-group conversations. Overall, our proposed computational frameworks show progress toward solving the in-domain problems, as their prediction performance outperforms state-of-the-art methods. Furthermore, our frameworks also shed light on the analysis of the interaction process between interlocutors during conversation.
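The abstract describes the first stage of the group-performance framework as a SIPA network that combines supervised learning with an autoencoder. The sketch below is a generic supervised-autoencoder joint objective, not the SIPA network itself: one shared latent code feeds both a reconstruction branch and a label-prediction branch. All weight matrices, dimensions, and the mixing factor `lam` are illustrative assumptions.

```python
import numpy as np

def sae_joint_loss(x, y_onehot, W_enc, W_dec, W_cls, lam=0.5):
    """Generic supervised-autoencoder objective: a shared latent code `z`
    is trained to both reconstruct the input and predict the label.
    Weights and `lam` are illustrative placeholders, not thesis settings."""
    z = np.tanh(x @ W_enc)                          # shared latent representation
    x_hat = z @ W_dec                               # reconstruction branch
    logits = z @ W_cls                              # supervised branch
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)               # row-wise softmax
    recon = np.mean((x - x_hat) ** 2)               # unsupervised term
    ce = -np.mean(np.log(p[np.arange(len(x)), y_onehot.argmax(1)] + 1e-9))
    return recon + lam * ce                         # joint objective

rng = np.random.default_rng(1)
n, d, k, c = 16, 10, 4, 3                 # samples, input dim, latent dim, classes
x = rng.normal(size=(n, d))               # e.g., per-utterance behavior features
y = np.eye(c)[rng.integers(0, c, size=n)] # e.g., communicative-function labels
loss = sae_joint_loss(x, y,
                      rng.normal(size=(d, k)),
                      rng.normal(size=(k, d)),
                      rng.normal(size=(k, c)))
print(loss > 0)  # True: both terms are non-negative
```

The design rationale of such a joint objective is that the reconstruction term regularizes the latent code so it retains behavioral detail, while the supervised term shapes it toward the label of interest; a downstream stage can then aggregate these codes over the whole interaction.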
Chinese Abstract
Abstract
Acknowledgements
Contents
List of Figures
List of Tables
1 Introduction
1.1 Background and Motivation
1.2 Research Goal
1.2.1 Expressive Behavior
1.2.2 Communicative Function
1.3 Dissertation Organization
2 Expressive Behavior: A Multimodal Interlocutor-Modulated Attention BLSTM for Classifying Autism Subgroups
2.1 Introduction
2.2 Methodology
2.2.1 The ADOS Audio-Video Database
2.2.2 Multimodal Interlocutor-Modulated Attentional BLSTM
2.3 Experimental Setup and Results
2.3.1 Experimental Setup
2.3.2 Experimental Results and Analysis
2.4 Conclusion and Future Works
3 Communicative Function: An Interaction Process Guided Framework for Group Performance Prediction
3.1 Introduction
3.2 Related Work
3.2.1 Computational Modeling of Group Task Performance
3.2.2 Multimodal Corpora with Group Task Performance
3.3 Methodology
3.3.1 The NTUBA Database
3.3.2 Interaction Process Guided Framework
3.4 Experimental Setup and Results
3.4.1 Experimental Setup
3.4.2 Experimental Results
3.4.3 Comparison with SOTA Learning Methods
3.4.4 Analysis of Model Parameters
3.4.5 Robustness of the Framework
3.4.6 Analysis of Communication Process
3.5 Conclusion and Future Works
4 Conclusion
4.1 Summary
4.2 Future Work
References
(Full text available for external access after 2027/05/10)

Related Theses

1. An automated scoring system for couples' interaction behavior ratings in marital therapy, built on a stacked sparse autoencoder algorithm using speech features
2. Stroke prediction based on National Health Insurance data, with Hadoop as a fast feature-extraction tool
3. A new framework for full-time emotion recognition models built on human thin-slice emotion perception
4. Applying multi-task and multimodal fusion techniques to building an automatic scoring system for principal candidates' speeches
5. Multimodal active learning for analyzing sample-label relations in building an automated scoring system for principal candidate evaluation
6. Improving speech emotion recognition by incorporating fMRI BOLD signals
7. Combining multi-level convolutional neural network features from fMRI to improve speech emotion recognition
8. Developing a behavior-measurement-based assessment system for children with autism on an embodied conversational interface
9. A multimodal continuous emotion recognition system and its application to global affect recognition
10. Integrating multi-level text representations and speech-attribute embeddings for robust automated scoring of principal candidates' speeches
11. Using joint factor analysis of temporal effects in brain MR neuroimaging to improve emotion recognition
12. An LSTM-based assessment system for identifying children with autism from ADOS interviews
13. A multimodal model mixing CNN and LSTM audio-visual features for automatic pain-level detection in emergency patients
14. Improving automated behavior scoring for marital therapy with a bidirectional LSTM mixing text modalities at multiple temporal granularities
15. Improving emotion recognition on a Chinese theatrical performance corpus through interaction features from performance transcripts
 