[1] L. Baraldi, F. Paci, G. Serra, L. Benini, and R. Cucchiara, "Gesture recognition in ego-centric videos using dense trajectories and hand segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2014, pp. 688–693.
[2] A. L. Berger, V. J. Della Pietra, and S. A. Della Pietra, "A maximum entropy approach to natural language processing," Computational Linguistics, vol. 22, no. 1, pp. 39–71, 1996.
[3] S.-W. Hsiao, H.-C. Sun, M.-C. Hsieh, M.-H. Tsai, H.-C. Lin, and C.-C. Lee, "A multimodal approach for automatic assessment of school principals' oral presentation during pre-service training program," in INTERSPEECH, 2015, pp. 2529–2533.
[4] F. Eyben, M. Wöllmer, and B. Schuller, "openSMILE: the Munich versatile and fast open-source audio feature extractor," in Proceedings of the 18th ACM International Conference on Multimedia, 2010, pp. 1459–1462.
[5] P. S. Keung, "Continuing professional development of principals in Hong Kong," Frontiers of Education in China, vol. 2, no. 4, pp. 605–619, 2007.
[6] H. D. Kim, C. Zhai, and J. Han, "Aggregation of multiple judgments for evaluating ordered lists," in Advances in Information Retrieval. Springer, 2010, pp. 166–178.
[7] C.-C. Lee, E. Mower, C. Busso, S. Lee, and S. Narayanan, "Emotion recognition using a hierarchical binary decision tree approach," Speech Communication, vol. 53, no. 9, pp. 1162–1171, 2011.
[8] C. M. Lee and S. S. Narayanan, "Toward detecting emotions in spoken dialogs," IEEE Transactions on Speech and Audio Processing, vol. 13, no. 2, pp. 293–303, 2005.
[9] H. Gunes, M. Piccardi, and M. Pantic, From the Lab to the Real World: Affect Recognition Using Multiple Cues and Modalities. InTech Education and Publishing, 2008.
[10] I. Muslea, S. Minton, and C. A. Knoblock, "Selective sampling with redundant views," in AAAI/IAAI, 2000, pp. 621–626.
[11] S. Narayanan and P. G. Georgiou, "Behavioral signal processing: Deriving human behavioral informatics from speech and language," Proceedings of the IEEE, vol. 101, no. 5, pp. 1203–1233, 2013.
[12] M. Prince, "Does active learning work? A review of the research," Journal of Engineering Education, vol. 93, no. 3, pp. 223–231, 2004.
[13] D. S. Cheng, H. Salamin, P. Salvagnini, M. Cristani, A. Vinciarelli, and V. Murino, "Predicting online lecture ratings based on gesturing and vocal behavior," Journal on Multimodal User Interfaces, vol. 8, no. 2, pp. 151–160, 2014.
[14] P. Salvagnini, H. Salamin, M. Cristani, A. Vinciarelli, and V. Murino, "Learning how to teach from 'videolectures': Automatic prediction of lecture ratings based on teacher's nonverbal behavior," in IEEE 3rd International Conference on Cognitive Infocommunications (CogInfoCom), 2012, pp. 415–419.
[15] G. Schohn and D. Cohn, "Less is more: Active learning with support vector machines," in ICML, 2000, pp. 839–846.
[16] B. Schuller, S. Steidl, A. Batliner, F. Burkhardt, L. Devillers, C. Müller, and S. Narayanan, "Paralinguistics in speech and language – state-of-the-art and the challenge," Computer Speech & Language, vol. 27, no. 1, pp. 4–39, 2013.
[17] B. Settles, "Active learning literature survey," Computer Sciences Technical Report 1648, University of Wisconsin–Madison, 2010.
[18] A. Tamrakar, S. Ali, Q. Yu, J. Liu, O. Javed, A. Divakaran, H. Cheng, and H. Sawhney, "Evaluation of low-level features and their combinations for complex event detection in open source videos," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012, pp. 3681–3688.
[19] C. Busso, Z. Deng, S. Yildirim, M. Bulut, C. M. Lee, A. Kazemzadeh, S. Lee, U. Neumann, and S. Narayanan, "Analysis of emotion recognition using facial expressions, speech and multimodal information," in Proceedings of the 6th International Conference on Multimodal Interfaces. ACM, 2004, pp. 205–211.
[20] E. Sariyanidi, H. Gunes, and A. Cavallaro, "Automatic analysis of facial affect: A survey of registration, representation, and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 6, pp. 1113–1133, 2015.
[21] H. Wang, A. Kläser, C. Schmid, and C.-L. Liu, "Action recognition by dense trajectories," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011, pp. 3169–3176.
[22] J. Zhu, H. Wang, T. Yao, and B. K. Tsou, "Active learning with sampling by uncertainty and density for word sense disambiguation and text classification," in Proceedings of the 22nd International Conference on Computational Linguistics, vol. 1. Association for Computational Linguistics, 2008, pp. 1137–1144.
[23] X. Zhu, "Semi-supervised learning literature survey," Computer Sciences Technical Report 1530, University of Wisconsin–Madison, 2005.
[24] X. Zhu and Z. Ghahramani, "Learning from labeled and unlabeled data with label propagation," Technical Report CMU-CALD-02-107, Carnegie Mellon University, 2002.
[25] F. Perronnin, J. Sánchez, and T. Mensink, "Improving the Fisher kernel for large-scale image classification," in Computer Vision – ECCV 2010. Springer, 2010, pp. 143–156.
[26] C.-C. Chang and C.-J. Lin, "LIBSVM: A library for support vector machines," ACM Transactions on Intelligent Systems and Technology (TIST), vol. 2, no. 3, p. 27, 2011.
[27] K. Chatfield, V. S. Lempitsky, A. Vedaldi, and A. Zisserman, "The devil is in the details: An evaluation of recent feature encoding methods," in BMVC, vol. 2, no. 4, 2011, pp. 8–19.
[28] L. Cosmides, "Invariances in the acoustic expression of emotion during speech," Journal of Experimental Psychology: Human Perception and Performance, vol. 9, no. 6, p. 864, 1983.
[29] M. Grimm, K. Kroschel, E. Mower, and S. Narayanan, "Primitives-based evaluation and estimation of emotions in speech," Speech Communication, vol. 49, no. 10, pp. 787–800, 2007.
[30] D. Sztahó, G. Kiss, and K. Vicsi, "Estimating the severity of Parkinson's disease from speech using linear regression and database partitioning," in Sixteenth Annual Conference of the International Speech Communication Association (INTERSPEECH), 2015.
[31] J. Kim, M. Nasir, R. Gupta, M. Van Segbroeck, D. Bone, M. Black, Z. I. Skordilis, Z. Yang, P. Georgiou, and S. Narayanan, "Automatic estimation of Parkinson's disease severity from diverse speech tasks," in Sixteenth Annual Conference of the International Speech Communication Association (INTERSPEECH), 2015.
[32] T. F. Quatieri and N. Malyska, "Vocal-source biomarkers for depression: A link to psychomotor activity," in INTERSPEECH, 2012, pp. 1059–1062.
[33] J. N. Kapur, Maximum-Entropy Models in Science and Engineering. John Wiley & Sons, 1989.
[34] E. T. Jaynes, "Clearing up mysteries—the original goal," in Maximum Entropy and Bayesian Methods. Springer Netherlands, 1989, pp. 1–27.
[35] P. J. Rousseeuw and A. M. Leroy, Robust Regression and Outlier Detection. John Wiley & Sons, 2005.
[36] S. Watson, T. Miller, L. Johnston, and V. Rutledge, "Professional development school graduate performance: Perceptions of school principals," The Teacher Educator, vol. 42, no. 2, pp. 77–86, 2006.
[37] D. L. Keith, "Principal desirability for professional development," Academy of Educational Leadership Journal, vol. 15, no. 2, p. 95, 2011.
[38] M. El Ayadi, M. S. Kamel, and F. Karray, "Survey on speech emotion recognition: Features, classification schemes, and databases," Pattern Recognition, vol. 44, no. 3, pp. 572–587, 2011.
[39] M. Karg, A.-A. Samadani, R. Gorbet, K. Kühnlenz, J. Hoey, and D. Kulić, "Body movements for affective expression: A survey of automatic recognition and generation," IEEE Transactions on Affective Computing, vol. 4, no. 4, pp. 341–359, 2013.
[40] E. Crane and M. Gross, "Motion capture and emotion: Affect detection in whole body movement," in Affective Computing and Intelligent Interaction. Springer, 2007, pp. 95–101.