[1] K. Cho, B. van Merrienboer, D. Bahdanau, and Y. Bengio, “On the properties of neural machine translation: Encoder-decoder approaches,” CoRR, vol. abs/1409.1259, 2014. vii, 9, 10
[2] J. Chung, Ç. Gülçehre, K. Cho, and Y. Bengio, “Empirical evaluation of gated recurrent neural networks on sequence modeling,” CoRR, vol. abs/1412.3555, 2014. vii, 10
[3] N. Nishida and H. Nakayama, “Multimodal gesture recognition using multi-stream recurrent neural network,” in PSIVT, 2015. vii, 11, 12, 13, 17
[4] M. S. Ryoo, “Human activity prediction: Early recognition of ongoing activities from streaming videos,” in 2011 International Conference on Computer Vision (ICCV), 2011. 3, 5
[5] M. S. A. Akbarian, F. Saleh, M. Salzmann, B. Fernando, L. Petersson, and L. Andersson, “Encouraging LSTMs to anticipate actions very early,” in 2017 IEEE International Conference on Computer Vision (ICCV), 2017. 3, 5
[6] T. Lan, T.-C. Chen, and S. Savarese, “A hierarchical representation for future action prediction,” in 2014 European Conference on Computer Vision (ECCV), 2014. 3, 5
[7] J. Gao, Z. Yang, and R. Nevatia, “RED: Reinforced encoder-decoder networks for action anticipation,” in BMVC, 2017. 3, 5
[8] C. Wu, J. Zhang, B. Selman, S. Savarese, and A. Saxena, “Watch-Bot: Unsupervised learning for reminding humans of forgotten actions,” in 2016 IEEE International Conference on Robotics and Automation (ICRA), 2016. 4, 6
[9] B. Soran, A. Farhadi, and L. Shapiro, “Generating notifications for missing actions: Don’t forget to turn the lights off!,” in 2015 IEEE International Conference on Computer Vision (ICCV), 2015. 4, 6, 7
[10] T.-Y. Wu, T.-A. Chien, C.-S. Chan, C.-W. Hu, and M. Sun, “Anticipating daily intention using on-wrist motion triggered sensing,” in International Conference on Computer Vision (ICCV), 2017. 4, 7, 16, 17, 20, 21
[11] C.-S. Chan, S.-Z. Chen, P.-X. Xie, C.-C. Chang, and M. Sun, “Recognition from hand cameras: A revisit with deep learning,” in 2016 European Conference on Computer Vision (ECCV), 2016. 4, 7
[12] K. Ohnishi, A. Kanehira, A. Kanezaki, and T. Harada, “Recognizing activities of daily living with a wrist-mounted camera,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 4, 7
[13] N. Rhinehart and K. M. Kitani, “First-person activity forecasting with online inverse reinforcement learning,” in The IEEE International Conference on Computer Vision (ICCV), 2017. 5, 7
[14] T. Mahmud, M. Hasan, A. Chakraborty, and A. K. Roy-Chowdhury, “A Poisson process model for activity forecasting,” in 2016 IEEE International Conference on Image Processing (ICIP), 2016. 5
[15] T. Mahmud, M. Hasan, and A. K. Roy-Chowdhury, “Joint prediction of activity labels and starting times in untrimmed videos,” in 2017 IEEE International Conference on Computer Vision (ICCV), 2017. 5
[16] R. D. Baruah, M. Singh, D. Baruah, and I. S. Misra, “Predicting activity occurrence time in smart homes with evolving fuzzy models,” in 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2017. 5
[17] B. Minor and D. Cook, “Forecasting occurrences of activities,” in Pervasive and Mobile Computing, 2016. 5
[18] B. D. Minor, J. R. Doppa, and D. J. Cook, “Learning activity predictors from sensor data: Algorithms, evaluation, and applications,” in IEEE Transactions on Knowledge and Data Engineering, 2017. 5
[19] “CASAS dataset.” http://ailab.wsu.edu/casas/datasets.html. [Online; accessed 11-March-2018]. 5
[20] D. J. Patterson, D. Fox, H. Kautz, and M. Philipose, “Fine-grained activity recognition by aggregating abstract object usage,” in Proceedings of the Ninth IEEE International Symposium on Wearable Computers, ISWC ’05, (Washington, DC, USA), pp. 44–51, IEEE Computer Society, 2005. 6
[21] B. Logan, J. Healey, M. Philipose, E. M. Tapia, and S. Intille, “A long-term evaluation of sensing modalities for activity recognition,” in Proceedings of the 9th International Conference on Ubiquitous Computing, UbiComp ’07, (Berlin, Heidelberg), pp. 483–500, Springer-Verlag, 2007. 6
[22] J. Wu, A. Osuntogun, T. Choudhury, M. Philipose, and J. M. Rehg, “A scalable approach to activity recognition based on object use,” in 2007 IEEE 11th International Conference on Computer Vision (ICCV), pp. 1–8, 2007. 6
[23] E. M. Tapia, S. S. Intille, and K. Larson, “Activity recognition in the home using simple and ubiquitous sensors,” in Pervasive (A. Ferscha and F. Mattern, eds.), vol. 3001 of Lecture Notes in Computer Science, pp. 158–175, Springer, 2004. 6
[24] D. Fortin-Simard, J.-S. Bilodeau, K. Bouchard, S. Gaboury, B. Bouchard, and A. Bouzouane, “Exploiting passive RFID technology for activity recognition in smart homes,” pp. 1–8, Feb. 2015. 6
[25] A. Fathi, A. Farhadi, and J. M. Rehg, “Understanding egocentric activities,” in Proceedings of the 2011 International Conference on Computer Vision, ICCV ’11, 2011. 6
[26] Y. Li, Z. Ye, and J. M. Rehg, “Delving into egocentric actions,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015. 6
[27] B. Soran, A. Farhadi, and L. Shapiro, “Action recognition in the presence of one egocentric and multiple static cameras,” 2015. 6
[28] S. Singh, C. Arora, and C. V. Jawahar, “First person action recognition using deep learned descriptors,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 6
[29] M. Ma, H. Fan, and K. M. Kitani, “Going deeper into first-person activity recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 6
[30] S. Z. Bokhari and K. M. Kitani, “Long-term activity forecasting using first-person vision,” in ACCV, 2016. 7
[31] G. Bertasius and J. Shi, “Using cross-model egosupervision to learn cooperative basketball intention,” in The IEEE International Conference on Computer Vision (ICCV) Workshops, 2017. 7
[32] B. Zhang, L. Wang, Z. Wang, Y. Qiao, and H. Wang, “Real-time action recognition with enhanced motion vector CNNs,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 17
[33] C. Wu, M. Zaheer, H. Hu, R. Manmatha, A. J. Smola, and P. Krähenbühl, “Compressed video action recognition,” CoRR, vol. abs/1712.00636, 2017. 17