[1] N. Ahmed, J. Rafiq, and M. Islam. Enhanced human activity recognition based on smartphone sensor data using hybrid feature selection model. Sensors, 20(1):317:1–317:19, 2020.
[2] O. Amft and G. Tröster. Recognition of dietary activity events using on-body sensors. Artificial Intelligence in Medicine, 42(2):121–136, 2008.
[3] S. An and U. Ogras. Mars: mmwave-based assistive rehabilitation system for smart healthcare. ACM Transactions on Embedded Computing Systems, 20(5s):1–22, 2021.
[4] D. Anguita, A. Ghio, L. Oneto, X. Parra Perez, and J. L. Reyes Ortiz. A public domain dataset for human activity recognition using smartphones. In Proc. of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), pages 437–442, 2013.
[5] S. Balli, E. Sağbaş, and M. Peker. Human activity recognition from smart watch sensor data using a hybrid of principal component analysis and random forest algorithm. Measurement and Control, 52(1-2):37–45, 2019.
[6] F. Baradel, C. Wolf, and J. Mille. Human activity recognition with pose-driven attention to rgb. In Proc. of the British Machine Vision Conference (BMVC), pages 1–14, 2018.
[7] V. Bazarevsky, I. Grishchenko, K. Raveendran, T. Zhu, F. Zhang, and M. Grundmann. Blazepose: On-device real-time body pose tracking. arXiv preprint arXiv:2006.10204, 2020.
[8] S. Bhalla, M. Goel, and R. Khurana. Imu2doppler: Cross-modal domain adaptation for doppler-based activity recognition using imu data. Proc. of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 5(4):145:1–145:20, 2021.
[9] G. Bhat, N. Tran, H. Shill, and U. Ogras. w-har: An activity recognition dataset and framework using low-power wearable devices. Sensors, 20(18):5356, 2020.
[10] Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh. Realtime multi-person 2d pose estimation using part affinity fields. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7291–7299, 2017.
[11] J. Cheng, B. Zhou, K. Kunze, C. C. Rheinländer, S. Wille, N. Wehn, J. Weppner, and P. Lukowicz. Activity recognition and nutrition monitoring in every day situations with a textile capacitive neckband. In Proc. of the ACM Conference on Pervasive and Ubiquitous Computing (UbiComp), pages 155–158, 2013. Demo paper.
[12] Dfintech. Cisco visual networking index: Forecast and methodology, 2016-2021, 2022.
[13] M. Farooq and E. Sazonov. A novel wearable device for food intake and physical activity recognition. Sensors, 16(7):1067, 2016.
[14] M. Farooq and E. Sazonov. Accelerometer-based detection of food intake in free-living individuals. IEEE Sensors Journal, 18(9):3752–3758, 2018.
[15] R. Fisher, S. Blunsden, and E. Andrade. Behave: Computer-assisted prescreening of video streams for unusual activities, 2011.
[16] R. Fisher, J. Santos-Victor, and J. Crowley. Caviar: Context aware vision using image-based active recognition, 2011.
[17] A. Franco, A. Magnani, and D. Maio. A multimodal approach for human activity recognition based on skeleton and rgb data. Pattern Recognition Letters, 131:293–299, 2020.
[18] D. Garcia-Gonzalez, D. Rivero, E. Fernandez-Blanco, and M. Luaces. A public domain dataset for real-life human activity recognition using smartphone sensors. Sensors, 20(8):2200, 2020.
[19] G. Gkioxari, B. Hariharan, R. Girshick, and J. Malik. Using k-poselets for detecting people and localizing their keypoints. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3582–3589, 2014.
[20] P. Gong, C. Wang, and L. Zhang. Mmpoint-gnn: Graph neural network with dynamic edges for human activity recognition through a millimeter-wave radar. In Proc. of the International Joint Conference on Neural Networks (IJCNN), pages 1–7, 2021.
[21] L. Gorelick, M. Blank, E. Shechtman, M. Irani, and R. Basri. Actions as space-time shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(12):2247–2253, 2007.
[22] L. Guo, L. Wang, C. Lin, J. Liu, B. Lu, J. Fang, Z. Liu, Z. Shan, J. Yang, and S. Guo. Wiar: A public dataset for wifi-based activity recognition. IEEE Access, 7:154935–154945, 2019.
[23] L. Harnack, L. Steffen, D. Arnett, S. Gao, and R. Luepker. Accuracy of estimation of large food portions. Journal of the American Dietetic Association, 104(5):804–806, 2004.
[24] M. Hassan, M. Uddin, A. Mohamed, and A. Almogren. A robust human activity recognition system using smartphone sensors and deep learning. Future Generation Computer Systems, 81:307–313, 2018.
[25] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[26] S. He, S. Li, A. Nag, S. Feng, T. Han, S. Mukhopadhyay, and W. Powell. A comprehensive review of the use of sensors for food intake detection. Sensors and Actuators A: Physical, 315:112318:1–112318:16, 2020.
[27] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[28] J. Hu, W. Zheng, J. Lai, and J. Zhang. Jointly learning heterogeneous features for rgb-d activity recognition. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5344–5352, 2015.
[29] Y. Huang, W. Li, Z. Dou, W. Zou, A. Zhang, and Z. Li. Activity recognition based on millimeter-wave radar by fusing point cloud and range–doppler information. Signals, 3(2):266–283, 2022.
[30] A. Iosifidis, E. Marami, A. Tefas, and I. Pitas. Eating and drinking activity recognition based on discriminant analysis of fuzzy distances and activity volumes. In Proc. of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2201–2204, 2012.
[31] A. Jain and V. Kanhangad. Human activity classification in smartphones using accelerometer and gyroscope sensors. IEEE Sensors Journal, 18(3):1169–1177, 2017.
[32] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017.
[33] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25, 2012.
[34] H. Liu and T. Schultz. A wearable real-time human activity recognition system using biosensors integrated into a knee bandage. In Proc. of the International Conference on Biomedical Electronics and Devices, pages 47–55, 2019.
[35] A. Logacjov, K. Bach, A. Kongsvold, H. B. Bårdstu, and P. J. Mork. Harth: A human activity recognition dataset for machine learning. Sensors, 21(23):7853, 2021.
[36] S. Mekruksavanich and A. Jitpattanakul. Smartwatch-based human activity recognition using hybrid lstm network. pages 1–4, 2020.
[37] D. Micucci, M. Mobilio, and P. Napoletano. Unimib shar: A dataset for human activity recognition using acceleration data from smartphones. Applied Sciences, 7(10):1101, 2017.
[38] W. Min, S. Jiang, L. Liu, Y. Rui, and R. Jain. A survey on food computing. ACM Computing Surveys, 52(5):1–36, 2019.
[39] W. Min, L. Liu, Z. Luo, and S. Jiang. Ingredient-guided cascaded multi-attention network for food recognition. In Proc. of the ACM International Conference on Multimedia (MM), pages 1331–1339, 2019.
[40] A. Moin, A. Zhou, A. Rahimi, A. Menon, S. Benatti, G. Alexandrov, S. Tamakloe, J. Ting, N. Yamamoto, Y. Khan, et al. A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition. Nature Electronics, 4(1):54–63, 2021.
[41] G. Papandreou, T. Zhu, N. Kanazawa, A. Toshev, J. Tompson, C. Bregler, and K. Murphy. Towards accurate multi-person pose estimation in the wild. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4903–4911, 2017.
[42] L. Pishchulin, E. Insafutdinov, S. Tang, B. Andres, M. Andriluka, P. V. Gehler, and B. Schiele. Deepcut: Joint subset partition and labeling for multi person pose estimation. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4929–4937, 2016.
[43] J. Qi, G. Jiang, G. Li, Y. Sun, and B. Tao. Intelligent human-computer interaction based on surface emg gesture recognition. IEEE Access, 7:61378–61387, 2019.
[44] N. Rashid, M. Dautta, P. Tseng, and M. Faruque. Hear: Fog-enabled energy-aware online human eating activity recognition. IEEE Internet of Things Journal, 8(2):860–868, 2020.
[45] A. Salehzadeh, A. Calitz, and J. Greyling. Human activity recognition using deep electroencephalography learning. Biomedical Signal Processing and Control, 62:102094, 2020.
[46] C. Schuldt, I. Laptev, and B. Caputo. Recognizing human actions: A local svm approach. In Proc. of the International Conference on Pattern Recognition (ICPR), pages III:32–III:36, 2004.
[47] N. Selamat and S. Ali. Automatic food intake monitoring based on chewing activity: A survey. IEEE Access, 8:48846–48869, 2020.
[48] A. Sengupta and S. Cao. mmpose-nlp: A natural language processing approach to precise skeletal pose estimation using mmwave radars. IEEE Transactions on Neural Networks and Learning Systems, 2022.
[49] A. Sengupta, F. Jin, R. Zhang, and S. Cao. mm-pose: Real-time human skeletal posture estimation using mmwave radars and cnns. IEEE Sensors Journal, 20(17):10032–10044, 2020.
[50] A. Shahroudy, J. Liu, T.-T. Ng, and G. Wang. Ntu rgb+d: A large scale dataset for 3d human activity analysis. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1010–1019, 2016.
[51] L. Shi, Y. Zhang, J. Cheng, and H. Lu. Two-stream adaptive graph convolutional networks for skeleton-based action recognition. In Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12026–12035, 2019.
[52] N. Sikder and A.-A. Nahid. Ku-har: An open dataset for heterogeneous human activity recognition. Pattern Recognition Letters, 146:46–54, 2021.
[53] A. Singh, S. Sandha, L. Garcia, and M. Srivastava. Radhar: Human activity recognition from point clouds generated through a millimeter-wave radar. In Proc. of the ACM Workshop on Millimeter-wave Networks and Sensing Systems (mmNets), pages 51–56, 2019.
[54] T. Singh and D. Vishwakarma. A deeply coupled convnet for human activity recognition using dynamic and rgb images. Neural Computing and Applications, 33(1):469–485, 2021.
[55] A. Stisen, H. Blunck, S. Bhattacharya, T. Prentow, M. Kjaergaard, A. Dey, T. Sonne, and M. Jensen. Smart devices are different: Assessing and mitigating mobile sensing heterogeneities for activity recognition. In Proc. of the ACM Conference on Embedded Networked Sensor Systems (SenSys), pages 127–140, 2015.
[56] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015.
[57] Texas Instruments. Iwr1443 data sheet, product information and support — ti.com, 2023.
[58] Texas Instruments. Iwr1443boost evaluation module mmwave sensing solution - user's guide, 2020.
[59] K. Verma and B. Singh. Deep multi-model fusion for human activity recognition using evolutionary algorithms. International Journal of Interactive Multimedia & Artificial Intelligence, 7(2), 2021.
[60] C. Wang, T. S. Kumar, W. De Raedt, G. Camps, H. Hallez, and B. Vanrumste. Eat-radar: Continuous fine-grained eating gesture detection using fmcw radar and 3d temporal convolutional network. arXiv preprint arXiv:2211.04253, 2022.
[61] C. Wang, Z. Lin, Y. Xie, X. Guo, Y. Ren, and Y. Chen. Wieat: Fine-grained device-free eating monitoring leveraging wi-fi signals. pages 1–9, 2020.
[62] K. Wang, Q. Wang, F. Xue, and W. Chen. 3d-skeleton estimation based on commodity millimeter wave radar. In Proc. of the IEEE International Conference on Computer and Communications (ICCC), pages 1339–1343, 2020.
[63] Y. Wang, H. Liu, K. Cui, A. Zhou, W. Li, and H. Ma. m-activity: Accurate and real-time human activity recognition via millimeter wave radar. In Proc. of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8298–8302, 2021.
[64] G. Weiss, K. Yoneda, and T. Hayajneh. Smartphone and smartwatch-based biometrics using activities of daily living. IEEE Access, 7:133190–133202, 2019.
[65] A. Wellnitz, J. Wolff, C. Haubelt, and T. Kirste. Fluid intake recognition using inertial sensors. In Proc. of the International Workshop on Sensor-based Activity Recognition and Interaction (iWOAR), pages 1–7, 2019.
[66] Z. Wharton, A. Behera, Y. Liu, and N. Bessis. Coarse temporal attention network (cta-net) for driver's activity recognition. In Proc. of the IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1279–1289, 2021.
[67] Y.-H. Wu, Y. Chen, S. Shirmohammadi, and C.-H. Hsu. Ai-assisted food intake activity recognition using 3d mmwave radars. In Proc. of the ACM International Workshop on Multimedia Assisted Dietary Management (MADiMa), pages 81–89, 2022.
[68] Y.-H. Wu, H.-C. Chiang, S. Shirmohammadi, and C.-H. Hsu. A dataset of food intake activities using sensors with heterogeneous privacy sensitivity levels. In Proc. of the ACM Multimedia Systems Conference, pages 416–422, 2023.
[69] Y. Xie, R. Jiang, X. Guo, Y. Wang, J. Cheng, and Y. Chen. mmeat: Millimeter wave-enabled environment-invariant eating behavior monitoring. Smart Health, 23:10023:1–10023:8, 2022.
[70] S. Yan, Y. Xiong, and D. Lin. Spatial temporal graph convolutional networks for skeleton-based action recognition. In Proc. of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
[71] K. Yatani and K. Truong. Bodyscope: A wearable acoustic sensor for activity recognition. In Proc. of the ACM Conference on Ubiquitous Computing (UbiComp), pages 341–350, 2012.
[72] L. Zelnik-Manor and M. Irani. Event-based analysis of video. In Proc. of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pages II:123–II:130, 2001.
[73] L. Zhang. GitHub repository: radar-lab/ti_mmwave_rospkg, 2019.
[74] M. Zhang and A. A. Sawchuk. Usc-had: A daily activity dataset for ubiquitous activity recognition using wearable sensors. In Proc. of the ACM Conference on Ubiquitous Computing (UbiComp), pages 1036–1043, 2012.