Author (Chinese): 吳逸竑
Author (English): Wu, Yi-Hung
Title (Chinese): 使用毫米波雷達進行隱私保護之人體進食行為辨識
Title (English): Food Intake Activity Recognition Based on Privacy-Preserving mmWave Radars
Advisor (Chinese): 徐正炘
Advisor (English): Hsu, Cheng-Hsin
Committee Members (Chinese): 黃敬群, 黃俊穎, 郭柏志
Committee Members (English): Huang, Ching-Chun; Huang, Chun-Ying; Kuo, Po-Chih
Degree: Master's
Institution: National Tsing Hua University
Department: Department of Computer Science
Student ID: 110062518
Year of Publication (ROC era): 112 (2023)
Graduation Academic Year: 112
Language: English
Number of Pages: 64
Keywords (Chinese): 行為辨識, 機器學習, 毫米波雷達, 資料集, 人體骨架預測
Keywords (English): Human activity recognition, Machine learning, mmWave radar, Dataset, Skeleton pose estimation
Abstract (Chinese): This thesis investigates human food intake activity recognition using mmWave radar technology and presents a public dataset comprising RGB camera, depth camera, and mmWave radar data, in which 24 participants perform 12 different table-side activities recorded by instruments with different levels of privacy sensitivity. We propose four algorithms. The first, FIA, combines several preprocessing techniques (such as voxelization, bounding box construction, and trilinear interpolation) with a CNN+Bi-LSTM neural network classifier, achieving 91.49% accuracy in the global setup and surpassing the then state of the art among voxelization-based methods. To address memory and storage efficiency, we introduce DPR, another end-to-end algorithm that uses mmWave point cloud features directly, reducing GPU memory usage by 79.38% and saving about 90% of disk space. In terms of accuracy, DPR reaches 95.59% in the global setup and 72.46% in the leave-one-out setup, the highest among all our algorithms. We also introduce a pipeline that predicts skeletal features and uses them as classifier inputs. Our proposed SPE and SPE+ models, built on a ResNet architecture that likewise takes the point coordinates, signal strength, and velocity of the mmWave point cloud as input features, excel at skeleton estimation and outperform the existing MARS and mmPose-NLP models with the smallest error distances. For classifiers that take skeleton data as input, we directly adapt the widely used ST-GCN and 2s-AGCN models. These models surpass DPR in the global setup with 98.68% accuracy but reach only 57.87% in the leave-one-out setup. However, when the ideal skeleton (MediaPipe Pose) is used as input, the classifier achieves 82.42% accuracy, highlighting the potential of GCN models. This research contributes novel methods, algorithms, and a diverse dataset, making substantial advances in accuracy, memory efficiency, and privacy considerations.
Abstract (English): This thesis explores the use of mmWave radar technology for recognizing human food intake activities and introduces a dataset comprising data from an RGB camera, a depth camera, and mmWave radars, in which 24 participants perform 12 food intake activities recorded by sensors with heterogeneous privacy sensitivity levels. Four algorithms are presented. FIA, the initial algorithm, combines several preprocessing techniques (voxelization, bounding boxes, and trilinear interpolation) with a CNN+Bi-LSTM neural network classifier, achieving 91.49% accuracy in the global setup and surpassing voxelization-based methods. To address memory and classification issues, DPR, another end-to-end algorithm, directly uses mmWave point cloud features, reducing GPU memory usage by 79.38%. DPR also conserves 90% of disk space and achieves 95.59% accuracy globally, with the best accuracy of 72.46% in the leave-one-out setup. A pipeline for predicting skeleton features (SPE and SPE+) is introduced. These models outperform existing models such as MARS and mmPose-NLP, boasting the smallest error distances. Modified ST-GCN and 2s-AGCN models achieve 98.68% accuracy in the global setup but only 57.87% in the leave-one-out setup. However, utilizing the ideal skeleton (MediaPipe Pose) results in 82.42% accuracy, highlighting the GCN models' potential. This research presents innovative approaches, algorithms, and a diverse dataset, with advancements in accuracy, memory efficiency, and privacy considerations.
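
The FIA pipeline named in the abstract (voxelization, bounding boxes, trilinear interpolation, then a CNN+Bi-LSTM classifier) can be pictured with a short sketch. The grid size, bounding box, and layer widths below are illustrative assumptions rather than the thesis's actual configuration, and the trilinear interpolation step is omitted.

# A minimal sketch (not the thesis code) of an FIA-style pipeline: voxelize each mmWave
# point-cloud frame into a fixed grid, then classify the frame sequence with a small 3D-CNN
# encoder followed by a Bi-LSTM. All sizes below are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

def voxelize(points, grid=(16, 16, 16), bbox=((-1, 1), (-1, 1), (0, 2))):
    """Count mmWave points (N x 3, meters) falling into each cell of a fixed bounding box."""
    vox = np.zeros(grid, dtype=np.float32)
    for dim, (lo, hi) in enumerate(bbox):            # keep only points inside the box
        points = points[(points[:, dim] >= lo) & (points[:, dim] < hi)]
    for p in points:
        idx = tuple(int((v - lo) / (hi - lo) * g)
                    for v, (lo, hi), g in zip(p, bbox, grid))
        vox[idx] += 1.0
    return vox

class CnnBiLstm(nn.Module):
    """Per-frame 3D-CNN features -> Bi-LSTM over the frame sequence -> activity logits."""
    def __init__(self, num_classes=12, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Flatten())                            # 32 * 4^3 = 2048 features per frame
        self.lstm = nn.LSTM(2048, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, frames):                       # frames: (batch, time, 1, 16, 16, 16)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])                 # classify from the last time step

# Example: a clip of 20 radar frames, each voxelized from a random point cloud.
clip = torch.stack([torch.from_numpy(voxelize(np.random.randn(200, 3))) for _ in range(20)])
logits = CnnBiLstm()(clip.unsqueeze(0).unsqueeze(2))  # shape (1, 12)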
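
The abstract also contrasts a "global" setup with a "leave-one-out" setup. The sketch below, which assumes one sample per participant-activity pair purely for illustration, shows how the two splits differ: the global split mixes participants across train and test, while each leave-one-out fold holds out every sample of one participant.

# A minimal sketch contrasting the two evaluation protocols named above; the sample
# layout is an assumption for illustration, not the dataset's actual file format.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, train_test_split

rng = np.random.default_rng(0)
features = rng.normal(size=(24 * 12, 64))      # one dummy feature vector per (participant, activity)
labels = np.tile(np.arange(12), 24)            # 12 activity classes
participants = np.repeat(np.arange(24), 12)    # 24 participants

# Global setup: samples from every participant may appear in both train and test.
X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.2, stratify=labels, random_state=0)

# Leave-one-out setup: each fold tests on one participant never seen during training.
for tr_idx, te_idx in LeaveOneGroupOut().split(features, labels, participants):
    held_out = participants[te_idx][0]
    # train on features[tr_idx]; evaluate on features[te_idx] (participant held_out)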
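
For the "ideal skeleton (MediaPipe Pose)" mentioned above, a hedged sketch of extracting per-frame landmarks from an RGB frame might look as follows; the frame path is hypothetical and the thesis's exact preprocessing is not reproduced here.

# A minimal sketch of obtaining an "ideal" skeleton with MediaPipe Pose from one RGB frame.
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose(static_image_mode=False)
frame = cv2.imread("rgb_frame.png")                             # hypothetical frame path
results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # MediaPipe expects RGB input
if results.pose_landmarks:
    # 33 landmarks with normalized x, y and relative z, usable as skeleton input to a GCN classifier
    joints = [(lm.x, lm.y, lm.z) for lm in results.pose_landmarks.landmark]
pose.close()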
Table of Contents:
1 Introduction
2 Related Work
  2.1 Food Intake Activity Recognition
  2.2 Skeletal Pose Estimation
  2.3 Food Intake Activity Datasets
    2.3.1 Coarse-Grained Activities with Rich-Media Sensors
    2.3.2 Food-Intake Activities with Wearable Sensors
    2.3.3 Food-Intake Activities with mmWave Radars
3 Background
  3.1 Human Activity Recognition
  3.2 In-situ Sensors & Radars
  3.3 Food Intake Activity Recognition
4 Problem Statement
5 Proposed Solutions
  5.1 Overview
  5.2 Food Intake Activity (FIA)
  5.3 Dynamic Point Cloud Recognizer (DPR)
  5.4 Skeletal Pose Estimator (SPE)
    5.4.1 Motivation
    5.4.2 Proposed Solution
  5.5 Graph Convolution Network (GCN)
    5.5.1 Motivation
    5.5.2 Proposed Solution
6 Dataset
  6.1 Sensors
  6.2 Dataset Collection
  6.3 Skeleton Generation
7 Evaluations with Global Models
  7.1 FIA Algorithm
    7.1.1 Setup
    7.1.2 Test Results
  7.2 DPR Algorithm
    7.2.1 Setup
    7.2.2 Test Results
  7.3 SPE Algorithm
    7.3.1 Setup
    7.3.2 Test Results
  7.4 Graph Convolution Network (GCN)
    7.4.1 Setup
    7.4.2 Test Results
8 Evaluations with Leave-One-Out Models
  8.1 FIA Algorithm
    8.1.1 Setup
    8.1.2 Test Results
  8.2 DPR Algorithm
    8.2.1 Setup
    8.2.2 Test Results
  8.3 SPE Algorithm
    8.3.1 Setup
    8.3.2 Test Results
  8.4 GCN Algorithm
    8.4.1 Setup
    8.4.2 Test Results
9 Conclusions & Future Work
  9.1 Concluding Remarks
  9.2 Future Work
Bibliography
[1] N. Ahmed, J. Rafiq, and M. Islam. Enhanced human activity recognition based on
smartphone sensor data using hybrid feature selection model. Sensors, 20(1):317:1–
317:19, 2020.
[2] O. Amft and G. Tröster. Recognition of dietary activity events using on-body sensors. Artificial Intelligence in Medicine, 42(2):121–136, 2008.
[3] S. An and U. Ogras. Mars: mmwave-based assistive rehabilitation system for smart
healthcare. ACM Transactions on Embedded Computing Systems, 20(5s):1–22,
2021.
[4] D. Anguita, A. Ghio, L. Oneto, X. Parra Perez, and J. L. Reyes Ortiz. A public
domain dataset for human activity recognition using smartphones. In Proc. of the
International European Symposium on Artificial Neural Networks, Computational
Intelligence and Machine Learning (ESANN), pages 437–442, 2013.
[5] S. Balli, E. Sağbaş, and M. Peker. Human activity recognition from smart watch sensor data using a hybrid of principal component analysis and random forest algorithm. Measurement and Control, 52(1-2):37–45, 2019.
[6] F. Baradel, C. Wolf, and J. Mille. Human activity recognition with pose-driven
attention to rgb. In Proc. of British Machine Vision Conference (BMVC), pages
1–14, 2018.
[7] V. Bazarevsky, I. Grishchenko, K. Raveendran, T. Zhu, F. Zhang, and M. Grundmann. Blazepose: On-device real-time body pose tracking. arXiv preprint
arXiv:2006.10204, 2020.
[8] S. Bhalla, M. Goel, and R. Khurana. Imu2doppler: Cross-modal domain adaptation
for doppler-based activity recognition using imu data. Proc. of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 5(4):145:1–145:20, 2021.
[9] G. Bhat, N. Tran, H. Shill, and U. Ogras. w-har: An activity recognition dataset and
framework using low-power wearable devices. Sensors, 20(18):5356, 2020.

[10] Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh. Realtime multi-person 2d pose estimation using part affinity fields. In Proc. of the IEEE conference on computer vision
and pattern recognition, pages 7291–7299, 2017.
[11] J. Cheng, B. Zhou, K. Kunze, C. C. Rheinländer, S. Wille, N. Wehn, J. Weppner, and
P. Lukowicz. Activity recognition and nutrition monitoring in every day situations
with a textile capacitive neckband. In Proc. of the ACM Conference on Pervasive
and Ubiquitous Computing (UbiComp), pages 155–158, 2013. Demo Paper.
[12] Dfintech. Cisco visual networking index: Forecast and methodology, 2016-2021,
2022.
[13] M. Farooq and E. Sazonov. A novel wearable device for food intake and physical
activity recognition. Sensors, 16(7):1067, 2016.
[14] M. Farooq and E. Sazonov. Accelerometer-based detection of food intake in free-living individuals. IEEE Sensors Journal, 18(9):3752–3758, 2018.
[15] R. Fisher, S. Blunsden, and E. Andrade. Behave: Computer-assisted prescreening
of video streams for unusual activities, 2011.
[16] R. Fisher, J. Santos-Victor, and J. Crowley. Caviar: Context aware vision using
image-based active recognition, 2011.
[17] A. Franco, A. Magnani, and D. Maio. A multimodal approach for human activity
recognition based on skeleton and rgb data. Pattern Recognition Letters, 131:293–
299, 2020.
[18] D. Garcia-Gonzalez, D. Rivero, E. Fernandez-Blanco, and M. Luaces. A public
domain dataset for real-life human activity recognition using smartphone sensors.
Sensors, 20(8):2200, 2020.
[19] G. Gkioxari, B. Hariharan, R. Girshick, and J. Malik. Using k-poselets for detecting
people and localizing their keypoints. In Proc. of the IEEE Conference on Computer
Vision and Pattern Recognition, pages 3582–3589, 2014.
[20] P. Gong, C. Wang, and L. Zhang. Mmpoint-gnn: Graph neural network with dynamic edges for human activity recognition through a millimeter-wave radar. In
Proc. of International Joint Conference on Neural Networks (IJCNN), pages 1–7,
2021.
[21] L. Gorelick, M. Blank, E. Shechtman, M. Irani, and R. Basri. Actions as space-time shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(12):2247–2253, 2007.
[22] L. Guo, L. Wang, C. Lin, J. Liu, B. Lu, J. Fang, Z. Liu, Z. Shan, J. Yang, and
S. Guo. Wiar: A public dataset for wifi-based activity recognition. IEEE Access,
7:154935–154945, 2019.
[23] L. Harnack, L. Steffen, D. Arnett, S. Gao, and R. Luepker. Accuracy of estimation of
large food portions. Journal of the American Dietetic Association, 104(5):804–806,
2004.
[24] M. Hassan, M. Uddin, A. Mohamed, and A. Almogren. A robust human activity
recognition system using smartphone sensors and deep learning. Future Generation
Computer Systems, 81:307–313, 2018.
[25] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition.
In Proc. of the IEEE conference on computer vision and pattern recognition, pages
770–778, 2016.
[26] S. He, S. Li, A. Nag, S. Feng, T. Han, S. Mukhopadhyay, and W. Powel. A comprehensive review of the use of sensors for food intake detection. Sensors and Actuators
A: Physical, 315:112318:1–112318:16, 2020.
[27] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation,
9(8):1735–1780, 1997.
[28] J. Hu, W. Zheng, J. Lai, and J. Zhang. Jointly learning heterogeneous features for
rgb-d activity recognition. In Proc. of the IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), pages 5344–5352, 2015.
[29] Y. Huang, W. Li, Z. Dou, W. Zou, A. Zhang, and Z. Li. Activity recognition based
on millimeter-wave radar by fusing point cloud and range–doppler information. Signals, 3(2):266–283, 2022.
[30] A. Iosifidis, E. Marami, A. Tefas, and I. Pitas. Eating and drinking activity recognition based on discriminant analysis of fuzzy distances and activity volumes. In
Proc. of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2201–2204, 2012.
[31] A. Jain and V. Kanhangad. Human activity classification in smartphones using accelerometer and gyroscope sensors. IEEE Sensors Journal, 18(3):1169–1177, 2017.
[32] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, et al. The kinetics human action video dataset.
arXiv preprint arXiv:1705.06950, 2017.
[33] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep
convolutional neural networks. Advances in neural information processing systems,
25, 2012.
[34] H. Liu and T. Schultz. A wearable real-time human activity recognition system using
biosensors integrated into a knee bandage. In Proc. of International Conference on
Biomedical Electronics and Devices, pages 47–55, 2019.
[35] A. Logacjov, K. Bach, A. Kongsvold, H. B. Bårdstu, and P. J. Mork. Harth: A human activity recognition dataset for machine learning. Sensors, 21(23):7853, 2021.
[36] S. Mekruksavanich and A. Jitpattanakul. Smartwatch-based human activity recognition using hybrid lstm network. pages 1–4, 2020.
[37] D. Micucci, M. Mobilio, and P. Napoletano. Unimib shar: A dataset for human
activity recognition using acceleration data from smartphones. Applied Sciences,
7(10):1101, 2017.
[38] W. Min, S. Jiang, L. Liu, Y. Rui, and R. Jain. A survey on food computing. ACM
Computing Surveys, 52(5):1–36, 2019.
[39] W. Min, L. Liu, Z. Luo, and S. Jiang. Ingredient-guided cascaded multi-attention
network for food recognition. In Proc. ACM International Conference on Multimedia (MM), pages 1331–1339, 2019.
[40] A. Moin, A. Zhou, A. Rahimi, A. Menon, S. Benatti, G. Alexandrov, S. Tamakloe, J. Ting, N. Yamamoto, Y. Khan, et al. A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition. Nature Electronics,
4(1):54–63, 2021.
[41] G. Papandreou, T. Zhu, N. Kanazawa, A. Toshev, J. Tompson, C. Bregler, and
K. Murphy. Towards accurate multi-person pose estimation in the wild. In Proc.
of the IEEE conference on computer vision and pattern recognition, pages 4903–
4911, 2017.
[42] L. Pishchulin, E. Insafutdinov, S. Tang, B. Andres, M. Andriluka, P. V. Gehler, and
B. Schiele. Deepcut: Joint subset partition and labeling for multi person pose estimation. In Proc. of the IEEE conference on computer vision and pattern recognition,
pages 4929–4937, 2016.
[43] J. Qi, G. Jiang, G. Li, Y. Sun, and B. Tao. Intelligent human-computer interaction
based on surface emg gesture recognition. IEEE Access, 7:61378–61387, 2019.
[44] N. Rashid, M. Dautta, P. Tseng, and M. Faruque. Hear: Fog-enabled energy-aware online human eating activity recognition. IEEE Internet of Things Journal,
8(2):860–868, 2020.
[45] A. Salehzadeh, A. Calitz, and J. Greyling. Human activity recognition using
deep electroencephalography learning. Biomedical Signal Processing and Control,
62:102094, 2020.
[46] C. Schuldt, I. Laptev, and B. Caputo. Recognizing human actions: A local svm
approach. In Proc. of the International Conference on Pattern Recognition (ICPR),
pages III:32–III:36, 2004.
[47] N. Selamat and S. Ali. Automatic food intake monitoring based on chewing activity:
A survey. IEEE Access, 8:48846–48869, 2020.
[48] A. Sengupta and S. Cao. mmpose-nlp: A natural language processing approach
to precise skeletal pose estimation using mmwave radars. IEEE Transactions on
Neural Networks and Learning Systems, 2022.
[49] A. Sengupta, F. Jin, R. Zhang, and S. Cao. mm-pose: Real-time human skeletal posture estimation using mmwave radars and cnns. IEEE Sensors Journal,
20(17):10032–10044, 2020.
[50] A. Shahroudy, J. Liu, T.-T. Ng, and G. Wang. Ntu rgb+ d: A large scale dataset for
3d human activity analysis. In Proc. of the IEEE conference on computer vision and
pattern recognition, pages 1010–1019, 2016.
[51] L. Shi, Y. Zhang, J. Cheng, and H. Lu. Two-stream adaptive graph convolutional networks for skeleton-based action recognition. In Proc. of the IEEE/CVF conference
on computer vision and pattern recognition, pages 12026–12035, 2019.
[52] N. Sikder and A.-A. Nahid. Ku-har: An open dataset for heterogeneous human
activity recognition. Pattern Recognition Letters, 146:46–54, 2021.
[53] A. Singh, S. Sandha, L. Garcia, and M. Srivastava. Radhar: Human activity recognition from point clouds generated through a millimeter-wave radar. In Proc. of
the ACM Workshop on Millimeter-wave Networks and Sensing Systems (mmNets),
pages 51–56, 2019.
[54] T. Singh and D. Vishwakarma. A deeply coupled convnet for human activity
recognition using dynamic and rgb images. Neural Computing and Applications,
33(1):469–485, 2021.
[55] A. Stisen, H. Blunck, S. Bhattacharya, T. Prentow, M. Kjaergaard, A. Dey, T. Sonne,
and M. Jensen. Smart devices are different: Assessing and mitigating mobile sensing
heterogeneities for activity recognition. In Proc. of ACM Conference on Embedded
Networked Sensor Systems (SenSys), pages 127–140, 2015.
[56] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proc. of the IEEE
conference on computer vision and pattern recognition, pages 1–9, 2015.
[57] Texas Instruments. Iwr1443 data sheet, product information and support — ti.com,
2023.
[58] Texas Instruments. Iwr1443boost evaluation module mmwave sensing solution -
user’s guide, 2020.
[59] K. Verma and B. Singh. Deep multi-model fusion for human activity recognition
using evolutionary algorithms. International Journal of Interactive Multimedia &
Artificial Intelligence, 7(2), 2021.
[60] C. Wang, T. S. Kumar, W. De Raedt, G. Camps, H. Hallez, and B. Vanrumste. Eat-radar: Continuous fine-grained eating gesture detection using fmcw radar and 3d
temporal convolutional network. arXiv preprint arXiv:2211.04253, 2022.
[61] C. Wang, Z. Lin, Y. Xie, X. Guo, Y. Ren, and Y. Chen. Wieat: Fine-grained device-free eating monitoring leveraging wi-fi signals. pages 1–9, 2020.
[62] K. Wang, Q. Wang, F. Xue, and W. Chen. 3d-skeleton estimation based on commodity millimeter wave radar. In 2020 IEEE 6th International Conference on Computer
and Communications (ICCC), pages 1339–1343. IEEE, 2020.
[63] Y. Wang, H. Liu, K. Cui, A. Zhou, W. Li, and H. Ma. m-activity: Accurate and
real-time human activity recognition via millimeter wave radar. In ICASSP 2021-
2021 IEEE International Conference on Acoustics, Speech and Signal Processing
(ICASSP), pages 8298–8302, 2021.
[64] G. Weiss, K. Yoneda, and T. Hayajneh. Smartphone and smartwatch-based biometrics using activities of daily living. IEEE Access, 7:133190–133202, 2019.
[65] A. Wellnitz, J. Wolff, C. Haubelt, and T. Kirste. Fluid intake recognition using inertial sensors. In Proc. of international Workshop on Sensor-based Activity Recognition and Interaction (iWOAR), pages 1–7, 2019.
[66] Z. Wharton, A. Behera, Y. Liu, and N. Bessis. Coarse temporal attention network
(cta-net) for driver’s activity recognition. In Proc. of IEEE Winter Conference on
Applications of Computer Vision (WACV), pages 1279–1289, 2021.
[67] Y.-H. Wu, Y. Chen, S. Shirmohammadi, and C.-H. Hsu. Ai-assisted food intake
activity recognition using 3d mmwave radars. In Proc. of the ACM International
Workshop on Multimedia Assisted Dietary Management (MADiMa), pages 81–89,
2022.
[68] Y.-H. Wu, H.-C. Chiang, S. Shirmohammadi, and C.-H. Hsu. A dataset of food
intake activities using sensors with heterogeneous privacy sensitivity levels. In Proc.
of the 14th Conference on ACM Multimedia Systems, pages 416–422, 2023.
[69] Y. Xie, R. Jiang, X. Guo, Y. Wang, J. Cheng, and Y. Chen. mmeat: Millimeter wave-enabled environment-invariant eating behavior monitoring. Smart Health,
23:10023:1–10023:8, 2022.
[70] S. Yan, Y. Xiong, and D. Lin. Spatial temporal graph convolutional networks for
skeleton-based action recognition. In Proc. of the AAAI conference on artificial
intelligence, volume 32, 2018.
[71] K. Yatani and K. Truong. Bodyscope: a wearable acoustic sensor for activity recognition. In Proc. of ACM Conference on Ubiquitous Computing (UbiComp), pages
341–350, 2012.
[72] L. Zelnik-Manor and M. Irani. Event-based analysis of video. In Proc. of Computer
Society Conference on Computer Vision and Pattern Recognition (CVPR), volume 2,
pages II:123–II:130, 2001.
[73] L. Zhang. GitHub: radar-lab/ti_mmwave_rospkg, 2019.
[74] M. Zhang and A. A. Sawchuk. Usc-had: A daily activity dataset for ubiquitous activity recognition using wearable sensors. In Proc. of ACM Conference on Ubiquitous
Computing (UbiComp), pages 1036–1043, 2012.