
Detailed Record

Author (Chinese): 鄭敬儒
Author (English): Cheng, Ching-Ju
Title (Chinese): 基於隱私保護和資料有效性的魚眼居家行為分析系統
Title (English): A Privacy Preserved Behavior Analysis System with Data Efficient Strategy via Fisheye Camera
Advisor (Chinese): 孫民
Advisor (English): Sun, Min
Committee (Chinese): 張永儒、黃敬群
Committee (English): Chang, Yung-Ju; Huang, Ching-Chun
Degree: Master
Institution: National Tsing Hua University
Department: Department of Electrical Engineering
Student ID: 107061532
Year of publication (ROC calendar): 109 (2020)
Graduation academic year: 108
Language: Chinese
Pages: 53
Keywords (Chinese): 行為分析、居家照護、隱私保護、電腦視覺、資料有效性
Keywords (English): Behavior Analysis; Home Care; Privacy Preserving; Computer Vision; Data Efficient
The elderly population is rising, and more seniors live alone or with only their spouses. As a result, their adult children often do not understand their parents' physical, mental, and living conditions, cannot offer suggestions for improvement, and cannot respond promptly in an emergency. If children could observe how their elderly parents live, they could give advice and potentially delay age-related decline, and emergencies could be reported in time to prevent accidents. Many studies have therefore analyzed the behavior of the elderly at home, but most have shortcomings, including restricted applications, lack of privacy protection, a narrow image field of view, and no consideration of realistic computing resources. The architecture proposed here captures video with a wide-angle fisheye camera, applies privacy-preserving processing to the images, and recognizes human bounding boxes and actions to analyze daily life; the privacy-protected images are also uploaded to the cloud so the system can keep learning. The system's features include: a wide field of view with an unrestricted recognition range; protection of users' private information; action recognition that further supports analysis of daily routines; a design based on edge computing, rather than cloud or home-computer computing, which incur higher network and monetary costs; and, to keep the system accurate across different environments, sampling and automatic annotation methods that use data effectively, reduce annotation labor and time, and let the system become accurate in new environments quickly. We hope that, in an aging society, this system will let children understand their parents' living conditions and give timely care and advice, so that the elderly maintain good physical and mental health.
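The edge-side flow the abstract describes (detect the person, privacy-protect the pixels, pass only the protected frame onward) can be sketched minimally. This is an illustrative sketch, not the thesis implementation: `detect_person` is a hypothetical stand-in for the real detector, the frame is a plain 2-D list of grayscale values, and the "privacy protection" here is a simple mean blur inside the detected box.

```python
# Illustrative edge pipeline: detect a person's bounding box, destroy
# identity detail inside it with a mean blur, and return only the
# protected frame plus box metadata for downstream action recognition.

def detect_person(frame):
    """Hypothetical detector stand-in: (top, left, bottom, right)."""
    return (1, 1, 3, 3)  # fixed box, for illustration only

def blur_region(frame, box, k=1):
    """Mean-filter pixels inside `box` with a (2k+1)x(2k+1) window."""
    top, left, bottom, right = box
    out = [row[:] for row in frame]  # do not mutate the input frame
    for y in range(top, bottom):
        for x in range(left, right):
            neigh = [frame[j][i]
                     for j in range(max(0, y - k), min(len(frame), y + k + 1))
                     for i in range(max(0, x - k), min(len(frame[0]), x + k + 1))]
            out[y][x] = sum(neigh) // len(neigh)
    return out

def process_frame(frame):
    box = detect_person(frame)
    protected = blur_region(frame, box)
    # Only `protected` (never the raw frame) would leave the edge device.
    return protected, box
```

In the full system the box and skeleton metadata, not raw pixels, would drive action recognition, and only the privacy-protected image would be uploaded to the cloud for continued learning.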
Acknowledgements ii
Abstract (Chinese) iii
Abstract (English) iv
1 Introduction 1
2 Related Work 4
2.1 Care Systems 4
2.2 Human Pose Estimation 5
2.3 Privacy Protection 5
3 Dataset 7
3.1 Collection Method 7
3.1.1 Collection Tools 7
3.1.2 Collection Process 7
3.2 Database Specification 7
4 Method 10
4.1 System Architecture 10
4.2 System Components 11
4.2.1 Camera 11
4.2.2 Edge Device 12
4.2.3 Cloud 12
4.3 Human Bounding Box and Pose Detection 12
4.3.1 Bounding Box Detection 12
4.3.2 Pose Detection 13
4.3.3 Skeleton 14
4.4 Privacy Protection 14
4.4.1 Perception of Privacy Protection 14
4.4.2 Reversibility of Privacy Protection 15
4.4.3 Scalability 15
4.5 Action Recognition 15
4.6 Data Efficiency 17
4.6.1 Active Sample 17
4.6.2 Self-annotation 18
4.7 Implementation Details 19
5 Experiments 20
5.1 Pose and Bounding Box Detection 20
5.1.1 Learning Curves 20
5.1.2 Stability 21
5.1.3 Computation Time 22
5.2 Privacy Protection 22
5.2.1 Learnability 23
5.2.2 Trade-off Between Privacy and Accuracy 24
5.3 Action Recognition 27
5.4 Data Efficiency 28
5.4.1 Active Sample 29
5.4.2 Self-annotation 30
6 Applications 32
6.1 Location Analysis 32
6.1.1 Space 32
6.1.2 Time 33
6.2 Action Analysis 34
6.2.1 Time 34
6.2.2 Space 34
6.3 Abnormal Situations 35
7 Conclusion and Future Work 36
7.1 Conclusion 36
7.2 Future Work 36
8 Appendix 38
8.1 Questionnaire 38
References 49
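The data-efficiency strategy (Sections 4.6.1 and 4.6.2) pairs active sampling with self-annotation: ambiguous frames go to a human annotator, while confident frames keep the detector's own prediction as a label. A minimal sketch of one plausible selection rule follows; the uncertainty criterion here (confidence closest to 0.5) is a hypothetical illustration, not the thesis's exact scoring.

```python
# Illustrative active-sampling split: given (frame_id, detector confidence)
# pairs and a human-annotation budget, send the most ambiguous frames to a
# human and self-annotate the rest with the detector's prediction.

def split_for_annotation(frames, budget):
    """frames: list of (frame_id, confidence in [0, 1]).
    Returns (ids_for_human_labeling, ids_to_self_annotate)."""
    # Uncertainty = closeness of confidence to 0.5 (most ambiguous first).
    ranked = sorted(frames, key=lambda fc: abs(fc[1] - 0.5))
    to_human = [fid for fid, _ in ranked[:budget]]
    self_annotated = [fid for fid, _ in ranked[budget:]]
    return to_human, self_annotated
```

Spending the labeling budget on ambiguous frames while auto-labeling confident ones is what lets a system of this kind adapt to a new home environment with little annotation labor, as the abstract claims.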
(Full text not available for public access.)