[1] “World population ageing 2019.” https://www.un.org/en/development/desa/population/publications/pdf/ageing/WorldPopulationAgeing2019-Highlights.pdf. Accessed: 2020. 1
[2] “Taiwan elderly population.” https://pop-proj.ndc.gov.tw/chart.aspx?c=10&uid=66&pid=60. Accessed: 2020. 1
[3] “Taiwan dependency ratio.” https://pop-proj.ndc.gov.tw/chart.aspx?c=11&uid=67&pid=60. Accessed: 2020. 1
[4] “Taiwan elderly living conditions.” https://www.mohw.gov.tw/dl-48636-de32ad67-19c8-46d6-b96c-8826f6039fcb.html. Accessed: 2020. 1
[5] Z. Wang, “A single RGB camera based gait analysis with a mobile tele-robot for healthcare,” 2020. 1, 4
[6] D. Xue, A. Sayana, E. Darke, K. Shen, J.-T. Hsieh, Z. Luo, L.-J. Li, N. Downing, A. Milstein, and L. Fei-Fei, “Vision-based gait analysis for senior care,” 12 2018. 1, 4
[7] G. Kan, Y. Rolland, S. Andrieu, J. Bauer, O. Beauchet, M. Bonnefoy, M. Cesari, L. Donini, S. Guyonnet, M. Inzitari, F. Nourhashemi, G. Onder, P. Ritz, A. Salvà, M. Visser, and B. Vellas, “Gait speed at usual pace as a predictor of adverse outcomes in community-dwelling older people: An International Academy on Nutrition and Aging (IANA) task force,” The Journal of Nutrition, Health and Aging, vol. 13, pp. 881–889, 12 2010. 1, 4
[8] S. Studenski, S. Perera, K. Patel, C. Rosano, K. Faulkner, M. Inzitari, J. Brach, J. Chandler, P. Cawthon, E. B. Connor, M. Nevitt, M. Visser, S. Kritchevsky, S. Badinelli, T. Harris, A. B. Newman, J. Cauley, L. Ferrucci, and J. Guralnik, “Gait Speed and Survival in Older Adults,” JAMA, vol. 305, pp. 50–58, 01 2011. 1, 4, 6
[9] J. Nogas, S. S. Khan, and A. Mihailidis, “DeepFall: Non-invasive fall detection with deep spatio-temporal convolutional autoencoders,” ArXiv, vol. abs/1809.00977, 2018. 1, 4
[10] V. Mehta, A. Dhall, S. Pal, and S. Khan, “Motion and region aware adversarial learning for fall detection with thermal imaging,” 2020. 1, 4
[11] T.-D. H. Nguyen and H.-N. H. Nguyen, “Towards a robust WiFi-based fall detection with adversarial data augmentation,” 2020 54th Annual Conference on Information Sciences and Systems (CISS), Mar 2020. 1, 4
[12] J.-L. Chua, Y. Chang, and W. Lim, “A simple vision-based fall detection technique for indoor video surveillance,” Signal, Image and Video Processing, vol. 9, 03 2013. 1, 4
[13] G. Mastorakis and D. Makris, “Fall detection system using Kinect’s infrared sensor,” Journal of Real-Time Image Processing, vol. 9, 12 2014. 1, 4
[14] C. Rougier, E. Auvinet, J. Rousseau, M. Mignotte, and J. Meunier, “Fall detection from depth map video sequences,” in Toward Useful Services for Elderly and People with Disabilities (B. Abdulrazak, S. Giroux, B. Bouchard, H. Pigot, and M. Mokhtari, eds.), (Berlin, Heidelberg), pp. 121–128, Springer Berlin Heidelberg, 2011. 1, 4
[15] C. Rougier, J. Meunier, A. St-Arnaud, and J. Rousseau, “Robust video surveillance for fall detection based on human shape deformation,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 21, no. 5, pp. 611–622, 2011. 1, 4
[16] J. Jang, D. Kim, C. Park, M. Jang, J. Lee, and J. Kim, “ETRI-Activity3D: A large-scale RGB-D dataset for robots to recognize daily activities of the elderly,” 2020. 1, 2, 4, 5
[17] M. Parajuli, D. Tran, W. Ma, and D. Sharma, “Senior health monitoring using Kinect,” in 2012 Fourth International Conference on Communications and Electronics (ICCE), pp. 309–312, 2012. 1, 2, 4, 5
[18] P. A. Dias, D. Malafronte, H. Medeiros, and F. Odone, “Gaze estimation for assisted living environments,” 2019. 1, 2, 4
[19] C.-Y. Yang, H. Yun, S. Varadaraj, and J. Y.-j. Hsu, “A mobile robot generating video summaries of seniors’ indoor activities,” 2019. 1, 2, 3, 4, 5
[20] Z. Luo, J.-T. Hsieh, N. Balachandar, S. Yeung, G. Pusiol, J. Luxenberg, G. Li, L.-J. Li, N. Downing, A. Milstein, and L. Fei-Fei, “Computer vision-based descriptive analytics of seniors’ daily activities for long-term health monitoring,” 08 2018. 1, 2, 3, 4, 5
[21] Z. Luo, A. Rege, G. Pusiol, A. Milstein, F. Li, and N. L. Downing, “Computer vision-based approach to maintain independent living for seniors,” in AMIA, 2017. 1, 2, 4, 5
[22] R. G. Guendel, “Radar classification of contiguous activities of daily living,” 2019. 1, 2, 4, 5
[23] K. Ohnishi, A. Kanehira, A. Kanezaki, and T. Harada, “Recognizing activities of daily living with a wrist-mounted camera,” 2015. 1, 2, 4, 5
[24] H. Cheng, Z. Liu, Y. Zhao, and G. Ye, “Real world activity summary for senior home monitoring,” in 2011 IEEE International Conference on Multimedia and Expo, pp. 1–4, 2011. 1, 2, 4, 5
[25] M. Wang, J. Tighe, and D. Modolo, “Combining detection and tracking for human pose estimation in videos,” 2020. 5
[26] G. Ning and H. Huang, “LightTrack: A generic framework for online top-down human pose tracking,” 2019. 5
[27] G. Ning, P. Liu, X. Fan, and C. Zhang, “A top-down approach to articulated human pose estimation and tracking,” 2019. 5
[28] B. Xiao, H. Wu, and Y. Wei, “Simple baselines for human pose estimation and tracking,” 2018. 5
[29] G. Papandreou, T. Zhu, N. Kanazawa, A. Toshev, J. Tompson, C. Bregler, and K. Murphy, “Towards accurate multi-person pose estimation in the wild,” 2017. 5
[30] Y. Chen, Z. Wang, Y. Peng, Z. Zhang, G. Yu, and J. Sun, “Cascaded pyramid network for multi-person pose estimation,” 2017. 5
[31] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” 2017. 5
[32] Z. Tian, H. Chen, and C. Shen, “DirectPose: Direct end-to-end multi-person pose estimation,” 2019. 5
[33] M. Li, Z. Zhou, J. Li, and X. Liu, “Bottom-up pose estimation of multiple person with bounding box constraint,” 2018. 5
[34] X. Nie, J. Zhang, S. Yan, and J. Feng, “Single-stage multi-person pose machines,” 2019. 5
[35] Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh, “Realtime multi-person 2D pose estimation using part affinity fields,” 2016. 5
[36] A. Newell, Z. Huang, and J. Deng, “Associative embedding: End-to-end learning for joint detection and grouping,” 2016. 5
[37] G. Papandreou, T. Zhu, L.-C. Chen, S. Gidaris, J. Tompson, and K. Murphy, “PersonLab: Person pose estimation and instance segmentation with a bottom-up, part-based, geometric embedding model,” 2018. 5
[38] E. Chou, M. Tan, C. Zou, M. Guo, A. Haque, A. Milstein, and L. Fei-Fei, “Privacy-preserving action recognition for smart hospitals using low-resolution depth images,” 2018. 5, 6
[39] V. Srivastav, A. Gangi, and N. Padoy, “Human pose estimation on privacy-preserving low-resolution depth images,” in Medical Image Computing and Computer Assisted Intervention – MICCAI 2019 (D. Shen, T. Liu, T. M. Peters, L. H. Staib, C. Essert, S. Zhou, P.-T. Yap, and A. Khan, eds.), (Cham), pp. 583–591, Springer International Publishing, 2019. 6
[40] W. Sirichotedumrong, T. Maekawa, Y. Kinoshita, and H. Kiya, “Privacy-preserving deep neural networks with pixel-based image encryption considering data augmentation in the encrypted domain,” 2019. 6
[41] D. Purwanto, R. Renanda Adhi Pramono, Y. Chen, and W. Fang, “Extreme low resolution action recognition with spatial-temporal multi-head self-attention and knowledge distillation,” in 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp. 961–969, 2019. 6
[42] M. S. Ryoo, B. Rothrock, C. Fleming, and H. J. Yang, “Privacy-preserving human activity recognition from extreme low resolution,” 2016. 6
[43] S. P. Mudunuri and S. Biswas, “Low resolution face recognition across variations in pose and illumination,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 5, pp. 1034–1040, 2016. 6
[44] J. Dai, J. Wu, B. Saghafi, J. Konrad, and P. Ishwar, “Towards privacy-preserving activity recognition using extremely low temporal and spatial resolution cameras,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 68–76, 2015. 6
[45] A. Li, J. Guo, H. Yang, and Y. Chen, “DeepObfuscator: Adversarial training framework for privacy-preserving image classification,” 2019. 6
[46] H. Wang, Z. Wu, Z. Wang, Z. Wang, and H. Jin, “Privacy-preserving deep visual recognition: An adversarial learning framework and a new dataset,” 2019. 6
[47] F. Pittaluga, S. J. Koppal, and A. Chakrabarti, “Learning privacy preserving encodings through adversarial training,” 2018. 6
[48] M. Li, L. Lai, N. Suda, V. Chandra, and D. Z. Pan, “PrivyNet: A flexible framework for privacy-preserving deep neural network training,” 2017. 6
[49] P. Korshunov and T. Ebrahimi, “Using warping for privacy protection in video surveillance,” in 2013 18th International Conference on Digital Signal Processing (DSP), pp. 1–6, 2013. 6
[50] “A privacy-preserving deep learning approach for face recognition with edge computing,” in USENIX Workshop on Hot Topics in Edge Computing (HotEdge 18), (Boston, MA), USENIX Association, July 2018. 6
[51] Z. Ren, Y. J. Lee, and M. S. Ryoo, “Learning to anonymize faces for privacy preserving action detection,” 2018. 6
[52] O. Sarwar, B. Rinner, and A. Cavallaro, “A privacy-preserving filter for oblique face images based on adaptive hopping Gaussian mixtures,” IEEE Access, vol. 7, pp. 142623–142639, 2019. 6
[53] F. Li, Z. Sun, A. Li, B. Niu, H. Li, and G. Cao, “HideMe: Privacy-preserving photo sharing on social networks,” in IEEE INFOCOM 2019 - IEEE Conference on Computer Communications, pp. 154–162, 2019. 6
[54] T. T. Nguyen, C. M. Nguyen, D. T. Nguyen, D. T. Nguyen, and S. Nahavandi, “Deep learning for deepfakes creation and detection,” 2019. 6
[55] V. Mirjalili, S. Raschka, and A. Ross, “PrivacyNet: Semi-adversarial networks for multi-attribute face privacy,” 2020. 6
[56] X. Zhou, D. Wang, and P. Krähenbühl, “Objects as points,” 2019. 12
[57] H. Law and J. Deng, “CornerNet: Detecting objects as paired keypoints,” 2018. 12
[58] J. Wang, K. Sun, T. Cheng, B. Jiang, C. Deng, Y. Zhao, D. Liu, Y. Mu, M. Tan, X. Wang, W. Liu, and B. Xiao, “Deep high-resolution representation learning for visual recognition,” 2019. 14
[59] A. Newell, K. Yang, and J. Deng, “Stacked hourglass networks for human pose estimation,” 2016. 14
[60] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” 2015. 14
[61] P. Chao, C.-Y. Kao, Y.-S. Ruan, C.-H. Huang, and Y.-L. Lin, “HarDNet: A low memory traffic network,” 2019. 14
[62] A. Howard, M. Sandler, G. Chu, L.-C. Chen, B. Chen, M. Tan, W. Wang, Y. Zhu, R. Pang, V. Vasudevan, Q. V. Le, and H. Adam, “Searching for MobileNetV3,” 2019. 14
[63] M. Tan and Q. V. Le, “EfficientNet: Rethinking model scaling for convolutional neural networks,” 2019. 14
[64] T. Chen and C. Guestrin, “XGBoost: A scalable tree boosting system,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, (New York, NY, USA), pp. 785–794, ACM, 2016. 16
[65] T.-Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, C. L. Zitnick, and P. Dollár, “Microsoft COCO: Common objects in context,” 2014. 19
[66] G. Bradski, “The OpenCV Library,” Dr. Dobb’s Journal of Software Tools, 2000. 19
[67] T. Nawaz, A. Berg, J. Ferryman, J. Ahlberg, and M. Felsberg, “Effective evaluation of privacy protection techniques in visible and thermal imagery,” Journal of Electronic Imaging, vol. 26, no. 5, pp. 1–16, 2017. 24
[68] S. Kreiss, L. Bertoni, and A. Alahi, “PifPaf: Composite fields for human pose estimation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019. 26