
Detailed Record

Author (Chinese): 王智寬
Author (English): Wang, Zhi-Kuan
Title (Chinese): 利用遠程光體積變化描記圖之特徵達成基於視覺之心率偵測
Title (English): Exploiting Remote Photoplethysmography Features for Vision-based Heart Rate Estimation
Advisor (Chinese): 許秋婷
Advisor (English): Hsu, Chiou-Ting
Committee Members (Chinese): 邵皓強、陳煥宗
Committee Members (English): Shao, Hao-Chiang; Chen, Hwann-Tzong
Degree: Master's
Institution: National Tsing Hua University
Department: Department of Computer Science
Student ID: 106062601
Publication Year (ROC calendar): 108 (2019)
Academic Year of Graduation: 107
Language: English
Pages: 25
Keywords (Chinese): 遠程心率偵測、低秩、遠程光體積變化描記圖
Keywords (English): Remote heart rate estimation; Low-rank; Remote photoplethysmography
Abstract: Remote photoplethysmography (rPPG) is a non-contact method for heart rate (HR) estimation from facial videos. The peak information and frequency of rPPG signals are crucial to HR prediction. In this thesis, we focus on developing a unified neural network that fully exploits rPPG information for remote HR estimation. We propose a novel rPPG-HR network, which generates robust rPPG information while fully utilizing the rPPG signals to estimate HR. First, we generate spatial-temporal maps that capture the color changes of skin pixels in different facial regions. Next, since different regions of the face should be temporally consistent with the rPPG signal, we encourage the network to learn a robust feature representation by adopting a low-rank constraint. Finally, we extract rPPG signals from the spatial-temporal maps and directly exploit them for HR estimation. Our proposed rPPG-HR network is the first model to simultaneously predict both rPPG and HR. Experimental results on the COHFACE and PURE datasets show that our method achieves state-of-the-art performance for HR estimation.
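The pipeline the abstract describes — averaging skin-pixel colors per facial region to form a spatial-temporal map, then reading HR off the rPPG trace — can be sketched with a classical (non-learned) stand-in. This is an illustrative reconstruction, not the thesis code: the grid size, frequency band, and the `spatial_temporal_map` / `estimate_hr` helpers are assumptions, and the thesis replaces these hand-crafted steps with a trained network under a low-rank constraint.

```python
import numpy as np

def spatial_temporal_map(frames, grid=(5, 5)):
    """Build a spatial-temporal map from a cropped face video.

    frames: array of shape (T, H, W, 3), a pre-cropped facial region.
    Returns shape (rows * cols, T, 3): one mean-color trace per ROI,
    so each row tracks the color change of one facial patch over time.
    """
    T, H, W, _ = frames.shape
    rows, cols = grid
    traces = []
    for r in range(rows):
        for c in range(cols):
            patch = frames[:, r * H // rows:(r + 1) * H // rows,
                           c * W // cols:(c + 1) * W // cols, :]
            traces.append(patch.mean(axis=(1, 2)))  # (T, 3) per-frame mean color
    return np.stack(traces)                         # (rows * cols, T, 3)

def estimate_hr(signal, fps=30.0, lo=0.7, hi=4.0):
    """Estimate HR as the dominant frequency of an rPPG trace.

    Restricts the search to 0.7-4.0 Hz (42-240 bpm), the plausible
    human heart-rate band, and returns beats per minute.
    """
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()                  # remove DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]
```

On a synthetic 1.2 Hz pulse sampled at 30 fps, `estimate_hr` recovers 72 bpm; in practice each ROI trace from the map would be detrended and filtered before this step. The temporal consistency of the map's rows across regions is what the thesis's low-rank constraint encourages.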
Table of Contents
Chinese Abstract
Abstract
1. Introduction
2. Related Work
2.1 Traditional Methods
2.2 Learning-based Methods
3. Proposed Method
3.1 Motivation
3.2 Preprocessing
3.3 Baseline Model
3.4 Feature Extraction under Low-rank Constraint
3.5 rPPG-HR Network
4. Experiments
4.1 Datasets
4.2 Evaluation Criteria
4.3 Implementation Details
4.4 Ablation Study
4.5 Results and Discussion
5. Conclusion
References
