
Detailed Record

Author (Chinese): 高穎
Author (English): Kao, Ying
Title (Chinese): 應用於遠端心率偵測的雙通道時間網路
Title (English): Siamese Temporal Network for Remote Heart Rate Estimation
Advisor (Chinese): 許秋婷
Advisor (English): Hsu, Chiou-Ting
Committee members (Chinese): 陳煥宗、邵皓強
Committee members (English): Chen, Hwann-Tzong; Shao, Hao-Chiang
Degree: Master
University: National Tsing Hua University
Department: Department of Computer Science
Student ID: 106062525
Year of publication (ROC): 108 (2019)
Graduation academic year: 107
Language: English
Number of pages: 23
Keywords (Chinese): 遠端心率偵測、孿生網絡、卷積類神經網路
Keywords (English): Remote heart rate estimation; Siamese network; Convolutional neural network
In recent years, many researchers have focused on remote heart rate estimation, using remote photoplethysmography (rPPG) to detect heart rate from facial videos. Existing methods ignore the stability of the heartbeat, so the predicted heart rate changes sharply within a short time, which contradicts real human physiology. In this thesis, we propose a new Siamese neural network for remote heart rate estimation. To stabilise the training process and obtain a more stable heart rate, we learn the model simultaneously from two temporally shifted features. We also propose an rPPG-similarity stream, which uses a circular cross-correlation layer to extract reliable rPPG signals with consistent periodicity from facial regions. Experimental results on the COHFACE and PURE datasets show that the proposed method achieves the best heart rate estimation performance compared with existing methods.
In recent years, many researchers have paid attention to remote heart rate (HR) estimation from facial videos using remote photoplethysmography (rPPG). Existing methods ignore the stability of HR, causing the predicted HR value to change dramatically within a short time; this behaviour is inconsistent with human physiology. To tackle this problem, we propose to simultaneously learn the model from two temporally shifted spatial-temporal maps under a Siamese network. We also develop an rPPG-similarity stream that uses a circular cross-correlation layer to extract reliable and periodic rPPG features from facial regions. Experimental results on the COHFACE and PURE datasets demonstrate that our proposed method achieves state-of-the-art performance for HR estimation.
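As a rough illustration of the Siamese idea described above, one shared-weight predictor can be applied to two temporally shifted windows of the same video and penalised when its two outputs disagree. The thesis does not give its loss here, so `predict_hr`, `alpha`, and the loss form below are hypothetical; this is only a minimal sketch of the shared-branch consistency principle:

```python
def siamese_consistency_loss(predict_hr, window_a, window_b, hr_gt, alpha=0.5):
    """Apply one shared predictor to two temporally shifted windows and
    combine the prediction error with a consistency term that penalises
    the two branches for disagreeing (hypothetical sketch)."""
    hr_a = predict_hr(window_a)      # branch 1: original window
    hr_b = predict_hr(window_b)      # branch 2: temporally shifted window
    pred_loss = abs(hr_a - hr_gt) + abs(hr_b - hr_gt)
    consistency = abs(hr_a - hr_b)   # stability: nearby windows share one HR
    return pred_loss + alpha * consistency
```

The consistency term encodes the physiological prior from the abstract: heart rate should not change dramatically between two windows only a short shift apart.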
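The circular cross-correlation at the heart of the rPPG-similarity stream can be sketched directly from its definition (a minimal O(N²) version; practical implementations typically use the FFT instead). When two traces share the same periodicity, the correlation shows a clear peak at their relative lag:

```python
import math

def circular_cross_correlation(a, b):
    """Direct definition: c[k] = sum_i a[i] * b[(i + k) % N]."""
    n = len(a)
    return [sum(a[i] * b[(i + k) % n] for i in range(n)) for k in range(n)]

# Two periodic traces that differ only by a 10-sample circular shift:
n = 64
a = [math.sin(2 * math.pi * 5 * i / n) for i in range(n)]
b = a[-10:] + a[:-10]                 # b[i] = a[(i - 10) % n]
c = circular_cross_correlation(a, b)
lag = c.index(max(c))                 # peak lag recovers the shift: 10
```

A strong, well-localised peak indicates that the two signals are periodic with the same period, which is the property the similarity stream exploits to keep only reliable rPPG features.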
Abstract (Chinese).....................I
Abstract...............................II
1.Introduction..........................1
2.Related Work..........................3
2.1 Remote Heart Rate Estimation........3
2.2 Siamese Network.....................4
3.Proposed Method.......................6
3.1 Motivation..........................6
3.2 Pre-Processing......................6
3.3 The rPPG-HR Network.................7
3.4 Baseline Siamese Network............8
3.5 Siamese HR Estimation Network......10
4.Experiments..........................13
4.1 Datasets...........................13
4.2 Evaluation Criteria................14
4.3 Implementation Details.............15
4.4 Results............................15
5.Conclusion...........................20
References.............................21


 
 
 
 