[1] G. de Haan and V. Jeanne, “Robust pulse rate from chrominance-based rPPG,” IEEE Trans. Biomed. Eng., vol. 60, no. 10, pp. 2878–2886, 2013.
[2] Y. Qiu, Y. Liu, J. Arteaga-Falconi, H. Dong, and A. El Saddik, “EVM-CNN: Real-time contactless heart rate estimation from facial video,” IEEE Trans. Multimedia, vol. 21, no. 7, pp. 1778–1787, 2018.
[3] W. Chen and D. McDuff, “DeepPhys: Video-based physiological measurement using convolutional attention networks,” in Proc. ECCV, 2018.
[4] R. Spetlik, V. Franc, J. Cech, and J. Matas, “Visual heart rate estimation with convolutional neural network,” in Proc. BMVC, 2018.
[5] X. Niu, X. Zhao, H. Han, A. Das, A. Dantcheva, S. Shan, and X. Chen, “Robust remote heart rate estimation from face utilizing spatial-temporal attention,” in Proc. AFGR, 2019.
[6] X. Li, J. Chen, G. Zhao, and M. Pietikainen, “Remote heart rate measurement from face videos under realistic situations,” in Proc. CVPR, 2014.
[7] S. Tulyakov, X. Alameda-Pineda, E. Ricci, L. Yin, J. F. Cohn, and N. Sebe, “Self-adaptive matrix completion for heart rate estimation from face videos under realistic conditions,” in Proc. CVPR, 2016.
[8] P. Li, Y. Benezeth, K. Nakamura, R. Gomez, C. Li, and F. Yang, “Comparison of region of interest segmentation methods for video-based heart rate measurements,” in Proc. BIBE, 2018.
[9] J. Bromley, J. W. Bentz, L. Bottou, I. Guyon, Y. LeCun, C. Moore, E. Sackinger, and R. Shah, “Signature verification using a Siamese time delay neural network,” Int. J. Pattern Recognit. Artif. Intell., 1993.
[10] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, “DeepFace: Closing the gap to human-level performance in face verification,” in Proc. CVPR, pp. 1701–1708, 2014.
[11] E. Ahmed, M. Jones, and T. K. Marks, “An improved deep learning architecture for person re-identification,” in Proc. CVPR, pp. 3908–3916, 2016.
[12] L. Bertinetto, J. Valmadre, J. F. Henriques, A. Vedaldi, and P. H. Torr, “Fully-convolutional Siamese networks for object tracking,” in Proc. ECCV, pp. 850–865, 2016.
[13] Y. Zhang, M. Yu, N. Li, C. Yu, J. Cui, and D. Yu, “Seq2seq attentional Siamese neural networks for text-dependent speaker verification,” in Proc. ICASSP, 2019.
[14] S. H. Mohammadi and A. Kain, “Siamese autoencoders for speech style extraction and switching applied to voice identification and conversion,” in Proc. Interspeech, pp. 1293–1297, 2019.
[15] S. Kwon, J. Kim, D. Lee, and K. Park, “ROI analysis for remote photoplethysmography on facial video,” in Proc. EMBS, pp. 851–862, 2015.
[16] A. Bulat and G. Tzimiropoulos, “How far are we from solving the 2D & 3D face alignment problem? (and a dataset of 230,000 3D facial landmarks),” in Proc. ICCV, 2017.
[17] Z. Wang, “Exploiting remote photoplethysmography features for vision-based heart rate estimation,” master’s thesis, National Tsing Hua University, 2019.
[18] G. Heusch, A. Anjos, and S. Marcel, “A reproducible study on remote heart rate measurement,” arXiv:1709.00962 [cs], 2017.
[19] R. Stricker, S. Müller, and H.-M. Gross, “Non-contact video-based pulse rate measurement on a mobile service robot,” in Proc. 23rd IEEE Int. Symposium on Robot and Human Interactive Communication, Edinburgh, Scotland, UK, pp. 1056–1062, 2014.
[20] W. Wang, S. Stuijk, and G. de Haan, “A novel algorithm for remote photoplethysmography: Spatial subspace rotation,” IEEE Trans. Biomed. Eng., vol. 63, pp. 1974–1984, 2016.