|
[1] G. De Haan and V. Jeanne, “Robust pulse rate from chrominancebased rppg,” IEEE Transactions on Biomedical Engineering, vol. 60, no. 10, pp. 2878–2886, 2013. [2] W. Verkruysse, L. O. Svaasand, and J. S. Nelson, “Remote plethysmographic imaging using ambient light.,” Optics express, vol. 16, no. 26, pp. 21434–21445, 2008. [3] M.Z. Poh, D. J. McDuff, and R. W. Picard, “Noncontact, automated cardiac pulse measurements using video imaging and blind source separation.,” Optics express, vol. 18, no. 10, pp. 10762–10774, 2010. [4] W. Wang, A. C. Den Brinker, S. Stuijk, and G. De Haan, Algorithmic principles of remote ppg,” IEEE Transactions on Biomedical Engineering, vol. 64, no. 7, pp. 1479–1491, 2016. [5] X. Li, J. Chen, G. Zhao, and M. Pietikainen, “Remote heart rate measurement from face videos under realistic situations,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4264–4271, 2014. [6] W. Wang, S. Stuijk, and G. De Haan, “A novel algorithm for remote photoplethysmography: Spatial subspace rotation,” IEEE transactions on biomedical engineering, vol. 63, no. 9, pp. 1974–1984, 2015. [7] G. De Haan and V. Jeanne, “Robust pulse rate from chrominancebased rppg,” IEEE Transactions on Biomedical Engineering, vol. 60, no. 10, pp. 2878–2886, 2013. [8] M.Z. Poh, D. J. McDuff, and R. W. Picard, “Advancements in noncontact, multiparameter physiological measurements using a webcam,” IEEE transactions on biomedical engineering, vol. 58, no. 1, pp. 7–11, 2010. [9] R. Song, H. Chen, J. Cheng, C. Li, Y. Liu, and X. Chen, “Pulsegan: Learning to generate realistic pulse waveforms in remote photoplethysmography,” IEEE Journal of Biomedical and Health Informatics, vol. 25, no. 5, pp. 1373–1384, 2021. [10] Y.Y. Tsou, Y.A. Lee, and C.T. Hsu, “Multitask learning for simultaneous video generation and remote photoplethysmography estimation,” in Proceedings of the Asian Conference on Computer Vision, 2020. [11] F. Bousefsaf, A. Pruski, and C. Maaoui, “3d convolutional neural networks for remote pulse rate measurement and mapping from facial video,” Applied Sciences, vol. 9, no. 20, p. 4364, 2019. [12] W. Chen and D. McDuff, “Deepphys: Videobased physiological measurement using convolutional attention networks,” in Proceedings of the European Conference on Computer Vision (ECCV), pp. 349–365, 2018. [13] E. Lee, E. Chen, and C.Y. Lee, “Metarppg: Remote heart rate estimation using a transductive metalearner,” in European Conference on Computer Vision, pp. 392–409, Springer, 2020. [14] H. Lu, H. Han, and S. K. Zhou, “Dualgan: Joint bvp and noise modeling for remote physiological measurement,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12404–12413, 2021. [15] X. Niu, H. Han, S. Shan, and X. Chen, “Synrhythm: Learning a deep heart rate estimator from general to specific,” in 2018 24th International Conference on Pattern Recognition (ICPR), pp. 3580–3585, IEEE, 2018. [16] R. Špetlík, V. Franc, and J. Matas, “Visual heart rate estimation with convolutional neural network,” in Proceedings of the british machine vision conference, Newcastle, UK, pp. 3–6, 2018. [17] Y.Y. Tsou, Y.A. Lee, C.T. Hsu, and S.H. Chang, “Siameserppg network: Remote photoplethysmography signal estimation from face videos,” in Proceedings of the 35th annual ACM symposium on applied computing, pp. 2066–2073, 2020. [18] Z. Yu, W. Peng, X. Li, X. Hong, and G. Zhao, “Remote heart rate measurement from highly compressed facial videos: an endtoend deep learning solution with video enhancement,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 151–160, 2019. [19] Z. Yu, Y. Shen, J. Shi, H. Zhao, P. H. Torr, and G. Zhao, “Physformer: facial videobased physiological measurement with temporal difference transformer,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4186–4196, 2022. [20] S. Bobbia, R. Macwan, Y. Benezeth, A. Mansouri, and J. Dubois, “Unsupervised skin tissue segmentation for remote photoplethysmography,” Pattern Recognition Letters, vol. 124, pp. 82–90, 2019. [21] G. Heusch, A. Anjos, and S. Marcel, “A reproducible study on remote heart rate measurement,” arXiv preprint arXiv:1709.00962, 2017. [22] R. Stricker, S. Müller, and H.M. Gross, “Noncontact videobased pulse rate measurement on a mobile service robot,” in The 23rd IEEE International Symposium on Robot and Human Interactive Communication, pp. 1056–1062, IEEE, 2014. [23] Y. Balaji, S. Sankaranarayanan, and R. Chellappa, “Metareg: Towards domain generalization using metaregularization,” Advances in neural information processing systems, vol. 31, 2018. [24] Y. Li, Y. Yang, W. Zhou, and T. Hospedales, “Featurecritic networks for heterogeneous domain generalization,” in International Conference on Machine Learning, pp. 3915–3924, PMLR, 2019. [25] Z. Wang, Y. Luo, R. Qiu, Z. Huang, and M. Baktashmotlagh, “Learning to diversify for single domain generalization,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 834–843, 2021. [26] D. Li, Y. Yang, Y.Z. Song, and T. M. Hospedales, “Deeper, broader and artier domain generalization,” in Proceedings of the IEEE international conference on computer vision, pp. 5542–5550, 2017. [27] C. Lin, Z. Yuan, S. Zhao, P. Sun, C. Wang, and J. Cai, “Domaininvariant disentangled network for generalizable object detection,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 8771–8780, 2021. [28] X. Yue, Y. Zhang, S. Zhao, A. SangiovanniVincentelli, K. Keutzer, and B. Gong, “Domain randomization and pyramid consistency: Simulationtoreal generalization without accessing target domain data,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 2100–2110, 2019. [29] K. Zhou, Y. Yang, T. Hospedales, and T. Xiang, “Deep domainadversarial image generation for domain generalisation,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 13025–13032, 2020. [30] L. Li, K. Gao, J. Cao, Z. Huang, Y. Weng, X. Mi, Z. Yu, X. Li, and B. Xia, “Progressive domain expansion network for single domain generalization,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 224–233, 2021. [31] S. Shankar, V. Piratla, S. Chakrabarti, S. Chaudhuri, P. Jyothi, and S. Sarawagi, “Generalizing across domains via crossgradient training,” arXiv preprint arXiv:1804.10745, 2018. [32] D. Kim, Y. Yoo, S. Park, J. Kim, and J. Lee, “Selfreg: Selfsupervised contrastive regularization for domain generalization,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9619–9628, 2021. [33] G. Wang, H. Han, S. Shan, and X. Chen, “Crossdomain face presentation attack detection via multidomain disentangled representation learning,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6678–6687, 2020. [34] S. Lee, S. Cho, and S. Im, “Dranet: Disentangling representation and adaptation networks for unsupervised crossdomain adaptation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15252–15261, 2021. [35] X. Niu, Z. Yu, H. Han, X. Li, S. Shan, and G. Zhao, “Videobased remote physiological measurement via crossverified feature disentangling,” in European Conference on Computer Vision, pp. 295–310, Springer, 2020. [36] T. Karras, S. Laine, and T. Aila, “A stylebased generator architecture for generative adversarial networks,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4401–4410, 2019. [37] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” arXiv preprint arXiv:1312.6199, 2013. [38] Y. Ganin and V. Lempitsky, “Unsupervised domain adaptation by backpropagation,” in International conference on machine learning, pp. 1180–1189, PMLR, 2015. [39] X. Niu, H. Han, S. Shan, and X. Chen, “Viplhr: A multimodal database for pulse estimation from lessconstrained face video,” in Asian Conference on Computer Vision, pp. 562–576, Springer, 2018. [40] A. Bulat and G. Tzimiropoulos, “How far are we from solving the 2d & 3d face alignment problem? (and a dataset of 230,000 3d facial landmarks),” in International Conference on Computer Vision, 2017. [41] S. Woo, J. Park, J.Y. Lee, and I. S. Kweon, “Cbam: Convolutional block attention module,” in Proceedings of the European conference on computer vision (ECCV), pp. 3–19, 2018. [42] D. McDuff and E. Blackford, “iphys: An open noncontact imagingbased physiological measurement toolbox,” in 2019 41st annual international conference of the IEEE engineering in medicine and biology society (EMBC), pp. 6521–6524, IEEE, 2019. |