[1] A. Hyvärinen, Estimation of non-normalized statistical models by score matching, Journal of Machine Learning Research (JMLR), vol. 6, no. 24, pp. 695-709, 2005.
[2] P. Vincent, A connection between score matching and denoising autoencoders, Neural Computation, vol. 23, no. 7, pp. 1661-1674, 2011.
[3] Y. Song, S. Garg, J. Shi, and S. Ermon, Sliced score matching: A scalable approach to density and score estimation, in Proc. of the Conf. on Uncertainty in Artificial Intelligence (UAI), p. 204, 2019.
[4] Y. Song and S. Ermon, Generative modeling by estimating gradients of the data distribution, in Proc. of Conf. on Neural Information Processing Systems (NeurIPS), 2019.
[5] Y. Song and S. Ermon, Improved techniques for training score-based generative models, in Proc. of Conf. on Neural Information Processing Systems (NeurIPS), 2020.
[6] J. Ho, A. Jain, and P. Abbeel, Denoising diffusion probabilistic models, in Proc. of Conf. on Neural Information Processing Systems (NeurIPS), 2020.
[7] J. Song, C. Meng, and S. Ermon, Denoising diffusion implicit models, in Int. Conf. on Learning Representations (ICLR), 2021.
[8] Y. Song, J. Sohl-Dickstein, D. P. Kingma, A. Kumar, S. Ermon, and B. Poole, Score-based generative modeling through stochastic differential equations, in Int. Conf. on Learning Representations (ICLR), 2021.
[9] P. Dhariwal and A. Nichol, Diffusion models beat GANs on image synthesis, arXiv preprint arXiv:2105.05233, 2021.
[10] A. Nguyen, J. Clune, Y. Bengio, A. Dosovitskiy, and J. Yosinski, Plug & play generative networks: Conditional iterative generation of images in latent space, in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 4467-4477, 2017.
[11] V. Jayaram and J. Thickstun, Source separation with deep generative priors, in Proc. of the Int. Conf. on Machine Learning (ICML), 2020.
[12] J. Ho, C. Saharia, W. Chan, D. Fleet, M. Norouzi, and T. Salimans, Cascaded diffusion models for high fidelity image generation, arXiv preprint arXiv:2106.15282, 2021.
[13] G. O. Roberts and R. L. Tweedie, Exponential convergence of Langevin distributions and their discrete approximations, Bernoulli, vol. 2, no. 4, pp. 341-363, 1996.
[14] G. O. Roberts and J. S. Rosenthal, Optimal scaling of discrete approximations to Langevin diffusions, Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 60, no. 1, pp. 255-268, 1998.
[15] E. Parzen, On estimation of a probability density function and mode, The Annals of Mathematical Statistics, vol. 33, pp. 1065-1076, 1962.
[16] M. Welling and Y. W. Teh, Bayesian learning via stochastic gradient Langevin dynamics, in Proc. of the Int. Conf. on Machine Learning (ICML), pp. 681-688, 2011.
[17] T. Kynkäänniemi, T. Karras, S. Laine, J. Lehtinen, and T. Aila, Improved precision and recall metric for assessing generative models, in Proc. of Conf. on Neural Information Processing Systems (NeurIPS), 2019.
[18] K. He, X. Zhang, S. Ren, and J. Sun, Deep residual learning for image recognition, in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016.
[19] S. Barratt and R. Sharma, A note on the inception score, arXiv preprint arXiv:1801.01973, 2018.
[20] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter, GANs trained by a two time-scale update rule converge to a local Nash equilibrium, in Proc. of Conf. on Neural Information Processing Systems (NeurIPS), vol. 30, 2017.
[21] M. F. Naeem, S. J. Oh, Y. Uh, Y. Choi, and J. Yoo, Reliable fidelity and diversity metrics for generative models, in Proc. of the Int. Conf. on Machine Learning (ICML), 2020.
[22] S. Ravuri and O. Vinyals, Classification accuracy score for conditional generative models, in Proc. of Conf. on Neural Information Processing Systems (NeurIPS), 2019.