[1] J. von Kries, "Chromatic adaptation," Festschrift der Albrecht-Ludwigs-Universität, 1902.
[2] Y. Qian, K. Chen, J. Nikkanen, J.-K. Kämäräinen, and J. Matas, "Recurrent color constancy," ICCV, 2017.
[3] O. Sidorov, "Conditional GANs for multi-illuminant color constancy: Revolution or yet another approach?" CVPR Workshops, 2019.
[4] S. Bianco and C. Cusano, "Quasi-unsupervised color constancy," CVPR, 2019.
[5] J. Qiu, H. Xu, and Z. Ye, "Color constancy by reweighting image feature maps," IEEE Transactions on Image Processing, 2020.
[6] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," NIPS, 2014.
[7] M. Arjovsky, S. Chintala, and L. Bottou, "Wasserstein GAN," arXiv:1701.07875, 2017.
[8] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville, "Improved training of Wasserstein GANs," NIPS, 2017.
[9] J. Zhao, M. Mathieu, and Y. LeCun, "Energy-based generative adversarial network," ICLR, 2017.
[10] D. Berthelot, T. Schumm, and L. Metz, "BEGAN: Boundary equilibrium generative adversarial networks," arXiv:1703.10717, 2017.
[11] C.-C. Chang, C. H. Lin, C.-R. Lee, D.-C. Juan, W. Wei, and H.-T. Chen, "Escaping from collapsing modes in a constrained space," ECCV, 2018.
[12] P. Das, A. S. Baslamisli, Y. Liu, S. Karaoglu, and T. Gevers, "Color constancy by GANs: An experimental survey," arXiv:1812.03085, 2018.
[13] Y. Hu, B. Wang, and S. Lin, "FC4: Fully convolutional color constancy with confidence-weighted pooling," CVPR, 2017, pp. 4085–4094.
[14] I. Tolstikhin, O. Bousquet, S. Gelly, and B. Schölkopf, "Wasserstein auto-encoders," arXiv:1711.01558, 2017.
[15] Z. Zhang, R. Zhang, Z. Li, Y. Bengio, and L. Paull, "Perceptual generative autoencoders," ICML, 2020.
[16] A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey, "Adversarial autoencoders," ICLR, 2016.
[17] Y. Pu, Z. Gan, R. Henao, X. Yuan, A. Stevens, and L. Carin, "Variational autoencoder for deep learning of images, labels and captions," NIPS, 2016.
[18] Y. Bengio, L. Yao, G. Alain, and P. Vincent, "Generalized denoising auto-encoders as generative models," NIPS, 2013.
[19] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol, "Extracting and composing robust features with denoising autoencoders," ICML, 2008.
[20] D. Ji, J. Kwon, M. McFarland, and S. Savarese, "Deep view morphing," CVPR, 2017.
[21] H. Zhang, I. Goodfellow, D. Metaxas, and A. Odena, "Self-attention generative adversarial networks," arXiv:1805.08318v2, 2019.
[22] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida, "Spectral normalization for generative adversarial networks," ICLR, 2018.
[23] P. V. Gehler, C. Rother, A. Blake, T. Minka, and T. Sharp, "Bayesian color constancy revisited," CVPR, 2008.
[24] L. Shi and B. Funt, "Re-processed version of the Gehler color constancy dataset of 568 images." [Online]. Available: http://www.cs.sfu.ca/~colour/data/
[25] G. Hemrit, G. D. Finlayson, A. Gijsenij, P. Gehler, S. Bianco, B. Funt, M. Drew, and L. Shi, "Rehabilitating the ColorChecker dataset for illuminant estimation," arXiv:1805.12262, 2018.
[26] N. Banić, K. Koščević, and S. Lončarić, "Unsupervised learning for color constancy," arXiv:1712.00436, 2017.
[27] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter, "GANs trained by a two time-scale update rule converge to a local Nash equilibrium," NIPS, 2017.
[28] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv:1412.6980, 2014.
[29] G. Buchsbaum, "A spatial processor model for object colour perception," Journal of the Franklin Institute, 1980.
[30] D. H. Brainard and B. A. Wandell, "Analysis of the retinex theory of color vision," JOSA A, 1986.
[31] G. D. Finlayson and E. Trezzi, "Shades of gray and colour constancy," Color Imaging Conference, 2004.
[32] J. van de Weijer, T. Gevers, and A. Gijsenij, "Edge-based color constancy," IEEE Transactions on Image Processing, 2007.