|
[1] C. O. Ancuti, C. Ancuti, and R. Timofte, “NH-HAZE: an image dehazing benchmark with non-homogeneous hazy and haze-free images,” in CVPRW, 2020. [2] C. O. Ancuti, C. Ancuti, F.-A. Vasluianu, and R. Timofte, “Ntire 2021 nonhomogeneous dehazing challenge report,” in CVPRW, 2021. [3] R. Wu, Z. Duan, C. Guo, Z. Chai, and C. Li, “Ridcp: Revitalizing real image dehazing via high-quality codebook priors,” in CVPR, 2023. [4] Y. Yang, C. Wang, R. Liu, L. Zhang, X. Guo, and D. Tao, “Self-augmented unpaired image dehazing via density and depth decomposition,” in CVPR, 2022. [5] C. O. Ancuti, C. Ancuti, R. Timofte, and C. D. Vleeschouwer, “O-haze: a dehazing benchmark with real hazy and haze-free outdoor images,” in CVPRW, 2018. [6] C. O. Ancuti, C. Ancuti, R. Timofte, and C. D. Vleeschouwer, “I-haze: a dehazing benchmark with real hazy and haze-free indoor images,” in ACIVS, 2018. [7] Y. Cui, W. Ren, X. Cao, and A. Knoll, “Focal network for image restoration,” in ICCV, 2023. [8] B. Li, W. Ren, D. Fu, D. Tao, D. Feng, W. Zeng, and Z. Wang, “Benchmarking singleimage dehazing and beyond,” IEEE TIP, 2019. [9] C.-L. Guo, Q. Yan, S. Anwar, R. Cong, W. Ren, and C. Li, “Image dehazing transformer with transmission-aware 3d position embedding,” in CVPR, 2022. [10] M. Fu, H. Liu, Y. Yu, J. Chen, and K. Wang, “Dw-gan: A discrete wavelet transform gan for nonhomogeneous dehazing,” in CVPRW, 2021. [11] H. Wu, Y. Qu, S. Lin, J. Zhou, R. Qiao, Z. Zhang, Y. Xie, and L. Ma, “Contrastive learning for compact single image dehazing,” in CVPR, 2021. [12] Y. Song, Z. He, H. Qian, and X. Du, “Vision transformers for single image dehazing,” IEEE TIP, 2023. [13] X. Liu, Y. Ma, Z. Shi, and J. Chen, “Griddehazenet: Attention-based multi-scale network for image dehazing,” in ICCV, 2019. [14] Q. Deng, Z. Huang, C.-C. Tsai, and C.-W. Lin, “Hardgan: A haze-aware representation distillation gan for single image dehazing,” in ECCV, 2020. [15] H. Yu, N. Zheng, M. Zhou, J. Huang, Z. Xiao, and F. Zhao, “Frequency and spatial dual guidance for image dehazing,” in ECCV, 2022. [16] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in NeurIPS, 2017. [17] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby, “An image is worth 16x16 words: Transformers for image recognition at scale,” in ICLR, 2021. [18] H. Chen, Y. Wang, T. Guo, C. Xu, Y. Deng, Z. Liu, S. Ma, C. Xu, C. Xu, and W. Gao, “Pre-trained image processing transformer,” in CVPR, 2021. [19] R. Ranftl, A. Bochkovskiy, and V. Koltun, “Vision transformers for dense prediction,” in ICCV, 2021. [20] X. Qin, Z. Wang, Y. Bai, X. Xie, and H. Jia, “Ffa-net: Feature fusion attention network for single image dehazing,” in AAAI, 2020. [21] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, “Swin transformer: Hierarchical vision transformer using shifted windows,” in ICCV, 2021. [22] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in ICCV, 2017. [23] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville, “Improved training of wasserstein gans,” in NeurIPS (I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, eds.), 2017. [24] X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. P. Smolley, “Least squares generative adversarial networks,” in ICCV, 2017. [25] A. Shoshan, N. Bhonker, I. Kviatkovsky, and G. Medioni, “Gan-control: Explicitly controllable gans,” in ICCV, 2021. [26] M. Kowalski, S. J. Garbin, V. Estellers, T. Baltrušaitis, M. Johnson, and J. Shotton, “Config: Controllable neural face image generation,” in ECCV, 2020. [27] S. Akash, V. Lazar, R. Chris, G. M. U., and S. Charles, “Veegan: Reducing mode collapse in gans using implicit variational learning,” in NeurIPS, 2017. [28] Q. Mao, H.-Y. Lee, H.-Y. Tseng, S. Ma, and M.-H. Yang, “Mode seeking generative adversarial networks for diverse image synthesis,” in CVPR, 2019. [29] H. Mu, H. Le, B. Yikai, R. Jian, X. Jin, and Y. Jian, “Ra-depth: Resolution adaptive selfsupervised monocular depth estimation,” in ECCV, 2022. [30] W.-S. Lai, J.-B. Huang, N. Ahuja, and M.-H. Yang, “Deep laplacian pyramid networks for fast and accurate super-resolution,” in CVPR, 2017. [31] J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in ECCV, 2016. [32] Z. Lou, H. Xu, F. Mu, Y. Liu, X. Zhang, L. Shang, J. Li, B. Guan, Y. Li, and Y. H. Hu, “Simhaze: game engine simulated data for real-world dehazing,” arXiv preprint arXiv:2305.16481, 2023. [33] J. Gui, X. Cong, Y. Cao, W. Ren, J. Zhang, J. Zhang, J. Cao, and D. Tao, “A comprehensive survey and taxonomy on single image dehazing based on deep learning,” ACM Computing Surveys, 2023. [34] X. Zhang, H. Dong, J. Pan, C. Zhu, Y. Tai, C. Wang, J. Li, F. Huang, and F. Wang, “Learning to restore hazy video: A new real-world dataset and a new method,” in CVPR, 2021. [35] Z. Chen, Y. Wang, Y. Yang, and D. Liu, “Psd: Principled synthetic-to-real dehazing guided by physical priors,” in CVPR, 2021. [36] M. K. Othman and A. A. Abdulla, “Enhanced single image dehazing technique based on hsv color space,” UHDJST, 2022. [37] E. J. McCartney, “Optics of the atmosphere: scattering by molecules and particles,” New York, 1976. [38] S. G. Narasimhan and S. K. Nayar, “Contrast restoration of weather degraded images,” IEEE TPAMI, 2003. [39] S. K. Nayar and S. G. Narasimhan, “Vision in bad weather,” in ICCV, 1999. [40] C. O. Ancuti, C. Ancuti, M. Sbert, and R. Timofte, “Dense haze: A benchmark for image dehazing with dense-haze and haze-free images,” in ICIP, 2019. [41] C. Li, C. Guo, J. Guo, P. Han, H. Fu, and R. Cong, “Pdr-net: Perception-inspired single image dehazing network with refinement,” IEEE TMM, 2019. [42] Y. Qu, Y. Chen, J. Huang, and Y. Xie, “Enhanced pix2pix dehazing network,” in CVPR, 2019. |