
Detailed Record

Author (Chinese): 張芷菱
Author (English): Chang, Chih-Ling
Title (Chinese): 用於影像去霧的物理引導霧增強網路
Title (English): PANet: A Physics-guided Parametric Augmentation Net for Image Dehazing by Hazing
Advisor (Chinese): 林嘉文
Advisor (English): Lin, Chia-Wen
Committee Members (Chinese): 林彥宇, 胡敏君, 劉育綸
Committee Members (English): Lin, Yen-Yu; Hu, Min-Chun; Liu, Yu-Lun
Degree: Master
Institution: National Tsing Hua University (國立清華大學)
Department: Institute of Communications Engineering (通訊工程研究所)
Student ID: 110064556
Publication Year: 2024 (ROC year 113)
Graduation Academic Year: 112 (2023–2024)
Language: English
Pages: 40
Keywords (Chinese): 影像去霧、有霧影像增強
Keywords (English): Image Dehazing; Haze Augmentation
Abstract:
Image dehazing faces challenges when dealing with hazy images in real-world scenarios. A huge domain gap between synthetic and real-world hazy images degrades dehazing performance in practical settings. Moreover, collecting real-world image datasets for training dehazing models is challenging, since hazy and clean image pairs must be captured under identical conditions. In this paper, we propose a Physics-guided Parametric Augmentation Network (PANet) that generates photo-realistic hazy and clean training pairs to effectively enhance real-world dehazing performance. PANet comprises a Haze-to-Parameter Mapper (HPM), which projects hazy images into a parameter space, and a Parameter-to-Haze Mapper (PHM), which maps resampled haze parameters back to hazy images. In the parameter space, we can resample individual haze parameter maps pixel-wise to generate diverse hazy images with physically explainable haze conditions unseen in the training set. Our experimental results demonstrate that PANet can generate diverse, physically realistic hazy images that enrich existing hazy image benchmarks, effectively boosting the performance of state-of-the-art image dehazing models.
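
Dehazing work of this kind is typically guided by the atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)) with transmission t(x) = exp(−β·d(x)), under which a hazy image is fully described by physical parameter maps such as transmission and atmospheric light. The sketch below illustrates, in plain NumPy, how resampling such parameters can synthesize new, physically plausible hazy training images. The function names, the β range, and the global-β resampling scheme are illustrative assumptions for exposition, not the thesis's actual HPM/PHM implementation, which predicts parameter maps from real hazy images and resamples them pixel-wise.

# Minimal sketch of haze synthesis under the atmospheric scattering model:
#   I(x) = J(x) * t(x) + A * (1 - t(x)),  t(x) = exp(-beta * d(x)).
# All names and parameter ranges here are illustrative assumptions,
# not PANet's actual HPM/PHM implementation.
import numpy as np

def synthesize_haze(clean, transmission, airlight):
    """Render a hazy image I = J * t + A * (1 - t), pixel-wise.

    clean:        (H, W, 3) clean image J, values in [0, 1]
    transmission: (H, W)    transmission map t(x)
    airlight:     scalar or (3,) atmospheric light A, in [0, 1]
    """
    t = transmission[..., None]              # broadcast t over RGB channels
    return clean * t + airlight * (1.0 - t)

def resample_transmission(depth, beta_range=(0.5, 2.5), rng=None):
    """Draw a new scattering coefficient beta and rebuild t(x) = exp(-beta * d(x)).

    Varying beta (or, more generally, resampling per-pixel parameter maps)
    yields haze densities absent from the original training set while
    remaining physically plausible.
    """
    rng = np.random.default_rng() if rng is None else rng
    beta = rng.uniform(*beta_range)          # one global density; PANet resamples per pixel
    return np.exp(-beta * depth)

# Usage: given a clean image and a normalized depth map,
# render an augmented hazy counterpart at a new haze level.
# clean = ...  # (H, W, 3) array in [0, 1]
# depth = ...  # (H, W) scene depth, normalized
# hazy = synthesize_haze(clean, resample_transmission(depth), airlight=0.9)

Unlike this toy example, which derives transmission from a known depth map, PANet's HPM estimates the parameter maps directly from real hazy images, so the resampled pairs inherit real-world haze statistics rather than synthetic depth priors.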
Table of Contents:
摘要 (Chinese Abstract) i
Abstract ii
1 Introduction 1
2 Related Work 7
2.1 Image Dehazing 7
2.2 Hazy Image Augmentation 8
3 Proposed Method 10
3.1 Overview 10
3.2 Haze-to-Parameter Mapper (HPM) 11
3.3 Parameter-to-Haze Mapper (PHM) 13
3.4 Loss Function 15
3.5 Haze Augmentation Process 15
4 Experiments 17
4.1 Implementation Details 17
4.2 Performance Evaluations 18
4.3 Ablation Studies 20
5 Conclusion 36
References 38