Detailed Record

Author (Chinese): 張克齊
Author (English): Chang, Ke-Chi
Title (Chinese): 相機雜訊模型學習
Title (English): Learning Camera-Aware Noise Models
Advisor (Chinese): 陳煥宗
Advisor (English): Chen, Hwann-Tzong
Committee members (Chinese): 林彥宇, 陳嘉平, 劉庭祿
Committee members (English): Lin, Yen-Yu; Chen, Chia-Ping; Liu, Tyng-Luh
Degree: Master's
Institution: National Tsing Hua University
Department: Department of Computer Science
Student ID: 107062557
Year of publication (ROC calendar): 109 (2020)
Academic year of graduation: 108
Language: English
Number of pages: 39
Keywords (Chinese): 雜訊生成模型 (generative noise model)
Keywords (English): Noise, Denoising, GANs
摘要 (Abstract in Chinese, translated):
Modeling image noise is an important problem in image processing and computer vision. Many noise models have been formulated with statistical methods, but a gap remains between these models and real-world noise. To address this, we propose in this thesis a data-driven approach that learns a generative noise model from a limited set of paired clean and noisy images. Our method is camera-aware: it learns, in a single model, features that represent the noise of different cameras, and can then generate noise specific to a particular camera. The experimental results clearly show that our method outperforms existing statistical models and deep-learning-based noise models, both in benchmark metrics and in the quality of the generated noisy images.
Modeling imaging sensor noise is a fundamental problem for image processing and computer vision applications. While most previous works adopt statistical noise models, real-world noise is far more complicated and beyond what these models can describe. To tackle this issue, we propose a data-driven approach, where a generative noise model is learned from real-world noise. The proposed noise model is camera-aware, that is, different noise characteristics of different camera sensors can be learned simultaneously, and a single learned noise model can generate different noise for different camera sensors. Experimental results show that our method quantitatively and qualitatively outperforms existing statistical noise models and learning-based methods.
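
The record contains no code, but the abstract describes the core idea: a generative noise model, conditioned on the camera, trained on paired clean and noisy images. Below is a minimal illustrative sketch in PyTorch of what such a camera-aware generator could look like. It is not the thesis's actual architecture; the class name, layer sizes, embedding dimension, and camera count are assumptions made for illustration only.

# Minimal sketch (not the thesis's architecture): a generator that takes a
# clean image plus a learned per-camera embedding and predicts a noise map.
import torch
import torch.nn as nn

class CameraAwareNoiseGenerator(nn.Module):
    def __init__(self, num_cameras: int, embed_dim: int = 32):
        super().__init__()
        # One learned embedding per camera sensor, so a single model can
        # represent the noise characteristics of several sensors at once.
        self.camera_embedding = nn.Embedding(num_cameras, embed_dim)
        self.net = nn.Sequential(
            nn.Conv2d(3 + embed_dim + 1, 64, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),  # predicted noise map
        )

    def forward(self, clean: torch.Tensor, camera_id: torch.Tensor) -> torch.Tensor:
        b, _, h, w = clean.shape
        # Broadcast the camera embedding over the spatial dimensions.
        cam = self.camera_embedding(camera_id).view(b, -1, 1, 1).expand(b, -1, h, w)
        # A random map so the generator can sample diverse noise realizations.
        z = torch.randn(b, 1, h, w, device=clean.device)
        noise = self.net(torch.cat([clean, cam, z], dim=1))
        return clean + noise  # synthetic noisy image for the given camera

# Usage: synthesize noise "as if" captured by the camera with index 2.
generator = CameraAwareNoiseGenerator(num_cameras=5)
noisy = generator(torch.rand(1, 3, 64, 64), torch.tensor([2]))

In a GAN setup such as the one the abstract implies, this generator would be trained against a discriminator that distinguishes real noisy images of each camera from generated ones; that adversarial training loop is omitted here.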
Table of Contents:
List of Tables 5
List of Figures 6
摘要 (Abstract in Chinese) 8
Abstract 9
Introduction 10
Related work 12
Our Approach 14
Experiments 20
Application to Real Image Denoising 32
Conclusion and Future Work 35
Bibliography 36