Author (Chinese): 蘇翁台
Author (English): Su, Weng-Tai
Thesis title (Chinese): 基於學習之影像與信號還原
Thesis title (English): Image and Signal Restoration based on Learning
Advisor (Chinese): 林嘉文
Advisor (English): Lin, Chia-Wen
Committee members (Chinese): 邱瀞德、孫明廷、黃朝宗、范國清、廖弘源
Committee members (English): Chiu, Ching-Te; Sun, Ming-Ting; Huang, Chao-Tsung; Fan, Kuo-Chin; Liao, Hong-Yuan
Degree: Doctoral
University: National Tsing Hua University
Department: Department of Electrical Engineering
Student ID: 105061805
Year of publication (ROC era): 111
Academic year of graduation: 110
Language: English
Number of pages: 151
Keywords (Chinese): 影像還原、深度學習、圖信號處理、人臉仿真、太赫茲成像、影像去噪、影像對比度提升
Keywords (English): Image restoration; Deep learning; Graph signal processing; Face hallucination; Terahertz (THz) imaging; Image denoising; Contrast enhancement
This dissertation focuses on learning-based image and signal restoration. The first part addresses deep learning-based image restoration, whose goal is to recover high-quality, sharp images with fine details from degraded, low-quality inputs. Deep learning-based image restoration has been shown to outperform traditional methods built on hand-crafted features and to achieve outstanding performance on computer vision problems. Taking deep learning as the foundation, this dissertation investigates how prior information can be exploited effectively in the design of deep network architectures to improve restoration performance. First, for face image super-resolution, although current deep learning-based architectures can produce visually pleasing results, the recovered faces often do not resemble the true faces, which hampers identity recognition, especially when the input resolution is very low. This dissertation therefore embeds identity information into the learning process through contrastive learning, so that the recovered face images retain the identity of the original faces in addition to achieving good visual quality.
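As an illustration of how identity information can be embedded into a hallucination objective via contrastive learning, the following is a minimal PyTorch sketch. It is not the dissertation's actual loss: the reconstruction/contrastive weighting, the temperature, and the pretrained face-embedding network id_net are all illustrative assumptions.

    # Minimal sketch (not the dissertation's exact loss): combine a pixel
    # reconstruction term with a contrastive term that pulls each hallucinated
    # face toward the embedding of its own ground-truth face and pushes it away
    # from other identities in the batch. `id_net` is a placeholder for any
    # pretrained face-embedding network.
    import torch
    import torch.nn.functional as F

    def identity_contrastive_loss(sr_faces, hr_faces, id_net, tau=0.1, lam=0.01):
        # Pixel-level reconstruction term.
        rec = F.l1_loss(sr_faces, hr_faces)
        # L2-normalized identity embeddings of restored and ground-truth faces.
        z_sr = F.normalize(id_net(sr_faces), dim=1)   # (B, D)
        z_hr = F.normalize(id_net(hr_faces), dim=1)   # (B, D)
        # InfoNCE-style contrastive term over the batch similarity matrix.
        logits = z_sr @ z_hr.t() / tau                # (B, B)
        labels = torch.arange(z_sr.size(0), device=sr_faces.device)
        con = F.cross_entropy(logits, labels)
        return rec + lam * con
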
In addition, this dissertation applies image restoration to terahertz (THz) computed tomography imaging. The inherent diffraction behavior and strong water absorption of THz waves introduce various kinds of noise and destroy object information such as depth. Existing studies have attempted to address this problem, but their methods remain limited by the diffraction of the THz beam. To overcome this limitation, this dissertation extracts rich spectral amplitude and spectral phase information from the frequency domain as prior knowledge and designs a deep network that learns useful complementary features from these two signals of different characteristics to guide the restoration of THz images, without any additional computational cost or equipment, thereby improving THz imaging results.
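A minimal sketch of the kind of frequency-domain prior described above: per-pixel spectral amplitude and phase extracted from THz time-domain traces with an FFT. The array shapes, band count, and function name are illustrative assumptions, not the dissertation's actual pipeline.

    # Minimal sketch: extract per-pixel spectral amplitude and phase bands from
    # THz time-domain measurements via an FFT. Band selection is illustrative.
    import numpy as np

    def thz_spectral_features(signals, n_bands=12):
        """signals: (H, W, T) time-domain THz traces, one trace per pixel."""
        spectrum = np.fft.rfft(signals, axis=-1)       # (H, W, T//2+1) complex
        amplitude = np.abs(spectrum)[..., :n_bands]    # spectral amplitude bands
        phase = np.angle(spectrum)[..., :n_bands]      # spectral phase bands
        return amplitude, phase

    # Usage on a dummy 64x64 scan with 256 time samples per pixel:
    amp, pha = thz_spectral_features(np.random.randn(64, 64, 256))
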


The second part of the dissertation concerns image and signal restoration based on graph signal learning. Although modern deep learning methods achieve excellent performance in a wide range of computer vision applications, the parameters of a deep model are learned purely from data and still cannot be explained mathematically, and performance degrades significantly when the training and testing data have different characteristics. This dissertation therefore introduces graph signal processing. We first exploit the prior knowledge that graph signals are smooth, introduce negative edge weights into the construction of the graph topology, and incorporate both positive and negative similarity information into the classical maximum a posteriori (MAP) formulation for classification problems. Unlike existing deep learning-based restoration methods, which do not rely on any explicit transform model but learn directly from large representative datasets, this dissertation combines graph signal processing with deep learning for image denoising: a new graph neural network (GNN) is built on graph signal priors and employs predefined analytical graph filters, which require no large training set; only a CNN is trained, end to end, to learn how to construct an appropriate graph topology, which alleviates the performance drop that occurs in practice when training and testing data differ. Finally, the dissertation extends this framework to joint image denoising and contrast enhancement. We propose a hybrid graph learning/analytical filtering algorithm that extends the above graph signal priors: spatial smoothness and piecewise smoothness model the illumination and contrast of an image, respectively, a CNN learns how to construct these two graph topologies of different characteristics, and positive edges are used to combat noise while the aforementioned negative edges are used to emphasize contrast.
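For reference, the classical MAP estimation with a graph-Laplacian regularizer referred to above can be written, in standard graph signal processing notation (the dissertation's exact objective may differ), as

    \hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \; \|\mathbf{y}-\mathbf{H}\mathbf{x}\|_2^2 + \mu\,\mathbf{x}^{\top}\mathbf{L}\mathbf{x},
    \qquad \mathbf{L} = \mathbf{D}-\mathbf{W}, \qquad
    \mathbf{x}^{\top}\mathbf{L}\mathbf{x} = \sum_{(i,j)\in\mathcal{E}} w_{i,j}\,(x_i - x_j)^2,

where y is the observation, H a degradation or sampling operator, and W the graph adjacency matrix. Positive weights w_{i,j} pull connected samples toward the same value (smoothness), while negative weights push them apart; admitting both is what allows positive and negative similarity information to enter the same MAP formulation, at the cost of L no longer being guaranteed positive semi-definite.
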
This dissertation focuses on the development and evaluation of image and signal restoration using learning techniques. The first part is concerned with deep learning-based image restoration, particularly aiming to recover high-resolution (HR) details from low-resolution (LR) face images for identity recognition and to restore corrupted terahertz images for tomographic reconstruction. To this end, we adopt deep learning as the backbone of data-driven learning and utilize prior information to devise effective deep neural networks for the two different restoration tasks. First, we propose a generative adversarial network (GAN)-based face hallucination scheme to recover high-resolution details of LR face images to boost the performance of identity recognition. Specifically, the proposed identity-preserving face hallucination GAN learns to recover HR face details while retaining the identity of the original LR face image by embedding the face's identity information into the learning process via contrastive learning. Second, we propose a novel physics-guided deep restoration network for terahertz (THz) tomographic imaging, an emerging field with great application potential in industrial inspection, security screening, chemical inspection, and non-destructive evaluation. THz imaging, however, suffers from its inherent diffraction behavior, strong water absorption, and low noise tolerance, which lead to undesired blurs and distortions of reconstructed THz images. The performance of existing restoration methods is highly constrained by the diffraction-limited THz signals. To address this problem, we propose a multi-view Subspace-Attention-guided Restoration Network (SARNet) that fuses multi-view and multi-spectral features of THz images for effective image restoration and 3D tomographic reconstruction. To this end, SARNet uses multi-scale branches to extract intra-view spatio-spectral amplitude and phase features and fuse them via shared subspace projection and self-attention guidance.
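As a rough illustration of fusing amplitude and phase features through a shared subspace projection with attention guidance, here is a minimal PyTorch sketch. The module name, channel sizes, and gating design are assumptions for illustration only and are far simpler than the actual SARNet fusion module.

    # Minimal sketch (assumptions, not the actual SARNet module): project
    # amplitude and phase feature maps into a shared subspace with a
    # weight-shared 1x1 convolution, then fuse the two branches with a
    # per-pixel attention gate derived from the projected features.
    import torch
    import torch.nn as nn

    class SharedSubspaceFusion(nn.Module):
        def __init__(self, in_ch=64, sub_ch=16):
            super().__init__()
            self.project = nn.Conv2d(in_ch, sub_ch, kernel_size=1)  # shared projection
            self.attend = nn.Sequential(
                nn.Conv2d(2 * sub_ch, in_ch, kernel_size=1),
                nn.Sigmoid(),                                       # per-pixel gate
            )

        def forward(self, feat_amp, feat_phase):
            s_amp, s_pha = self.project(feat_amp), self.project(feat_phase)
            gate = self.attend(torch.cat([s_amp, s_pha], dim=1))
            return feat_amp * gate + feat_phase * (1.0 - gate)      # fused features

    # Usage:
    # fused = SharedSubspaceFusion()(torch.randn(1, 64, 32, 32),
    #                                torch.randn(1, 64, 32, 32))
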


The second part of the dissertation deals with image and signal restoration based on graph signal learning. Although modern deep learning methods have achieved excellent performance in various computer vision applications, deep learning models are usually learned purely from data and cannot be explained mathematically. Moreover, the performance of a deep learning model can degrade significantly when there is a domain gap between the training and testing data. Therefore, this dissertation introduces graph signal processing (GSP) techniques, which use graph signal priors such as smoothness and sparsity, to achieve effective image and signal restoration. To this end, we introduce negative edge weights into the graph topology construction for a classification problem, and incorporate both positive and negative similarity information into the classical maximum a posteriori (MAP) formulation. Unlike existing deep learning-based restoration methods, which do not resort to any explicit transform model but learn mainly from data, this dissertation combines graph signal processing with deep learning for image denoising. Specifically, our method constructs a graph neural network (GNN) based on graph signal priors and utilizes analytical graph filters, which do not require learning. Our method optimizes restoration performance in an end-to-end manner solely by learning an appropriate graph topology, rather than the filters, at each layer. In this way, it achieves effective image denoising even when the training and testing data have different characteristics. Finally, we further address the problem of joint image denoising and contrast enhancement. We propose a hybrid graph learning/analytical filtering algorithm that extends the above graph signal priors: spatial smoothness and piecewise smoothness are used to model the illumination and reflectance components of an image, respectively. Our approach lets the GNN learn how to optimize and construct graph topologies based on these two smoothness priors, using positive edges to combat noise and negative edges to highlight contrast.
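To make the notion of an analytical (non-learned) graph filter concrete, the sketch below denoises a small patch with the closed-form graph-Laplacian-regularized solution x* = (I + μL)^(-1) y, building the graph from pixel intensities. In the dissertation the graph topology is instead produced by a learned CNN; the neighborhood structure, weighting, and parameters here are illustrative assumptions.

    # Minimal sketch of an analytical graph-filter denoising step: build a
    # 4-connected graph over the pixels of a patch, form the combinatorial
    # Laplacian L = D - W, and solve
    #   x* = argmin_x ||y - x||^2 + mu * x^T L x  =>  x* = (I + mu*L)^{-1} y.
    # Here edge weights come from pixel intensities purely for illustration.
    import numpy as np

    def glr_denoise_patch(patch, mu=4.0, sigma=0.1):
        h, w = patch.shape
        y = patch.reshape(-1)
        n = y.size
        W = np.zeros((n, n))
        for i in range(h):
            for j in range(w):
                p = i * w + j
                for di, dj in ((0, 1), (1, 0)):        # right and down neighbors
                    ii, jj = i + di, j + dj
                    if ii < h and jj < w:
                        q = ii * w + jj
                        wgt = np.exp(-((y[p] - y[q]) ** 2) / (2 * sigma ** 2))
                        W[p, q] = W[q, p] = wgt
        L = np.diag(W.sum(axis=1)) - W                 # combinatorial Laplacian
        x = np.linalg.solve(np.eye(n) + mu * L, y)     # closed-form GLR solution
        return x.reshape(h, w)

    # Usage on a random 8x8 patch:
    denoised = glr_denoise_patch(np.random.rand(8, 8))
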
Abstract I
Abstract (Chinese) IV
Acknowledgements VI
Contents VII
1 Overview of Dissertation . . . . . . . . . . . . . . . . . . . . . 1
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Contributions of Dissertation . . . . . . . . . . . . . . . . . .4
1.3 Dissertation Organization . . . . . . . . . . . . . . . . . . . .6
2 Part I: Image Restoration Based on Deep Learning . . . . . . . . . 7
3 Identity-Preserving Face Hallucination . . . . . . . . . . . . . . 9
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . .12
3.3 Overview of the Proposed Method . . . . . . . . . . . . . . . . 14
3.4 Identity-Preserving Face Hallucination . . . . . . . . . . . . .15
3.5 Experimental Results . . . . . . . . . . . . . . . . . . . . . .20
3.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4 Terahertz Tomographic Imaging . . . . . . . . . . . . . . . . . . 34
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . .39
4.3 Physics-Guided THz Imaging . . . . . . . . . . . . . . . . . . .41
4.4 Overview of the Proposed Method . . . . . . . . . . . . . . . . 46
4.5 Terahertz Tomographic Imaging . . . . . . . . . . . . . . . . . 48
4.6 Experimental Results . . . . . . . . . . . . . . . . . . . . . .56
4.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
5 Part II: Graph Learning-Based Image and Signal Restoration . . . .66
6 Graph Classifier Learning with Negative Edge Weights . . . . . . 68
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . .68
6.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . 70
6.3 Graph Smoothness . . . . . . . . . . . . . . . . . . . . . . . 73
6.4 Generalized Smoothness . . . . . . . . . . . . . . . . . . . . . 79
6.5 Graph Construction . . . . . . . . . . . . . . . . . . . . . . 83
6.6 Finding A Perturbation Matrix . . . . . . . . . . . . . . . . . 86
6.7 Algorithm Development . . . . . . . . . . . . . . . . . . . . . 90
6.8 Experimental Results . . . . . . . . . . . . . . . . . . . . . . 92
6.9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
7 Image Denoising Based on Analytical Graph Filters . . . . . . . 101
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 101
7.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . 104
7.3 Overview of the Proposed Method . . . . . . . . . . . . . . . .105
7.4 Analytical Graph Filters . . . . . . . . . . . . . . . . . . . 105
7.5 Experimental Results . . . . . . . . . . . . . . . . . . . . . 110
7.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . .113
8 Graph-based Joint Denoising and Contrast Enhancement . . . . . . .114
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 114
8.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . 119
8.3 Overview of the Proposed Method . . . . . . . . . . . . . . . .120
8.4 Dual Graph Filters . . . . . . . . . . . . . . . . . . . . . . 121
8.5 Experimental Results . . . . . . . . . . . . . . . . . . . . . 126
8.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . .129
9 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . .130
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132