
Detailed Record

Author (Chinese): 陳勇安
Author (English): Chen, Yung-An
Thesis Title (Chinese): 基於深度學習之自動上色瑕疵檢測
Thesis Title (English): Automatic Colorization Defects Inspection using Deep Learning Network
Advisor (Chinese): 朱宏國
Advisor (English): Chu, Hung-Kuo
Committee Members (Chinese): 王昱舜、姚智原
Committee Members (English): Wang, Yu-Shuen
Degree: Master's
Institution: National Tsing Hua University (國立清華大學)
Department: Department of Computer Science (資訊工程學系所)
Student ID: 105062578
Publication Year (ROC calendar): 108 (2019)
Graduation Academic Year: 107
Language: Chinese
Number of Pages: 33
Keywords (Chinese): 自動上色瑕疵檢測、顏色溢出瑕疵檢測
Keywords (English): Defect inspection; Color bleeding detection; Deep learning
Usage statistics:
  • Recommendations: 0
  • Views: 482
  • Rating: *****
  • Downloads: 0
  • Bookmarks: 0
Abstract (Chinese): As deep learning has matured, image generation techniques such as domain transfer learning, generative adversarial networks, and supervised learning have seen very active use in image processing, artistic style transfer, and automatic colorization. Current research, however, rarely examines in depth the defects that appear in the generated images. Automatic colorization, for example, aims to colorize grayscale images, yet most existing results are unsatisfactory and typically show three significant defects: color bleeding, color vanishing, and color inconsistency. Color bleeding in particular also degrades results in the field of artistic style transfer. In view of this, this study proposes a deep learning model for colorization defect detection: we summarize the rules under which color bleeding occurs, design an algorithm that generates images containing color bleeding, and use it to build training and testing datasets. The model extracts features from the image under inspection with deep convolutions and then predicts the locations of color bleeding with deconvolutions. The predicted color bleeding defects can in turn be fed back to the colorization model to optimize its output and improve colorization quality.
Abstract (English): With the development of deep learning, many works have applied domain transfer learning, generative adversarial networks, and supervised learning to image processing, style transfer, and automatic colorization. However, little research has examined the defects introduced by these deep image generation methods. For example, automatic colorization aims to colorize grayscale images, but most results fall short of expectations and usually exhibit three significant defects: color bleeding, color vanishing, and color inconsistency. Among these, color bleeding is the most frequent. Motivated by this observation, this thesis proposes a deep learning model for color bleeding detection. We characterize the conditions under which color bleeding occurs, use them to design color bleeding generation algorithms, and produce datasets for both training and testing. Convolutional layers extract features from the input image, and deconvolutional layers predict where color bleeding occurs. With this information, the loss predicted by our color bleeding detection model can be used to optimize automatic colorization.
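Both abstracts describe the same pipeline: convolutional layers extract features from the image under inspection, and deconvolutional layers predict, per pixel, where color bleeding occurs; the network is trained on datasets produced by a color bleeding synthesis algorithm. The chapters that define the actual architecture (the "orange" and "blue" convolution units of Chapter 4) and its loss function are not reproduced in this record, so the PyTorch snippet below is only a minimal sketch of that general conv-down/deconv-up idea. The BleedingDetector class name, the layer widths, and the binary cross-entropy loss against a synthetic bleeding mask are illustrative assumptions, not the author's implementation.

```python
# Minimal sketch (assumptions, not the thesis's actual network): a small
# encoder-decoder that maps a colorized RGB image to a per-pixel probability
# map of color bleeding, trained with binary cross-entropy against a
# synthetically generated bleeding mask.
import torch
import torch.nn as nn

class BleedingDetector(nn.Module):  # hypothetical name
    def __init__(self):
        super().__init__()
        # Encoder: strided convolutions extract features and downsample.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
        )
        # Decoder: transposed ("de")convolutions upsample back to the input
        # resolution and emit one logit per pixel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        # Returns bleeding logits of shape (N, 1, H, W).
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    model = BleedingDetector()
    criterion = nn.BCEWithLogitsLoss()
    images = torch.rand(4, 3, 128, 128)                     # colorized images to inspect
    masks = torch.randint(0, 2, (4, 1, 128, 128)).float()   # synthetic bleeding masks
    loss = criterion(model(images), masks)
    loss.backward()
    print(float(loss))
```

A detector along these lines would in practice be deeper and would likely use skip connections to preserve the fine edge detail where bleeding happens; the sketch only illustrates the per-pixel supervision and the encoder-decoder structure that the abstract describes.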
Table of Contents:
Chinese Abstract
Abstract
Contents
List of Figures
1 Introduction
2 Related Work
3 Training Data Generation
3.1 Color Bleeding Rules
3.2 Color Bleeding Image Generation Algorithm
4 Model Architecture
4.1 Model Design
4.1.1 Orange Convolution Unit
4.1.2 Blue Convolution Unit
4.1.3 Loss Function
5 Experiments
5.1 Defect Detection Model
5.1.1 Model Test Results
5.1.2 Model Runtime Performance
5.1.3 Validation Results on the Test Data
5.2 Inferring Color Bleeding Regions by Reverse Engineering
5.2.1 Reverse-Engineered Color Bleeding Inference: Experimental Results
5.3 User Study
5.3.1 Test Data Preparation
5.3.2 User Study Procedure
5.3.3 User Study Results: Vote Counts
5.3.4 User Study Results: "No Color Bleeding" Votes vs. Model Predictions
5.3.5 User Study Results: "Possible Color Bleeding" Votes vs. Model Predictions
5.3.6 User Study Results: All Votes vs. Model Predictions
5.3.7 User Study Results: ANOVA of All Votes vs. Model Predictions
5.3.8 User Study Results: Voting Outcomes
5.3.9 User Study Results: Case 1 of Disagreement Between Votes and Model Prediction
5.3.10 User Study Results: Case 2 of Disagreement Between Votes and Model Prediction
6 Conclusion
6.1 Future Work
6.2 Conclusion
A Appendix
A.1 Color Bleeding Prediction Results on the Test Data
Bibliography
Electronic full text (restricted to internal access only)
Chinese and English abstracts