
Detailed Record

Author (Chinese): 陳奕昇
Author (English): Chen, Yi Sheng
Title (Chinese): 基於顯示性質的圖像編輯
Title (English): Attribute-Based Image Editing
Advisor (Chinese): 陳煥宗
Advisor (English): Chen, Hwann Tzong
Committee Members (Chinese): 賴尚宏, 劉庭祿
Committee Members (English): Lai, Shang Hong; Liu, Tyng Luh
Degree: Master's
Institution: National Tsing Hua University
Department: Department of Computer Science
Student ID: 103062573
Year of Publication (ROC era): 105 (2016)
Graduation Academic Year: 104
Language: English
Number of Pages: 31
Keywords (Chinese): 計算機視覺, 圖像表示, 類神經網路
Keywords (English): Computer Vision, Image Representation, Neural Network
Usage Statistics:
  • Recommendations: 0
  • Views: 459
  • Rating: *****
  • Downloads: 4
  • Bookmarks: 0
Abstract (Chinese):
This thesis presents a new approach to interactive image editing: adjusting an image according to its "attributes". Many current neural-network-based image editing methods require the user to have considerable familiarity and skill with image editing in order to modify the extent and content of a designated region. To reduce the complexity of image editing and to speed up execution, we provide a simply constructed yet structurally complete image editing system and use it to implement two fine-grained editing operations effectively. The first performs "boundary adjustment" using the "region size" attribute; the second performs "color tone adjustment" using the "color" attribute.
We use a deep convolutional neural network to segment an image into several regions, each carrying the attribute information mentioned above. In the boundary adjustment operation, we apply seam carving to resize a chosen region. In the color tone adjustment operation, we retrieve a color-attribute entry from a prepared image database and modify the color tone of the designated region accordingly. Experimental results show that the proposed method completes image edits quickly and effectively and produces realistic results.
Abstract (English):
This thesis presents a novel concept of interactive image editing: editing an image by its attributes. We aim to alleviate the difficulty of conventional image editing methods, which often require the user to be experienced and skillful in manipulating the regions and contents of an image. We introduce an attribute-based image editing framework and demonstrate two plausible editing tasks that can be carried out effectively within it. The first is boundary adjustment with respect to the 'semantic region size' attribute; the second is color transfer with respect to the 'natural color' attribute. We use the Fully Convolutional Network (FCN) model to segment a given image into several semantic regions and then characterize each region by the aforementioned attributes. For the boundary adjustment task, we adopt the seam carving method to adjust the semantic region size attribute of a selected region, allowing the user to change the composition of the image interactively. For the color transfer task, we model the natural color attribute of the selected region by referring to the distribution of color attributes in a database. The experimental results show that our method is efficient and readily produces visually pleasing editing results.
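To make the segmentation and attribute-extraction step concrete, here is a minimal sketch, assuming PyTorch and torchvision are installed. It uses torchvision's pretrained FCN-ResNet50 as a stand-in for the thesis's FCN model (the original work builds on Caffe), and `photo.jpg` is a placeholder filename:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained FCN-ResNet50 (Pascal VOC classes) as a stand-in segmenter;
# the thesis uses an FCN model trained with Caffe instead.
model = models.segmentation.fcn_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("photo.jpg").convert("RGB")   # placeholder filename
batch = preprocess(image).unsqueeze(0)           # shape (1, 3, H, W)

with torch.no_grad():
    logits = model(batch)["out"]                 # shape (1, 21, H, W)
labels = logits.argmax(dim=1).squeeze(0)         # per-pixel class labels, (H, W)

# 'Semantic region size' attribute: each region's share of the image area.
total = labels.numel()
for cls in labels.unique():
    size = (labels == cls).sum().item() / total
    print(f"class {cls.item():2d}: region size = {size:.3f}")
```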
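Boundary adjustment builds on seam carving. The following sketch removes one minimum-energy vertical seam from a grayscale NumPy image via dynamic programming; it uses a plain gradient-magnitude energy, whereas the thesis additionally modifies the energy map (Section 3.2.1) so that seams are steered into, or away from, the selected semantic region:

```python
import numpy as np

def energy(gray):
    """Gradient-magnitude energy map. The thesis further reweights this
    map so that seams concentrate in, or avoid, a chosen semantic region."""
    gy, gx = np.gradient(gray.astype(np.float64))
    return np.abs(gx) + np.abs(gy)

def remove_vertical_seam(gray):
    """Remove the minimum-energy vertical seam from a 2-D array."""
    h, w = gray.shape
    cost = energy(gray)
    # Forward pass: accumulate the cheapest path cost row by row.
    for i in range(1, h):
        left  = np.concatenate(([np.inf], cost[i - 1, :-1]))
        up    = cost[i - 1]
        right = np.concatenate((cost[i - 1, 1:], [np.inf]))
        cost[i] += np.minimum(np.minimum(left, up), right)
    # Backtrack the seam from the bottom row upward.
    seam = np.empty(h, dtype=np.int64)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
    # Delete one pixel per row.
    mask = np.ones((h, w), dtype=bool)
    mask[np.arange(h), seam] = False
    return gray[mask].reshape(h, w - 1)
```

Shrinking or growing a region's semantic region size attribute then amounts to repeatedly removing (or inserting) such seams after reweighting the energy inside that region.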
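A standard building block for the color transfer task is Reinhard et al.'s statistics matching, which shifts and scales each channel of the source so that its mean and standard deviation match a target. Below is a minimal sketch in RGB with NumPy; note that Reinhard et al. operate in the lαβ color space, and the thesis draws the target statistics from a database of color attributes rather than from a single reference image:

```python
import numpy as np

def transfer_color(source, target):
    """Match the per-channel mean/std of `source` to `target`.
    Both are float arrays of shape (H, W, 3) with values in [0, 1]."""
    src = source.reshape(-1, 3)
    tgt = target.reshape(-1, 3)
    src_mu, src_sigma = src.mean(axis=0), src.std(axis=0) + 1e-8
    tgt_mu, tgt_sigma = tgt.mean(axis=0), tgt.std(axis=0)
    out = (src - src_mu) / src_sigma * tgt_sigma + tgt_mu
    return np.clip(out, 0.0, 1.0).reshape(source.shape)
```

In the thesis's setting this adjustment would be applied only to the pixels of the selected semantic region, with the target mean and standard deviation sampled from the database's distribution of color attributes.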
Table of Contents:
1 Introduction 8
2 Related Work 10
2.1 Patch-Based Image Editing 10
2.2 Convolutional Neural Networks 11
2.3 Image Repainting and Rendering by Attributes 11
3 Attribute-Based Image Editing 14
3.1 Semantic Segmentation and Attribute Extraction 14
3.2 Boundary Adjustment 16
3.2.1 Percentage Adjustment and Energy Map Modification 16
3.2.2 Seam Carving Scheme 17
3.3 Color Transfer 18
3.3.1 Modification of Color Tone 18
3.3.2 Color Sliders Design 20
4 Experimental Results 22
4.1 Results of Image Editing Tasks 22
4.2 User Interface Design 23
5 Conclusion 28