Author (Chinese): 郭珈妤
Author (English): Guo, Karen
Title (Chinese): 藉由學習稀疏性表達特徵的方式偵測圖中視覺顯著區域
Title (English): Learning Sparse Feature Dictionary for Saliency Detection
Advisor (Chinese): 陳煥宗
Advisor (English): Chen, Hwann-Tzong
Committee Members (Chinese): 劉庭祿、賴尚宏
Committee Members (English): Liu, Tyng-Luh; Lai, Shang-Hong
Degree: Master
University: National Tsing Hua University
Department: Computer Science
Student ID: 100062516
Publication Year (ROC): 101 (2012)
Graduation Academic Year: 100
Language: English
Number of Pages: 22
Keywords (Chinese): 顯著性、稀疏式、字典、描述圖片方式、注視區域、學習方法
Keywords (English): Saliency, Sparse Coding, Dictionary, Feature, Fixation, Learning
Abstract (Chinese):
Saliency detection in images has become increasingly popular in computer vision research. In this thesis, we propose a new method for generating approximate saliency maps. The basic idea is to use sparse coding coefficients as features from which to compose the desired result. Our method consists of two parts: a training step and a testing step. In the training step, we use features found in the images, together with data on where human eyes fixate on those images, to learn an image-related dictionary and a way to convert sparse coefficients into the result. In the testing step, given a new image, we obtain its corresponding sparse codes from the feature-based dictionary and then generate the result. We evaluate our results with the shuffled AUC score on two image datasets and show that our method can use sparse coefficients to learn and produce saliency maps.
Abstract (English):
Saliency detection has become increasingly popular in computer vision research. In this thesis we present a new method for generating saliency maps. The basic idea is to use sparse coding coefficients as features and to find a way to reconstruct these sparse features into a saliency map. Our method consists of two parts: a training step and a testing step. In the training step, we use features extracted from images, together with fixation values from ground-truth fixation maps, to train a feature-based dictionary for sparse coding and a fixation-based dictionary for converting the sparse codes into a saliency map. In the testing step, given a new image, we obtain its corresponding sparse codes from the feature-based dictionary and then generate the result. We evaluate our results on two datasets with the shuffled AUC score and demonstrate that our method learns an effective sparse coding and combination for saliency detection.
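To make the two-step pipeline concrete, the following is a minimal sketch of the joint-dictionary idea described in the abstract. It uses scikit-learn's DictionaryLearning and sparse_encode as stand-ins for the online dictionary learning of Mairal et al.; the function names, array shapes, and the exact way the two dictionaries are coupled are illustrative assumptions, not the thesis implementation.

import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

def train_dictionaries(features, fixations, n_atoms=128, sparsity=1.0):
    # features: (n_patches, d) descriptors; fixations: (n_patches, m)
    # ground-truth fixation values for the same patches.
    # Learn one joint dictionary over the concatenated vectors, then split
    # it into a feature part and a fixation part that share sparse codes.
    stacked = np.hstack([features, fixations])
    learner = DictionaryLearning(n_components=n_atoms, alpha=sparsity,
                                 transform_algorithm="lasso_lars")
    learner.fit(stacked)
    D = learner.components_                  # (n_atoms, d + m)
    d = features.shape[1]
    return D[:, :d], D[:, d:]                # D_feat, D_fix

def predict_saliency(features, D_feat, D_fix, sparsity=1.0):
    # Encode test features against the feature dictionary only, then map
    # the shared sparse codes through the fixation dictionary.
    codes = sparse_encode(features, D_feat,
                          algorithm="lasso_lars", alpha=sparsity)
    return codes @ D_fix                     # (n_patches, m) saliency values

Reshaping the per-patch outputs back onto the image grid would then yield the saliency map.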
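The shuffled AUC score used for evaluation can be computed roughly as follows. This is a hedged sketch of the common formulation, in which positives are the saliency values at the image's own fixation points and negatives are values sampled at fixation points borrowed from other images, which discounts center bias; the sampling details are assumptions, not taken from the thesis.

import numpy as np
from sklearn.metrics import roc_auc_score

def shuffled_auc(saliency_map, fixations, other_fixations, seed=0):
    # saliency_map: 2-D array; fixations and other_fixations: (k, 2)
    # integer arrays of (row, col) coordinates, the latter pooled from
    # fixation maps of other images in the dataset.
    rng = np.random.default_rng(seed)
    pos = saliency_map[fixations[:, 0], fixations[:, 1]]
    idx = rng.choice(len(other_fixations), size=len(pos))  # with replacement
    neg = saliency_map[other_fixations[idx, 0], other_fixations[idx, 1]]
    labels = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    return roc_auc_score(labels, np.concatenate([pos, neg]))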
Table of Contents:
1 Introduction
2 Saliency from Sparse Coding
2.1 Problem Formulation
2.2 Dictionary Learning
2.3 Computing Local and Global Features
2.3.1 Dense SIFT
2.3.2 Global Color Distribution
2.3.3 Global Gabor Filter Response
3 Experiments
3.1 Dataset
3.2 Evaluation
3.3 Results
3.3.1 Predicting Saliency
3.3.2 Feature Selection
3.3.3 Different Weights between Features and Fixation Values
3.3.4 Training Dictionary and Combination Explicitly
4 Conclusion and Future Work