Author (Chinese): 林葶亞
Author (English): Lin, Ting-Ya
Title (Chinese): 應用於分散式多視角子空間分群之改良式錨點表示法
Title (English): Improved Anchor-representation for Distributed Multi-view Subspace Clustering
Advisor (Chinese): 洪樂文
Advisor (English): Hong, Yao-Win Peter
Oral Defense Committee: 溫朝凱, 楊明勳, 張正尚
Degree: Master's
University: National Tsing Hua University
Department: Institute of Communications Engineering
Student ID: 107064519
Publication Year (ROC): 111 (2022)
Academic Year of Graduation: 110
Language: English
Number of Pages: 43
Keywords (Chinese): 多視角分群, 子空間分群, 分散式學習, 錨表示, 深度學習
Keywords (English): multi-view clustering, subspace clustering, distributed learning, anchor-representation, deep learning
Abstract: Multi-view clustering aims to discover the underlying structure of data by exploring complementary information across multiple views. This work improves upon the anchor-representation method of [1]. Considering the real-world distributed scenario in which the data of different views are observed and stored on separate local devices, we propose a distributed multi-view subspace clustering method that uses an auto-weighted spectral embedding regularizer to ensure consistency across views. Unlike the conventional self-representation approach, the anchor-representation method expresses each data point in terms of a small set of anchor points, which reduces communication cost and computational complexity in the distributed setting. To prevent anchors from representing themselves and to stabilize the clustering results when a larger number of anchors is used, we impose an additional constraint on the representation matrix. Moreover, to handle nonlinear subspaces, we propose a deep neural network model with an additional anchor-extraction layer that builds the anchor dictionary efficiently and adaptively. By alternating the optimization between the central server and the local devices, our method obtains clustering results without explicit data exchange among the local devices. Experiments on several real-world public datasets demonstrate the effectiveness of the proposed method.
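A minimal, self-contained Python sketch of the general anchor-representation idea summarized above may help fix notation: each sample is approximated by a small dictionary of anchor points, and spectral clustering is run on the resulting n-by-m anchor graph rather than an n-by-n self-representation graph. This is a single-view illustration under assumed choices (k-means anchor selection, a ridge-regularized least-squares solver, and illustrative names such as anchor_spectral_clustering); it is not the thesis's IMP-LDMSC or IMP-NDMSC algorithm.

import numpy as np
from sklearn.cluster import KMeans

def select_anchors(X, m, seed=0):
    # Assumed anchor selection: use k-means centroids of the raw data as anchors.
    return KMeans(n_clusters=m, n_init=10, random_state=seed).fit(X).cluster_centers_

def anchor_representation(X, D, lam=1e-2):
    # Closed-form ridge solution of min_C ||X - C D||_F^2 + lam ||C||_F^2,
    # so each sample is expressed as a combination of the m anchors in D.
    m = D.shape[0]
    C = X @ D.T @ np.linalg.inv(D @ D.T + lam * np.eye(m))
    return np.maximum(C, 0.0)  # keep nonnegative affinities for the anchor graph

def anchor_spectral_clustering(X, m, k, lam=1e-2):
    # Cluster n samples into k groups via an n-by-m anchor graph (single view).
    D = select_anchors(X, m)
    C = anchor_representation(X, D, lam)
    C = C / (C.sum(axis=1, keepdims=True) + 1e-12)   # row-normalize affinities
    U, _, _ = np.linalg.svd(C, full_matrices=False)  # left singular vectors act as
    F = U[:, :k]                                     # the spectral embedding F
    return KMeans(n_clusters=k, n_init=10).fit_predict(F)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(c, 0.3, size=(100, 5)) for c in (0.0, 2.0, 4.0)])
    print(anchor_spectral_clustering(X, m=30, k=3)[:10])

In the thesis setting, the representation step would be carried out per view on each local device, with the central server aggregating the embeddings under the auto-weighted spectral regularizer; the sketch above only illustrates the anchor-graph construction itself.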
Abstract
Contents
1 Introduction
2 Related Works
2.1 Multi-view Subspace Clustering
2.2 Deep Subspace Clustering
2.3 Anchor-based Multi-view Clustering
3 System Model
4 Improved Anchor-representation for the Linear Approach
4.1 Optimization Strategy
4.1.1 Optimization of C^(v) with given F, D and w^(v)
4.1.2 Optimization of F given C^(v) and w^(v)
4.2 Complexity
5 Improved Anchor-representation for Nonlinear Approach
5.1 Network Architecture
5.2 Objective Function
5.3 Distributed Training Procedure
5.4 Complexity
6 Experiments
6.1 Experimental Settings
6.1.1 Dataset Descriptions
6.1.2 Comparison Methods
6.1.3 Evaluation Metrics
6.1.4 Implementation Details
6.2 Experimental Results
6.2.1 Performance Evaluation of IMP-LDMSC
6.2.2 Performance Evaluation of IMP-NDMSC
7 Conclusion
Appendix A Optimization of cluster indicator in the central server
Appendix B Network setting
[1] P.-C. Chang, C.-Y. Cheng, and Y.-W. Peter Hong, “Distributed multi-view subspace clustering via auto-weighted spectral embedding,” in 2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1–6, 2019.
[2] J. Liu, C. Wang, J. Gao, and J. Han, “Multiview clustering via joint nonnegative matrix factorization,” in Proceedings of the SIAM International Conference on Data Mining (SDM), pp. 252–260, 2013.
[3] L. Zong, X. Zhang, L. Zhao, H. Yu, and Q. Zhao, “Multi-view clustering via multi-manifold regularized non-negative matrix factorization,” Neural Networks, vol. 88, pp. 74–89, 2017.
[4] P. Luo, J. Peng, Z. Guan, and J. Fan, “Dual regularized multi-view non-negative matrix factorization for clustering,” Neurocomputing, vol. 294, pp. 1–11, 2018.
[5] S. Wei, J. Wang, G. Yu, C. Domeniconi, and X. Zhang, “Multi-view multiple clusterings using deep matrix factorization,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 6348–6355, Apr. 2020.
[6] A. Kumar, P. Rai, and H. Daume, “Co-regularized multi-view spectral clustering,” in Advances in Neural Information Processing Systems (J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, and K. Weinberger, eds.), vol. 24, Curran Associates, Inc., 2011.
[7] R. Xia, Y. Pan, L. Du, and J. Yin, “Robust multi-view spectral clustering via low-rank and sparse decomposition,” in Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, pp. 2149–2155, 2014.
[8] Y. Wang, L. Wu, X. Lin, and J. Gao, “Multiview spectral clustering via structured low-rank matrix factorization,” IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 10, pp. 4833–4843, 2018.
[9] X. Zhu, S. Zhang, W. He, R. Hu, C. Lei, and P. Zhu, “One-step multi-view spectral clustering,” IEEE Transactions on Knowledge and Data Engineering, vol. 31, no. 10, pp. 2022–2034, 2019.
[10] Z. Kang, G. Shi, S. Huang, W. Chen, X. Pu, J. T. Zhou, and Z. Xu, “Multi-graph fusion for multi-view spectral clustering,” Knowledge-Based Systems, vol. 189, pp. 105–112, 2020.
[11] H. Gao, F. Nie, X. Li, and H. Huang, “Multi-view subspace clustering,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 4229–4246, 2015.
[12] X. Cao, C. Zhang, H. Fu, S. Liu, and H. Zhang, “Diversity-induced multi-view subspace clustering,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 586–598, 2015.
[13] X. Wang, X. Guo, Z. Lei, C. Zhang, and S. Z. Li, “Exclusivity-consistency regularized multi-view subspace clustering,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 923–931, 2017.
[14] R. Vidal, “Subspace clustering,” IEEE Signal Processing Magazine, vol. 28, no. 2, pp. 52–68, 2011.
[15] G. Andrew, R. Arora, J. Bilmes, and K. Livescu, “Deep canonical correlation analysis,” in Proceedings of the 30th International Conference on Machine Learning (S. Dasgupta and D. McAllester, eds.), vol. 28 of Proceedings of Machine Learning Research, (Atlanta, Georgia, USA), pp. 1247–1255, PMLR, 17–19 Jun 2013.
[16] M. Abavisani and V. M. Patel, “Deep multimodal subspace clustering networks,” IEEE Journal of Selected Topics in Signal Processing, vol. 12, no. 6, pp. 1601–1614, 2018.
[17] P. Zhu, B. Hui, C. Zhang, D. Du, L. Wen, and Q. Hu, “Multi-view deep subspace clustering networks,” ArXiv, vol. abs/1908.01978, 2019.
[18] B. Cui, H. Yu, L. Zong, and Z. Cheng, “Self-guided deep multi-view subspace clustering network,” in IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6, 2021.
[19] X. Chen and D. Cai, “Large scale spectral clustering with landmark-based representation,” in Proceedings of Twenty-Fifth AAAI Conference on Artificial Intelligence, pp. 313–318, 2011.
[20] W. Liu, J. He, and S.-F. Chang, “Large graph construction for scalable semi-supervised learning,” in Proceedings of International Conference on Machine Learning (ICML), pp. 679–686, 2010.
[21] M. Wang, W. Fu, S. Hao, D. Tao, and X. Wu, “Scalable semi-supervised learning by efficient anchor graph regularization,” IEEE Transactions on Knowledge and Data Engineering, vol. 28, no. 7, pp. 1864–1877, 2016.
[22] J. Han, K. Xiong, and F. Nie, “Orthogonal and nonnegative graph reconstruction for large scale clustering,” in Proceedings of 26th International Joint Conference on Artificial Intelligence (IJCAI), pp. 1809–1815, 2017.
[23] P. Ji, T. Zhang, H. Li, M. Salzmann, and I. Reid, “Deep subspace clustering networks,” in Proceedings of Advances in Neural Information Processing Systems (NIPS), pp. 24–33, 2017.
[24] E. Elhamifar and R. Vidal, “Sparse subspace clustering: Algorithm, theory, and applications,” IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 35, no. 11, pp. 2765–2781, 2013.
[25] C.-G. Li and R. Vidal, “Structured sparse subspace clustering: A unified optimization framework,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 277–286, 2015.
[26] G. Liu, Z. Lin, S. Yan, J. Sun, Y. Yu, and Y. Ma, “Robust recovery of subspace structures by low-rank representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 171–184, 2013.
[27] S. Luo, C. Zhang, W. Zhang, and X. Cao, “Consistent and specific multi-view subspace clustering,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, Apr. 2018.
[28] C. Zhang, H. Fu, S. Liu, G. Liu, and X. Cao, “Low-rank tensor constrained multiview subspace clustering,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 1582–1590, 2015.
[29] K. Chaudhuri, S. M. Kakade, K. Livescu, and K. Sridharan, “Multi-view clustering via canonical correlation analysis,” in Proceedings of International Conference on Machine Learning (ICML), pp. 129–136, 2009.
[30] P. Zhou, Y. Hou, and J. Feng, “Deep adversarial subspace clustering,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1596–1604, 2018.
[31] J. Zhang, C.-G. Li, C. You, X. Qi, H. Zhang, J. Guo, and Z. Lin, “Self-supervised convolutional subspace clustering network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5473–5482, 2019.
[32] T. Zhang, P. Ji, M. Harandi, W. Huang, and H. Li, “Neural collaborative subspace clustering,” in Proceedings of International Conference on Machine Learning (ICML), pp. 7384–7393, 2019.
[33] M. Kheirandishfard, F. Zohrizadeh, and F. Kamangar, “Multi-level representation learning for deep subspace clustering,” in Proceedings of The IEEE Winter Conference on Applications of Computer Vision, pp. 2039–2048, 2020.
[34] W. Wang, R. Arora, K. Livescu, and J. Bilmes, “On deep multi-view representation learning,” in Proceedings of the 32nd International Conference on Machine Learning (F. Bach and D. Blei, eds.), vol. 37 of Proceedings of Machine Learning Research, (Lille, France), pp. 1083–1092, PMLR, 07–09 Jul 2015.
[35] A. Benton, H. Khayrallah, B. Gujral, D. A. Reisinger, S. Zhang, and R. Arora, “Deep generalized canonical correlation analysis,” arXiv preprint arXiv:1702.02519, 2017.
[36] C. Zhang, H. Fu, Q. Hu, X. Cao, Y. Xie, D. Tao, and D. Xu, “Generalized latent multi-view subspace clustering,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 1, pp. 86–99, 2020.
[37] Q. Wang, J. Cheng, Q. Gao, G. Zhao, and L. Jiao, “Deep multi-view subspace clustering with unified and discriminative learning,” IEEE Transactions on Multimedia, vol. 23, pp. 3483–3493, 2021.
[38] Y. Li, F. Nie, H. Huang, and J. Huang, “Large-scale multi-view spectral clustering via bipartite graph,” in Proceedings of Twenty-Ninth AAAI Conference on Artificial Intelligence, pp. 2750–2756, 2015.
[39] X. Li, H. Zhang, R. Wang, and F. Nie, “Multi-view clustering: A scalable and parameter-free bipartite graph fusion method,” IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2020.
[40] Z. Kang, W. Zhou, Z. Zhao, J. Shao, M. Han, and Z. Xu, “Large-scale multi-view subspace clustering in linear time,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 4412–4419, Apr. 2020.
[41] F. Nie, J. Li, and X. Li, “Parameter-free auto-weighted multiple graph learning: A framework for multiview clustering and semi-supervised classification,” in Proceedings of 25th International Joint Conference on Artificial Intelligence (IJCAI), pp. 1881–1887, 2016.
[42] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2010.
[43] Z. Lin, R. Liu, and Z. Su, “Linearized alternating direction method with adaptive penalty for low-rank representation,” in Proceedings of Advances in Neural Information Processing Systems (NIPS), pp. 612–620, 2011.
[44] S. Lloyd, “Least squares quantization in PCM,” IEEE Transactions on Information Theory, vol. 28, no. 2, pp. 129–137, 1982.
[45] D. Weinland, M. Özuysal, and P. Fua, “Making action recognition robust to occlusions and viewpoint changes,” in Computer Vision – ECCV 2010 (K. Daniilidis, P. Maragos, and N. Paragios, eds.), (Berlin, Heidelberg), pp. 635–648, Springer Berlin Heidelberg, 2010.
[46] K.-C. Lee, J. Ho, and D. J. Kriegman, “Acquiring linear subspaces for face recognition under variable lighting,” IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 27, no. 5, pp. 684–698, 2005.
[47] D. Greene and P. Cunningham, “Producing a unified graph representation from multiple social network views,” in Proceedings of the 5th Annual ACM Web Science Conference, pp. 118–121, 2013.
[48] L. Deng, “The MNIST database of handwritten digit images for machine learning research [best of the web],” IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 141–142, 2012.
[49] J. J. Hull, “A database for handwritten text recognition research,” IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 16, no. 5, pp. 550–554, 1994.
[50] H. W. Kuhn, “The Hungarian method for the assignment problem,” Naval Research Logistics Quarterly, vol. 2, no. 1-2, pp. 83–97, 1955.
[51] E. Achtert, S. Goldhofer, H.-P. Kriegel, E. Schubert, and A. Zimek, “Evaluation of clusterings – metrics and visual support,” in 2012 IEEE 28th International Conference on Data Engineering, pp. 1285–1288, 2012.
[52] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
 
 
 
 