
Author (Chinese): 鄭澄遠
Author (English): Cheng, Cheng-Yuan
Thesis Title (Chinese): 具有線性與非線性自表徵的分散式多視圖子空間聚類
Thesis Title (English): Distributed Multi-View Subspace Clustering with Linear and Nonlinear Self-Representation
Advisor (Chinese): 洪樂文
Advisor (English): Hong, Yao-Win Peter
Committee Members (Chinese): 陳祝嵩, 王奕翔
Committee Members (English): Chen, Chu-Song; Wang, I-Hsiang
Degree: Master's
Institution: National Tsing Hua University
Department: Institute of Communications Engineering
Student ID: 107064542
Year of Publication (ROC calendar): 110 (2021)
Graduating Academic Year: 109
Language: English
Number of Pages: 60
Keywords (Chinese): 多視角分群, 子空間分群, 分散式學習, 譜嵌入, 自表示, 錨表示, 深度學習
Keywords (English): Multi-view Clustering, Subspace Clustering, Distributed Learning, Spectral Embedding, Self-representation, Anchor Representation, Deep Learning
Abstract (Chinese, translated): Multi-view subspace clustering aims to explore the intrinsic structure of data by fusing complementary information across multiple views, where the views can be regarded as different feature extractions of the data or observations of an object from different angles. This thesis studies the distributed multi-view subspace clustering problem for the setting in which data from different views are stored on multiple edge devices, with a particular focus on learning clustering-oriented data representations. We adopt a subspace clustering method with auto-weighted spectral embedding to ensure that the clustering results of the local devices remain consistent. Under a master-slave distributed architecture, each edge device performs single-view sparse subspace clustering on its own view of the data, while the central node coordinates the local devices through the spectral embedding to obtain a globally consistent clustering result. In addition, we incorporate deep learning and anchor-representation techniques: the former makes the model better suited to datasets that do not conform to linear subspaces, while the latter achieves higher computational and communication efficiency. Within the anchor representation, we further consider adaptive anchors so that the anchor vectors can better fulfill their representational role in the latent space. Optimization proceeds by alternating optimization, in which the representation matrices at the edge devices and the global cluster indicator matrix are optimized in turn until convergence. Our method attains comparable accuracy with less information exchange while preserving data privacy. We also present experimental results on several public datasets to demonstrate the feasibility of the proposed method.
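As a toy illustration of the anchor representation mentioned in the abstract above: instead of an n-by-n self-representation matrix, each sample is expressed over m << n anchor vectors, shrinking the matrices that must be computed and exchanged. This sketch uses random anchor selection and a ridge-regularized fit; the function name and these simplifications are illustrative, not the thesis's formulation (which learns adaptive anchors in a latent space).

```python
import numpy as np

def anchor_representation(X, m, lam=0.1, seed=0):
    """Represent n samples by m << n anchors: X ~ A @ Z.

    Columns of X are samples. A holds m anchor vectors drawn from the data
    (a naive stand-in for learned/adaptive anchors), and Z is the m-by-n
    anchor-representation coefficient matrix.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    A = X[:, rng.choice(n, size=m, replace=False)]  # naive anchor init
    # Ridge-regularized least squares: Z = (A^T A + lam I)^{-1} A^T X
    Z = np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ X)
    return A, Z
```

Because downstream spectral steps can operate on the m-by-n matrix Z rather than an n-by-n affinity, both the per-device computation and the amount of data sent to the central node scale with m instead of n.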
Abstract (English): Multi-view subspace clustering aims to discover the inherent structure of data by fusing complementary information across views. This work examines a distributed multi-view clustering problem in which the data associated with different views are stored across multiple edge devices, and we focus on learning representations for clustering. A subspace clustering method with auto-weighted spectral embedding is adopted to ensure that the clustering solution is consistent among the local edge devices. Under a master-slave architecture, clustering solutions are computed separately at the edge devices based on their locally available single-view datasets, but are coordinated through a spectral clustering regularization term at the central node. Moreover, we incorporate deep learning and an anchor-representation technique: the former makes the model more suitable for data that do not conform to linear subspaces, and the latter improves computational and communication efficiency. Adaptive anchor selection is also considered to learn anchor vectors that better fit the anchor representation in the latent space. The optimization is performed using an alternating approach in which the local representation matrices and the global cluster indicator matrix are optimized in turn until convergence. We also report experimental results on several public datasets, which demonstrate the effectiveness of the proposed method.
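The alternating scheme described in the abstract can be sketched as follows. This is a minimal, self-contained toy under stated simplifications: each view's self-representation uses a ridge penalty rather than the sparse formulation, the local update is computed once rather than re-coupled to the global embedding, and the auto-weighting follows the common inverse-square-root heuristic; all function names are illustrative, not the thesis's implementation.

```python
import numpy as np

def local_representation(X, gamma=0.1):
    # Ridge-regularized self-representation: C = argmin ||X - X C||^2 + gamma ||C||^2
    # (a simplification of the sparse subspace clustering objective)
    n = X.shape[1]
    G = X.T @ X
    C = np.linalg.solve(G + gamma * np.eye(n), G)
    np.fill_diagonal(C, 0.0)  # forbid trivial self-representation
    return C

def affinity_laplacian(C):
    W = 0.5 * (np.abs(C) + np.abs(C).T)  # symmetric affinity from coefficients
    return np.diag(W.sum(axis=1)) - W    # unnormalized graph Laplacian

def global_embedding(laplacians, weights, k):
    # Spectral embedding from the weighted sum of per-view Laplacians
    L = sum(w * Lv for w, Lv in zip(weights, laplacians))
    _, vecs = np.linalg.eigh(L)
    return vecs[:, :k]  # k smallest eigenvectors as the cluster indicator matrix F

def alternate(views, k, iters=5):
    # (1) each "edge device" builds a Laplacian from its local self-representation
    #     (in the thesis this update would also depend on the current F)
    Ls = [affinity_laplacian(local_representation(X)) for X in views]
    weights = np.ones(len(views)) / len(views)
    F = None
    for _ in range(iters):
        # (2) the "central node" fuses the per-view graphs into one embedding
        F = global_embedding(Ls, weights, k)
        # (3) auto-weighting: down-weight views whose graph disagrees with F
        fit = np.array([np.trace(F.T @ Lv @ F) for Lv in Ls])
        weights = 1.0 / (2.0 * np.sqrt(fit + 1e-12))
        weights /= weights.sum()
    return F
```

Running k-means on the rows of the returned F would then yield the final cluster labels; only the per-view graphs and the shared embedding cross the device boundary, not the raw data.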
Abstract
Contents
1 Introduction
2 Background and Related Works
2.1 Linear Multi-view Subspace Clustering
2.2 Non-linear Subspace Clustering
2.3 Anchor Graph-based Multi-view Clustering
3 Preliminaries and System Model
4 Distributed Multi-View Subspace Clustering with Linear Self-Representation
4.1 Auto-weighting of the Regularization at Local Nodes
4.2 Optimization
4.2.1 Optimization of C^(v) with given F_ℓ, D_ℓ and w^(v)_{ℓ+1}
4.2.2 Optimization of F given C^(v)_{ℓ+1} and w^(v)_{ℓ+1}
4.3 Anchor Initialization
4.4 Convergence Analysis
4.5 Considerations on Normalized Spectral Embedding
5 Distributed Deep Multi-View Subspace Clustering with Neural-Based Representation
5.1 Distributed Training Procedure
5.2 Adaptive Anchor
5.3 Complexity
6 Experiments
6.1 Experimental Settings
6.2 Comparison Methods
6.3 Datasets
6.4 Evaluation Metrics
6.5 Experiment Results and Discussion
6.5.1 Results on Proposed Distributed Self-Representation Methods
6.5.2 Results on Proposed Distributed Anchor-Based Methods
6.5.3 Performance Degradation of LDMSC
6.5.4 Communication Overhead under the Distributed Scenario
6.5.5 Parameter Sensitivity
7 Conclusion
Appendix A Parameter Setting of Experiments
Appendix B Structure Setting of Neural Networks
Bibliography