[1] Y. Koren, R. Bell, and C. Volinsky, “Matrix factorization techniques for recommender systems,” Computer, vol. 42, pp. 30-37, August 2009.
[2] A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky, “Tensor decompositions for learning latent variable models,” Journal of Machine Learning Research, vol. 15, pp. 2773-2832, 2014.
[3] D. Nion, K. N. Mokios, N. D. Sidiropoulos, and A. Potamianos, “Batch and adaptive PARAFAC-based blind separation of convolutive speech mixtures,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 18, pp. 1193-1207, August 2010.
[4] Y. Zhou, D. Wilkinson, R. Schreiber, and R. Pan, “Large-scale parallel collaborative filtering for the Netflix prize,” in Proceedings of Algorithmic Aspects in Information and Management (R. Fleischer and J. Xu, eds.), (Berlin, Heidelberg), pp. 337-348, Springer, 2008.
[5] R. Gemulla, E. Nijkamp, P. J. Haas, and Y. Sismanis, “Large-scale matrix factorization with distributed stochastic gradient descent,” in Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pp. 69-77, 2011.
[6] H.-F. Yu, C.-J. Hsieh, S. Si, and I. S. Dhillon, “Parallel matrix factorization for recommender systems,” Knowledge and Information Systems, vol. 41, pp. 793-819, December 2014.
[7] Y. Qian, C. Tan, N. Mamoulis, and D. W. Cheung, “DSANLS: Accelerating distributed nonnegative matrix factorization via sketching,” in Proceedings of the ACM International Conference on Web Search and Data Mining (WSDM), pp. 450-458, 2018.
[8] A. P. Liavas and N. D. Sidiropoulos, “Parallel algorithms for constrained tensor factorization via alternating direction method of multipliers,” IEEE Transactions on Signal Processing, vol. 63, pp. 5450-5463, October 2015.
[9] K. Huang, N. D. Sidiropoulos, and A. P. Liavas, “A flexible and efficient algorithmic framework for constrained matrix and tensor factorization,” IEEE Transactions on Signal Processing, vol. 64, pp. 5052-5065, October 2016.
[10] S. Zhu, M. Hong, and B. Chen, “Quantized consensus ADMM for multi-agent distributed optimization,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4134-4138, March 2016.
[11] S. Zhu and B. Chen, “Distributed average consensus with bounded quantization,” in Proceedings of the IEEE 17th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), pp. 1-6, July 2016.
[12] J. Matamoros, S. M. Fosson, E. Magli, and C. Antón-Haro, “Distributed ADMM for in-network reconstruction of sparse signals with innovations,” IEEE Transactions on Signal and Information Processing over Networks, vol. 1, no. 4, pp. 225-234, 2015.
[13] Y. Liu, W. Xu, G. Wu, Z. Tian, and Q. Ling, “Communication-censored ADMM for decentralized consensus optimization,” IEEE Transactions on Signal Processing, vol. 67, pp. 2565-2579, May 2019.
[14] W. Li, T. Chen, L. Li, and Q. Ling, “Communication-censored distributed stochastic gradient descent,” arXiv preprint arXiv:1909.03631, 2019.
[15] P. Xu, Z. Tian, Z. Zhang, and Y. Wang, “COKE: Communication-censored kernel learning via random features,” in 2019 IEEE Data Science Workshop (DSW), pp. 32-36, June 2019.
[16] J. Hua, C. Xia, and S. Zhong, “Differentially private matrix factorization,” in Proceedings of the 24th International Conference on Artificial Intelligence (IJCAI'15), pp. 1763-1770, 2015.
[17] F. Zhang, V. Lee, and K.-K. Choo, “Jo-DPMF: Differentially private matrix factorization learning through joint optimization,” Information Sciences, vol. 467, July 2018.
[18] E. Xue, R. Guo, F. Zhang, L. Wang, X. Zhang, and G. Qu, “Distributed differentially private matrix factorization based on ADMM,” in 2019 IEEE 21st International Conference on High Performance Computing and Communications; IEEE 17th International Conference on Smart City; IEEE 5th International Conference on Data Science and Systems (HPCC/SmartCity/DSS), pp. 2502-2507, August 2019.
[19] Z. Huang, R. Hu, Y. Guo, E. Chan-Tin, and Y. Gong, “DP-ADMM: ADMM-based distributed learning with differential privacy,” IEEE Transactions on Information Forensics and Security, vol. 15, pp. 1002-1012, 2020.
[20] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends® in Machine Learning, vol. 3, no. 1, pp. 1-122, 2011.
[21] W. Shi, Q. Ling, K. Yuan, G. Wu, and W. Yin, “On the linear convergence of the ADMM in decentralized consensus optimization,” IEEE Transactions on Signal Processing, vol. 62, pp. 1750-1761, April 2014.
[22] S. Wang, T. Tuor, T. Salonidis, K. K. Leung, C. Makaya, T. He, and K. Chan, “Adaptive federated learning in resource constrained edge computing systems,” IEEE Journal on Selected Areas in Communications, vol. 37, pp. 1205-1221, June 2019.
[23] F. M. Harper and J. A. Konstan, “The MovieLens datasets: History and context,” ACM Transactions on Interactive Intelligent Systems, vol. 5, no. 4, pp. 19:1-19:19, 2015.
[24] Yahoo! Webscope movie data set (version 1.0). http://research.yahoo.com/.
[25] F. Li, B. Wu, L. Xu, C. Shi, and J. Shi, “A fast distributed stochastic gradient descent algorithm for matrix factorization,” in BigMine, vol. 36 of JMLR Workshop and Conference Proceedings, pp. 77-87, JMLR.org, 2014.