References

[1] T. Li et al., “Federated learning: Challenges, methods, and future directions,” IEEE Signal Process. Mag., vol. 37, no. 3, pp. 50–60, 2020.
[2] B. McMahan et al., “Communication-efficient learning of deep networks from decentralized data,” in Proc. AISTATS, 2017.
[3] Y. Zhao et al., “Federated learning with non-IID data,” arXiv preprint arXiv:1806.00582, 2018.
[4] J. Konečný et al., “Federated learning: Strategies for improving communication efficiency,” in Proc. NeurIPS Workshop on PMPML, 2016.
[5] H. Seo et al., “Federated knowledge distillation,” Machine Learning and Wireless Communications, pp. 457–485, 2022.
[6] I. Hegedűs, G. Danner, and M. Jelasity, “Gossip learning as a decentralized alternative to federated learning,” in Proc. IFIP DAIS, 2019.
[7] A. Ghosh, J. Chung, D. Yin, and K. Ramchandran, “An efficient framework for clustered federated learning,” in Proc. NeurIPS, 2020.
[8] F. Sattler et al., “Clustered federated learning: Model-agnostic distributed multitask optimization under privacy constraints,” IEEE Trans. Neural Netw. Learn. Syst., vol. 32, pp. 3710–3722, 2020.
[9] C. Briggs et al., “Federated learning with hierarchical clustering of local updates to improve training on non-IID data,” in Proc. IJCNN, 2020.
[10] G. Long et al., “Multi-center federated learning: Clients clustering for better personalization,” World Wide Web, vol. 26, pp. 481–500, 2023.
[11] A. Koloskova, S. Stich, and M. Jaggi, “Decentralized stochastic optimization and gossip algorithms with compressed communication,” in Proc. ICML, 2019.
[12] A. Koloskova, T. Lin, S. U. Stich, and M. Jaggi, “Decentralized deep learning with arbitrary communication compression,” in Proc. ICLR, 2020.
[13] N. Yoshida et al., “Hybrid-FL for wireless networks: Cooperative learning mechanism using non-IID data,” in Proc. IEEE ICC, pp. 1–7, 2020.
[14] H. Wang, Z. Kaplan, D. Niu, and B. Li, “Optimizing federated learning on non-IID data with reinforcement learning,” in Proc. IEEE INFOCOM, 2020.
[15] T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith, “Federated optimization in heterogeneous networks,” Proceedings of Machine Learning and Systems, vol. 2, pp. 429–450, 2020.
[16] S. P. Karimireddy, S. Kale, M. Mohri, S. Reddi, S. Stich, and A. T. Suresh, “SCAFFOLD: Stochastic controlled averaging for federated learning,” in Proc. ICML, pp. 5132–5143, 2020.
[17] H. Wang, M. Yurochkin, Y. Sun, D. Papailiopoulos, and Y. Khazaeni, “Federated learning with matched averaging,” arXiv preprint arXiv:2002.06440, 2020.
[18] M. Zhang, K. Sapra, S. Fidler, S. Yeung, and J. M. Alvarez, “Personalized federated learning with first order model optimization,” arXiv preprint arXiv:2012.08565, 2020.
[19] P.-N. Tan et al., Introduction to Data Mining. Pearson Education India, 2016.
[20] Z. Wang et al., “Efficient ring-topology decentralized federated learning with deep generative models for medical data in eHealthcare systems,” Electronics, vol. 11, p. 1548, 2022.
[21] S. P. Sturluson et al., “FedRAD: Federated robust adaptive distillation,” arXiv preprint arXiv:2112.01405, 2021.
[22] J. A. Hartigan and M. A. Wong, “Algorithm AS 136: A k-means clustering algorithm,” J. R. Stat. Soc. Ser. C Appl. Stat., vol. 28, pp. 100–108, 1979.
[23] Y.-C. Lin, J.-J. Kuo, W.-T. Chen, and J.-P. Sheu, “Reinforcement based communication topology construction for decentralized learning with non-IID data,” in Proc. IEEE GLOBECOM, 2021.
[24] R. J. Gould, “Advances on the Hamiltonian problem: A survey,” Graphs and Combinatorics, vol. 19, no. 1, pp. 7–52, 2003.
[25] D. P. Williamson and D. B. Shmoys, The Design of Approximation Algorithms. Cambridge University Press, 2011.
[26] R. Anderson et al., “Strong mixed-integer programming formulations for trained neural networks,” Math. Program., vol. 183, pp. 3–39, 2020.
[27] Y. Shi et al., “Beyond IID: Learning to combine non-IID metrics for vision tasks,” in Proc. AAAI, 2017.