[1] FATE - Federated AI Ecosystem.
[2] Federated Deep Learning in PaddlePaddle.
[3] TensorFlow Federated: Machine Learning on Decentralized Data.
[4] Bonawitz, K., Ivanov, V., Kreuter, B., Marcedone, A., McMahan, H. B., Patel, S., Ramage, D., Segal, A., and Seth, K. Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (New York, NY, USA, 2017), CCS '17, Association for Computing Machinery, pp. 1175–1191.
[5] Caldas, S., Duddu, S. M. K., Wu, P., Li, T., Konečný, J., McMahan, H. B., Smith, V., and Talwalkar, A. LEAF: A benchmark for federated settings, 2018.
[6] Caldas, S., Konečný, J., McMahan, H. B., and Talwalkar, A. Expanding the reach of federated learning by reducing client resource requirements, 2018.
[7] Chen, J., Pan, X., Monga, R., Bengio, S., and Jozefowicz, R. Revisiting distributed synchronous SGD, 2016.
[8] Chen, X., Chen, T., Sun, H., Wu, Z. S., and Hong, M. Distributed training with heterogeneous data: Bridging median- and mean-based algorithms, 2019.
[9] Chen, Y., Sun, X., and Jin, Y. Communication-efficient federated deep learning with asynchronous model update and temporally weighted aggregation, 2019.
[10] Geyer, R. C., Klein, T., and Nabi, M. Differentially private federated learning: A client level perspective, 2017.
[11] Ho, Q., Cipar, J., Cui, H., Kim, J. K., Lee, S., Gibbons, P. B., Gibson, G. A., Ganger, G. R., and Xing, E. P. More effective distributed ML via a stale synchronous parallel parameter server. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 1 (Red Hook, NY, USA, 2013), NIPS'13, Curran Associates Inc., pp. 1223–1231.
[12] Hu, C., Jiang, J., and Wang, Z. Decentralized federated learning: A segmented gossip approach, 2019.
[13] Kairouz, P., McMahan, H. B., Avent, B., Bellet, A., Bennis, M., Bhagoji, A. N., Bonawitz, K., Charles, Z., Cormode, G., Cummings, R., D'Oliveira, R. G. L., Rouayheb, S. E., Evans, D., Gardner, J., Garrett, Z., Gascón, A., Ghazi, B., Gibbons, P. B., Gruteser, M., Harchaoui, Z., He, C., He, L., Huo, Z., Hutchinson, B., Hsu, J., Jaggi, M., Javidi, T., Joshi, G., Khodak, M., Konečný, J., Korolova, A., Koushanfar, F., Koyejo, S., Lepoint, T., Liu, Y., Mittal, P., Mohri, M., Nock, R., Özgür, A., Pagh, R., Raykova, M., Qi, H., Ramage, D., Raskar, R., Song, D., Song, W., Stich, S. U., Sun, Z., Suresh, A. T., Tramèr, F., Vepakomma, P., Wang, J., Xiong, L., Xu, Z., Yang, Q., Yu, F. X., Yu, H., and Zhao, S. Advances and open problems in federated learning, 2019.
[14] Konečný, J., McMahan, H. B., Yu, F. X., Richtárik, P., Suresh, A. T., and Bacon, D. Federated learning: Strategies for improving communication efficiency, 2016.
[15] Li, T., Sahu, A. K., Zaheer, M., Sanjabi, M., Talwalkar, A., and Smith, V. Federated optimization in heterogeneous networks, 2018.
[16] Lin, Y., Han, S., Mao, H., Wang, Y., and Dally, W. J. Deep gradient compression: Reducing the communication bandwidth for distributed training, 2017.
[17] McMahan, H. B., Moore, E., Ramage, D., Hampson, S., and y Arcas, B. A. Communication-efficient learning of deep networks from decentralized data, 2016.
[18] Phong, L. T., Aono, Y., Hayashi, T., Wang, L., and Moriai, S. Privacy-preserving deep learning via additively homomorphic encryption. IEEE Transactions on Information Forensics and Security 13, 5 (2018), 1333–1345.
[19] Pillutla, K., Kakade, S. M., and Harchaoui, Z. Robust aggregation for federated learning, 2019.
[20] Ryffel, T., Trask, A., Dahl, M., Wagner, B., Mancuso, J., Rueckert, D., and Passerat-Palmbach, J. A generic framework for privacy preserving deep learning, 2018.
[21] Sattler, F., Wiedemann, S., Müller, K.-R., and Samek, W. Robust and communication-efficient federated learning from non-IID data, 2019.
[22] Shokri, R., and Shmatikov, V. Privacy-preserving deep learning. In 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton) (2015), pp. 909–910.
[23] Smith, V., Chiang, C.-K., Sanjabi, M., and Talwalkar, A. Federated multi-task learning, 2017.
[24] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15, 56 (2014), 1929–1958.
[25] Wang, L., Wang, W., and Li, B. CMFL: Mitigating communication overhead for federated learning. In 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS) (2019), pp. 954–964.
[26] Xie, C., Koyejo, S., and Gupta, I. Asynchronous federated optimization, 2019.
[27] Wen, Y., Li, W., Roth, H., and Dogra, P. Federated Learning powered by NVIDIA Clara.
[28] Zhao, Y., Li, M., Lai, L., Suda, N., Civin, D., and Chandra, V. Federated learning with non-IID data, 2018.
[29] Zhu, L., Lu, Y., Lin, Y., and Han, S. Distributed training across the world, 2020.