[1] F. Zhou, L. Liu, K. Zhang, G. Trajcevski, J. Wu, and T. Zhong, "DeepLink: A deep learning approach for user identity linkage," in Proceedings of IEEE Conference on Computer Communications (INFOCOM), 2018.
[2] T.-Y. Yang, C. G. Brinton, and C. Joe-Wong, "Predicting learner interactions in social learning networks," in Proceedings of IEEE Conference on Computer Communications (INFOCOM), 2018.
[3] Y. Bao, Y. Peng, and C. Wu, "Deep learning-based job placement in distributed machine learning clusters," in Proceedings of IEEE Conference on Computer Communications (INFOCOM), 2019.
[4] T. Ching, D. S. Himmelstein, B. K. Beaulieu-Jones, A. A. Kalinin, B. T. Do, G. P. Way, E. Ferrero, P.-M. Agapow, M. Zietz, M. M. Hoffman, et al., "Opportunities and obstacles for deep learning in biology and medicine," Journal of the Royal Society Interface, vol. 15, no. 141, p. 20170387, 2018.
[5] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), vol. 1, Jun. 2019, pp. 4171–4186.
[6] A. M. Kaplan, "If you love something, let it go mobile: Mobile marketing and mobile social media 4x4," Business Horizons, vol. 55, pp. 129–139, 2012.
[7] A. Hard, K. Rao, R. Mathews, S. Ramaswamy, F. Beaufays, S. Augenstein, H. Eichner, C. Kiddon, and D. Ramage, "Federated learning for mobile keyboard prediction," arXiv preprint arXiv:1811.03604, 2018.
[8] T. Yang, G. Andrew, H. Eichner, H. Sun, W. Li, N. Kong, D. Ramage, and F. Beaufays, "Applied federated learning: Improving Google keyboard query suggestions," arXiv preprint arXiv:1812.02903, 2018.
[9] S. Ramaswamy, R. Mathews, K. Rao, and F. Beaufays, "Federated learning for emoji prediction in a mobile keyboard," arXiv preprint arXiv:1906.04329, 2019.
[10] (2019). Cisco visual networking index: Global mobile data traffic forecast update. White paper.
[11] H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, "Communication-efficient learning of deep networks from decentralized data," in Proceedings of International Conference on Artificial Intelligence and Statistics (AISTATS), 2016.
[12] T. Nishio and R. Yonetani, "Client selection for federated learning with heterogeneous resources in mobile edge," in Proceedings of IEEE International Conference on Communications (ICC), 2019.
[13] Y. Zhang and M. Van der Schaar, "Reputation-based incentive protocols in crowdsourcing applications," in Proceedings of IEEE Conference on Computer Communications (INFOCOM), 2012.
[14] S. Shi, X. Chu, and B. Li, "MG-WFBP: Efficient data communication for distributed synchronous SGD algorithms," in Proceedings of IEEE Conference on Computer Communications (INFOCOM), 2019.
[15] Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, "A survey on mobile edge computing: The communication perspective," IEEE Communications Surveys & Tutorials, vol. 19, no. 4, pp. 2322–2358, 2017.
[16] S.-H. Hsu, C.-H. Lin, C.-Y. Wang, and W.-T. Chen, "Breaking bandwidth limitation for mission-critical IoT using semisequential multiple relays," IEEE Internet of Things Journal, vol. 5, no. 5, pp. 3316–3329, 2017.
[17] J. Konečný, H. B. McMahan, F. X. Yu, P. Richtarik, A. T. Suresh, and D. Bacon, "Federated learning: Strategies for improving communication efficiency," in Proceedings of Conference on Neural Information Processing Systems (NIPS) Workshop on Private Multi-Party Machine Learning, 2016.
[18] K. Yang, T. Jiang, Y. Shi, and Z. Ding, "Federated learning based on over-the-air computation," in Proceedings of IEEE International Conference on Communications (ICC), 2019, pp. 1–6.
[19] ——, "Federated learning via over-the-air computation," IEEE Transactions on Wireless Communications, vol. 19, no. 3, pp. 2022–2035, 2020.
[20] M. Mohri, G. Sivek, and A. T. Suresh, "Agnostic federated learning," in Proceedings of International Conference on Machine Learning (ICML), 2019.
[21] J. Wang and G. Joshi, "Cooperative SGD: A unified framework for the design and analysis of communication-efficient SGD algorithms," arXiv preprint arXiv:1808.07576, 2018.
[22] Y. Gao, Y. Chen, and K. R. Liu, "On cost-effective incentive mechanisms in microtask crowdsourcing," IEEE Transactions on Computational Intelligence and AI in Games, vol. 7, no. 1, pp. 3–15, 2014.
[23] S. A. Kravchenko and F. Werner, "Parallel machine scheduling problems with a single server," Mathematical and Computer Modelling, vol. 26, no. 12, pp. 1–11, 1997.
[24] A. Krizhevsky, "Learning multiple layers of features from tiny images," Citeseer, Tech. Rep., 2009.
[25] P. Rost, A. Banchs, I. Berberana, M. Breitbach, M. Doll, H. Droste, C. Mannweiler, M. A. Puente, K. Samdanis, and B. Sayadi, "Mobile network architecture evolution toward 5G," IEEE Communications Magazine, vol. 54, no. 5, pp. 84–91, 2016.
[26] N. Abbas, Y. Zhang, A. Taherkordi, and T. Skeie, "Mobile edge computing: A survey," IEEE Internet of Things Journal, vol. 5, no. 1, pp. 450–465, 2017.
[27] P. Schulz, A. Wolf, G. P. Fettweis, A. M. Waswa, D. M. Soleymani, A. Mitschele-Thiel, T. Dudda, M. Dod, M. Rehme, J. Voigt, et al., "Network architectures for demanding 5G performance requirements: Tailored toward specific needs of efficiency and flexibility," IEEE Vehicular Technology Magazine, vol. 14, no. 2, pp. 33–43, 2019.
[28] X. Ran, H. Chen, X. Zhu, Z. Liu, and J. Chen, "DeepDecision: A mobile deep learning framework for edge video analytics," in Proceedings of IEEE Conference on Computer Communications (INFOCOM), 2018.
[29] J. Xu, L. Chen, and P. Zhou, "Joint service caching and task offloading for mobile edge computing in dense networks," in Proceedings of IEEE Conference on Computer Communications (INFOCOM), 2018.
[30] Y. He, J. Ren, G. Yu, and Y. Cai, "Joint computation offloading and resource allocation in D2D-enabled MEC networks," in Proceedings of IEEE International Conference on Communications (ICC), 2019.
[31] Y. Xiao and M. Krunz, "QoE and power efficiency tradeoff for fog computing networks with fog node cooperation," in Proceedings of IEEE Conference on Computer Communications (INFOCOM), 2017.
[32] L. Tong and W. Gao, "Application-aware traffic scheduling for workload offloading in mobile clouds," in Proceedings of IEEE Conference on Computer Communications (INFOCOM), 2016.
[33] L. Huang, S. Bi, and Y. J. Zhang, "Deep reinforcement learning for online computation offloading in wireless powered mobile-edge computing networks," IEEE Transactions on Mobile Computing, 2019.
[34] S. Wang et al., "Dynamic service placement for mobile micro-clouds with predicted future costs," IEEE Transactions on Parallel and Distributed Systems, vol. 28, pp. 1002–1016, 2016.
[35] L. Wang, L. Jiao, J. Li, and M. Mühlhäuser, "Online resource allocation for arbitrary user mobility in distributed edge clouds," in Proceedings of IEEE International Conference on Distributed Computing Systems (ICDCS), 2017.
[36] L. Gu, D. Zeng, W. Li, S. Guo, A. Zomaya, and H. Jin, "Deep reinforcement learning based VNF management in geo-distributed edge computing," in Proceedings of IEEE International Conference on Distributed Computing Systems (ICDCS), 2019.
[37] J. Park, S. Samarakoon, M. Bennis, and M. Debbah, "Wireless network intelligence at the edge," Proceedings of the IEEE, vol. 107, no. 11, pp. 2204–2239, 2019.
[38] C. Zhang, H. Du, Q. Ye, C. Liu, and H. Yuan, "DMRA: A decentralized resource allocation scheme for multi-SP mobile edge computing," in Proceedings of IEEE International Conference on Distributed Computing Systems (ICDCS), 2019.
[39] Y. Zhao, M. Li, L. Lai, N. Suda, D. Civin, and V. Chandra, "Federated learning with non-IID data," arXiv preprint arXiv:1806.00582, 2018.
[40] C. Xie, K. Huang, P.-Y. Chen, and B. Li, "DBA: Distributed backdoor attacks against federated learning," in Proceedings of International Conference on Learning Representations (ICLR), 2020.
[41] Z. Sun, P. Kairouz, A. T. Suresh, and H. B. McMahan, "Can you really backdoor federated learning?" arXiv preprint arXiv:1911.07963, 2019.
[42] K. Bonawitz, V. Ivanov, B. Kreuter, A. Marcedone, H. B. McMahan, S. Patel, D. Ramage, A. Segal, and K. Seth, "Practical secure aggregation for federated learning on user-held data," in Proceedings of Conference on Neural Information Processing Systems (NIPS) Workshop on Private Multi-Party Machine Learning, 2016.
[43] K. Bonawitz, H. Eichner, W. Grieskamp, D. Huba, A. Ingerman, V. Ivanov, C. M. Kiddon, J. Konečný, S. Mazzocchi, B. McMahan, T. V. Overveldt, D. Petrou, D. Ramage, and J. Roselander, "Towards federated learning at scale: System design," in Proceedings of Conference on Systems and Machine Learning (SysML), 2019.
[44] V. Smith, C.-K. Chiang, M. Sanjabi, and A. S. Talwalkar, "Federated multi-task learning," in Proceedings of Conference on Neural Information Processing Systems (NIPS), 2017.
[45] Q. Meng, W. Chen, Y. Wang, Z.-M. Ma, and T.-Y. Liu, "Convergence analysis of distributed stochastic gradient descent with shuffling," Neurocomputing, vol. 337, pp. 46–57, 2019.
[46] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Proceedings of Conference on Neural Information Processing Systems (NIPS), 2014.
[47] R. Yonetani, T. Takahashi, A. Hashimoto, and Y. Ushiku, "Decentralized learning of generative adversarial networks from multi-client non-IID data," arXiv preprint arXiv:1905.09684, 2019.
[48] R. Anil, G. Pereyra, A. T. Passos, R. Ormandi, G. Dahl, and G. Hinton, "Large scale distributed neural network training through online distillation," in Proceedings of International Conference on Learning Representations (ICLR), 2018.
[49] E. Jeong, S. Oh, H. Kim, J. Park, M. Bennis, and S.-L. Kim, "Communication-efficient on-device machine learning: Federated distillation and augmentation under non-IID private data," arXiv preprint arXiv:1811.11479, 2018.
[50] S. Wang, T. Tuor, T. Salonidis, K. K. Leung, C. Makaya, T. He, and K. Chan, "When edge meets learning: Adaptive control for resource constrained distributed machine learning," in Proceedings of IEEE Conference on Computer Communications (INFOCOM), 2018.
[51] ——, "Adaptive federated learning in resource constrained edge computing systems," IEEE Journal on Selected Areas in Communications, vol. 37, no. 6, pp. 1205–1221, 2019.
[52] N. H. Tran, W. Bao, A. Zomaya, N. M. NH, and C. S. Hong, "Federated learning over wireless networks: Optimization model design and analysis," in Proceedings of IEEE Conference on Computer Communications (INFOCOM), 2019.
[53] L. P. Kaelbling, M. L. Littman, and A. W. Moore, "Reinforcement learning: A survey," Journal of Artificial Intelligence Research, vol. 4, pp. 237–285, 1996.
[54] X. Wang, Y. Han, C. Wang, Q. Zhao, X. Chen, and M. Chen, "In-edge AI: Intelligentizing mobile edge computing, caching and communication by federated learning," IEEE Network, vol. 33, no. 5, pp. 156–165, 2019.
[55] D. P. Williamson and D. B. Shmoys, The Design of Approximation Algorithms. Cambridge University Press, 2011.
[56] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al., "TensorFlow: A system for large-scale machine learning," in Proceedings of USENIX Symposium on Operating Systems Design and Implementation (OSDI), 2016.
[57] F. Chollet et al., Keras, https://keras.io, 2015.
[58] (2020). Google cloud storage pricing, [Online]. Available: https://cloud.google.com/storage/pricing#operations-pricing.
[59] (2020). CoinMarketCap, [Online]. Available: https://coinmarketcap.com/currencies/datum/.