[1] V. Pérez-Rosas, M. Abouelenien, R. Mihalcea, and M. Burzo, “Deception detection using real-life trial data,” in Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, pp. 59–66, 2015.
[2] L. Cui and D. Lee, “Coaid: Covid-19 healthcare misinformation dataset,” arXiv preprint arXiv:2006.00885, 2020.
[3] D. Bloch, J.-M. Fournier, D. Gonçalves, and Á. Pina, “Trends in public finance: Insights from a new detailed dataset,” 2016.
[4] X. Sun, P. Zhang, J. K. Liu, J. Yu, and W. Xie, “Private machine learning classification based on fully homomorphic encryption,” IEEE Transactions on Emerging Topics in Computing, vol. 8, no. 2, pp. 352–364, 2018.
[5] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang, “Deep learning with differential privacy,” in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 308–318, 2016.
[6] J. Konečný, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D. Bacon, “Federated learning: Strategies for improving communication efficiency,” in NIPS Workshop on Private Multi-Party Machine Learning, 2016.
[7] L. Melis, C. Song, E. De Cristofaro, and V. Shmatikov, “Exploiting unintended feature leakage in collaborative learning,” in 2019 IEEE Symposium on Security and Privacy (SP), pp. 691–706, IEEE, 2019.
[8] C. Fung, C. J. Yoon, and I. Beschastnikh, “Mitigating sybils in federated learning poisoning,” arXiv preprint arXiv:1808.04866, 2018.
[9] B. Hitaj, G. Ateniese, and F. Perez-Cruz, “Deep models under the gan: Information leakage from collaborative deep learning,” in Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 603–618, 2017.
[10] Y. Lu, X. Huang, Y. Dai, S. Maharjan, and Y. Zhang, “Blockchain and federated learning for privacy-preserved data sharing in industrial iot,” IEEE Transactions on Industrial Informatics, vol. 16, no. 6, pp. 4177–4186, 2020.
[11] Y. Li, C. Chen, N. Liu, H. Huang, Z. Zheng, and Q. Yan, “A blockchain-based decentralized federated learning framework with committee consensus,” IEEE Network, vol. 35, no. 1, pp. 234–241, 2021.
[12] D. C. Nguyen, M. Ding, Q.-V. Pham, P. N. Pathirana, L. B. Le, A. Seneviratne, J. Li, D. Niyato, and H. V. Poor, “Federated learning meets blockchain in edge computing: Opportunities and challenges,” IEEE Internet of Things Journal, vol. 8, no. 16, pp. 12806–12825, 2021.
[13] C. Papageorgiou and T. Poggio, “A trainable system for object detection,” International Journal of Computer Vision, vol. 38, no. 1, pp. 15–33, 2000.
[14] A. I. Khan and S. Al-Habsi, “Machine learning in computer vision,” Procedia Computer Science, vol. 167, pp. 1444–1451, 2020.
[15] L. Deng, G. Hinton, and B. Kingsbury, “New types of deep neural network learning for speech recognition and related applications: An overview,” in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 8599–8603, IEEE, 2013.
[16] K. Chowdhary, “Natural language processing,” Fundamentals of Artificial Intelligence, pp. 603–649, 2020.
[17] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
[18] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255, IEEE, 2009.
[19] X. Yu, T. Liu, X. Wang, and D. Tao, “On compressing deep models by low rank and sparse decomposition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7370–7379, 2017.
[20] M. Zhu and S. Gupta, “To prune, or not to prune: Exploring the efficacy of pruning for model compression,” arXiv preprint arXiv:1710.01878, 2017.
[21] Y. Xu, Y. Wang, A. Zhou, W. Lin, and H. Xiong, “Deep neural network compression with single and multiple level quantization,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, 2018.
[22] P. Ren, Y. Xiao, X. Chang, P.-Y. Huang, Z. Li, X. Chen, and X. Wang, “A comprehensive survey of neural architecture search: Challenges and solutions,” ACM Computing Surveys (CSUR), vol. 54, no. 4, pp. 1–34, 2021.
[23] Y. Kim and A. M. Rush, “Sequence-level knowledge distillation,” arXiv preprint arXiv:1606.07947, 2016.
[24] M. Denil, B. Shakibi, L. Dinh, M. Ranzato, and N. De Freitas, “Predicting parameters in deep learning,” Advances in Neural Information Processing Systems, vol. 26, 2013.
[25] Y. Lu, A. Kumar, S. Zhai, Y. Cheng, T. Javidi, and R. Feris, “Fully-adaptive feature sharing in multi-task networks with applications in person attribute classification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5334–5343, 2017.
[26] S. Han, J. Pool, J. Tran, and W. Dally, “Learning both weights and connections for efficient neural network,” Advances in Neural Information Processing Systems, vol. 28, 2015.
[27] M. Courbariaux, Y. Bengio, and J.-P. David, “Binaryconnect: Training deep neural networks with binary weights during propagations,” Advances in Neural Information Processing Systems, vol. 28, 2015.
[28] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, “Xnor-net: Imagenet classification using binary convolutional neural networks,” in European Conference on Computer Vision, pp. 525–542, Springer, 2016.
[29] I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio, “Quantized neural networks: Training neural networks with low precision weights and activations,” The Journal of Machine Learning Research, vol. 18, no. 1, pp. 6869–6898, 2017.
[30] C. Zhu, S. Han, H. Mao, and W. J. Dally, “Trained ternary quantization,” arXiv preprint arXiv:1612.01064, 2016.
[31] S. Han, H. Mao, and W. J. Dally, “Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding,” arXiv preprint arXiv:1510.00149, 2015.
[32] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer, “Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <0.5 MB model size,” arXiv preprint arXiv:1602.07360, 2016.
[33] A. Mishra, E. Nurvitadhi, J. J. Cook, and D. Marr, “Wrpn: Wide reduced-precision networks,” arXiv preprint arXiv:1709.01134, 2017.
[34] T. Elsken, J. H. Metzen, and F. Hutter, “Neural architecture search: A survey,” The Journal of Machine Learning Research, vol. 20, no. 1, pp. 1997–2017, 2019.
[35] L.-C. Chen, M. Collins, Y. Zhu, G. Papandreou, B. Zoph, F. Schroff, H. Adam, and J. Shlens, “Searching for efficient multi-scale architectures for dense image prediction,” Advances in Neural Information Processing Systems, vol. 31, 2018.
[36] E. Real, A. Aggarwal, Y. Huang, and Q. V. Le, “Aging evolution for image classifier architecture search,” in AAAI Conference on Artificial Intelligence, vol. 2, p. 2, 2019.
[37] Z. Zhong, Z. Yang, B. Deng, J. Yan, W. Wu, J. Shao, and C.-L. Liu, “Blockqnn: Efficient block-wise neural network architecture generation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 7, pp. 2314–2328, 2020.
[38] G. Hinton, O. Vinyals, and J. Dean, “Distilling the knowledge in a neural network,” arXiv preprint arXiv:1503.02531, vol. 2, 2015.
[39] J. M. Joyce, “Kullback-leibler divergence,” in International Encyclopedia of Statistical Science, pp. 720–722, Springer, 2011.
[40] Z. Zhang and M. Sabuncu, “Generalized cross entropy loss for training deep neural networks with noisy labels,” Advances in Neural Information Processing Systems, vol. 31, 2018.
[41] W. Park, D. Kim, Y. Lu, and M. Cho, “Relational knowledge distillation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3967–3976, 2019.
[42] A. Malinin, B. Mlodozeniec, and M. Gales, “Ensemble distribution distillation,” arXiv preprint arXiv:1905.00076, 2019.
[43] S. Hegde, R. Prasad, R. Hebbalaguppe, and V. Kumar, “Variational student: Learning compact and sparser networks in knowledge distillation framework,” in ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3247–3251, IEEE, 2020.
[44] F. Cicalese, E. Laber, M. Molinaro, et al., “Teaching with limited information on the learner's behaviour,” in International Conference on Machine Learning, pp. 2016–2026, PMLR, 2020.
[45] M. Ji, B. Heo, and S. Park, “Show, attend and distill: Knowledge distillation via attention-based feature matching,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 7945–7952, 2021.
[46] J.-T. Yang, G.-M. Liu, and S. C.-H. Huang, “Emotion transformation feature: Novel feature for deception detection in videos,” in 2020 IEEE International Conference on Image Processing (ICIP), pp. 1726–1730, 2020.
[47] J.-T. Yang, G.-M. Liu, and S. C.-H. Huang, “Multimodal deception detection in videos via analyzing emotional state-based feature,” arXiv preprint arXiv:2104.08373, 2021.
[48] R. Mihalcea and M. Burzo, “Towards multimodal deception detection – step 1: Building a collection of deceptive videos,” in Proceedings of the 14th ACM International Conference on Multimodal Interaction, pp. 189–192, 2012.
[49] V. L. Rubin, Y. Chen, and N. K. Conroy, “Deception detection for news: Three types of fakes,” Proceedings of the Association for Information Science and Technology, vol. 52, no. 1, pp. 1–4, 2015.
[50] N. R. Council et al., The Polygraph and Lie Detection. National Academies Press, 2003.
[51] J. R. Simpson, “Functional MRI lie detection: Too good to be true?,” Journal of the American Academy of Psychiatry and the Law Online, vol. 36, no. 4, pp. 491–498, 2008.
[52] M. J. Farah, J. B. Hutchinson, E. A. Phelps, and A. D. Wagner, “Functional MRI-based lie detection: Scientific and societal challenges,” Nature Reviews Neuroscience, vol. 15, no. 2, pp. 123–131, 2014.
[53] N. Michael, M. Dilsizian, D. Metaxas, and J. K. Burgoon, “Motion profiles for deception detection using visual cues,” in European Conference on Computer Vision, pp. 462–475, Springer, 2010.
[54] M. Jaiswal, S. Tabibu, and R. Bajpai, “The truth and nothing but the truth: Multimodal analysis for deception detection,” in 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW), pp. 938–943, IEEE, 2016.
[55] F. A. Al-Simadi, “Detection of deceptive behavior: A cross-cultural test,” Social Behavior and Personality: An International Journal, vol. 28, no. 5, pp. 455–461, 2000.
[56] Z. Wu, B. Singh, L. Davis, and V. Subrahmanian, “Deception detection in videos,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, 2018.
[57] M. Gogate, A. Adeel, and A. Hussain, “Deep learning driven multimodal fusion for automated deception detection,” in 2017 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 1–6, IEEE, 2017.
[58] G. Krishnamurthy, N. Majumder, S. Poria, and E. Cambria, “A deep learning approach for multimodal deception detection,” arXiv preprint arXiv:1803.00344, 2018.
[59] H. Karimi, J. Tang, and Y. Li, “Toward end-to-end deception detection in videos,” in 2018 IEEE International Conference on Big Data (Big Data), pp. 1278–1283, IEEE, 2018.
[60] A. Vrij, Detecting Lies and Deceit: The Psychology of Lying and Implications for Professional Practice. Wiley, 2000.
[61] P. Buddharaju, J. Dowdall, P. Tsiamyrtzis, D. Shastri, I. Pavlidis, and M. G. Frank, “Automatic thermal monitoring system (athemos) for deception detection,” in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 2, p. 1179, IEEE, 2005.
[62] D. D. Langleben and J. C. Moriarty, “Using brain imaging for lie detection: Where science, law, and policy collide,” Psychology, Public Policy, and Law, vol. 19, no. 2, p. 222, 2013.
[63] G. Ganis, J. P. Rosenfeld, J. Meixner, R. A. Kievit, and H. E. Schendan, “Lying in the scanner: Covert countermeasures disrupt deception detection by functional magnetic resonance imaging,” NeuroImage, vol. 55, no. 1, pp. 312–319, 2011.
[64] S. Lu, G. Tsechpenakis, D. N. Metaxas, M. L. Jensen, and J. Kruse, “Blob analysis of the head and hands: A method for deception detection,” in Proceedings of the 38th Annual Hawaii International Conference on System Sciences, pp. 20c–20c, IEEE, 2005.
[65] P. Ekman, Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage (revised edition). WW Norton & Company, 2009.
[66] Z. Zhang, V. Singh, T. E. Slowe, S. Tulyakov, and V. Govindaraju, “Real-time automatic deceit detection from involuntary facial expressions,” in 2007 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–6, IEEE, 2007.
[67] P. Ekman, W. V. Freisen, and S. Ancoli, “Facial signs of emotional experience,” Journal of Personality and Social Psychology, vol. 39, no. 6, p. 1125, 1980.
[68] T. Qin, J. K. Burgoon, J. P. Blair, and J. F. Nunamaker, “Modality effects in deception detection and applications in automatic-deception-detection,” in Proceedings of the 38th Annual Hawaii International Conference on System Sciences, pp. 23b–23b, IEEE, 2005.
[69] J. Li, Y. Wang, C. Wang, Y. Tai, J. Qian, J. Yang, C. Wang, J. Li, and F. Huang, “Dsfd: Dual shot face detector,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5060–5069, 2019.
[70] D. Meng, X. Peng, K. Wang, and Y. Qiao, “Frame attention networks for facial expression recognition in videos,” in 2019 IEEE International Conference on Image Processing (ICIP), pp. 3866–3870, IEEE, 2019.
[71] M.-I. Georgescu, R. T. Ionescu, and M. Popescu, “Local learning with deep and handcrafted features for facial expression recognition,” IEEE Access, vol. 7, pp. 64827–64836, 2019.
[72] P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, “The extended cohn-kanade dataset (ck+): A complete dataset for action unit and emotion-specified expression,” in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 94–101, IEEE, 2010.
[73] I. J. Goodfellow, D. Erhan, P. L. Carrier, A. Courville, M. Mirza, B. Hamner, W. Cukierski, Y. Tang, D. Thaler, D.-H. Lee, et al., “Challenges in representation learning: A report on three machine learning contests,” in International Conference on Neural Information Processing, pp. 117–124, Springer, 2013.
[74] L. Kerkeni, Y. Serrestou, M. Mbarki, K. Raoof, M. A. Mahjoub, and C. Cleder, “Automatic speech emotion recognition using machine learning,” in Social Media and Machine Learning, IntechOpen, 2019.
[75] F. Eyben, M. Wöllmer, and B. Schuller, “Opensmile: The munich versatile and fast open-source audio feature extractor,” in Proceedings of the 18th ACM International Conference on Multimedia, pp. 1459–1462, 2010.
[76] B. Barras, “Sox: Sound exchange,” tech. rep., 2012.
[77] J.-T. Yang, S.-C. Kao, and S. C.-H. Huang, “Knowledge distillation with representative teacher keys based on attention mechanism for image classification model compression,” arXiv preprint arXiv:2206.12788, 2022.
[78] X. Lin, C. Zhao, and W. Pan, “Towards accurate binary convolutional neural network,” Advances in Neural Information Processing Systems, vol. 30, 2017.
[79] J. Gou, B. Yu, S. J. Maybank, and D. Tao, “Knowledge distillation: A survey,” International Journal of Computer Vision, vol. 129, no. 6, pp. 1789–1819, 2021.
[80] G. Chen, W. Choi, X. Yu, T. Han, and M. Chandraker, “Learning efficient object detection models with knowledge distillation,” Advances in Neural Information Processing Systems, vol. 30, 2017.
[81] Z. Meng, J. Li, Y. Zhao, and Y. Gong, “Conditional teacher-student learning,” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6445–6449, IEEE, 2019.
[82] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio, “Fitnets: Hints for thin deep nets,” arXiv preprint arXiv:1412.6550, 2014.
[83] J. Kim, S. Park, and N. Kwak, “Paraphrasing complex network: Network compression via factor transfer,” Advances in Neural Information Processing Systems, vol. 31, 2018.
[84] J. Yim, D. Joo, J. Bae, and J. Kim, “A gift from knowledge distillation: Fast optimization, network minimization and transfer learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4133–4141, 2017.
[85] H. Chen, Y. Wang, C. Xu, C. Xu, and D. Tao, “Learning student networks via feature embedding,” IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 1, pp. 25–35, 2020.
[86] T. Li, J. Li, Z. Liu, and C. Zhang, “Few sample knowledge distillation for efficient network compression,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14639–14647, 2020.
[87] C. Yang, L. Xie, C. Su, and A. L. Yuille, “Snapshot distillation: Teacher-student optimization in one generation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2859–2868, 2019.
[88] L. Sun, J. Gou, B. Yu, L. Du, and D. Tao, “Collaborative teacher-student learning via multiple knowledge transfer,” arXiv preprint arXiv:2101.08471, 2021.
[89] J. Ma and Q. Mei, “Graph representation learning via multi-task knowledge distillation,” arXiv preprint arXiv:1911.05700, 2019.
[90] E. J. Crowley, G. Gray, and A. J. Storkey, “Moonshine: Distilling with cheap convolutions,” Advances in Neural Information Processing Systems, vol. 31, 2018.
[91] S. Srinivas and F. Fleuret, “Knowledge transfer with jacobian matching,” in International Conference on Machine Learning, pp. 4723–4731, PMLR, 2018.
[92] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, “Reading digits in natural images with unsupervised feature learning,” 2011.
[93] A. Krizhevsky, V. Nair, and G. Hinton, “Cifar-10 (canadian institute for advanced research),” http://www.cs.toronto.edu/kriz/cifar.html, vol. 5, no. 4, p. 1, 2010.
[94] L. N. Darlow, E. J. Crowley, A. Antoniou, and A. J. Storkey, “Cinic-10 is not imagenet or cifar-10,” arXiv preprint arXiv:1810.03505, 2018.
[95] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
[96] S. Zagoruyko and N. Komodakis, “Wide residual networks,” arXiv preprint arXiv:1605.07146, 2016.
[97] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” Advances in Neural Information Processing Systems, vol. 30, 2017.
[98] G. Tang, M. Müller, A. Rios, and R. Sennrich, “Why self-attention? A targeted evaluation of neural machine translation architectures,” arXiv preprint arXiv:1808.08946, 2018.
[99] D. A. Hudson and C. D. Manning, “Compositional attention networks for machine reasoning,” arXiv preprint arXiv:1803.03067, 2018.
[100] H. Fukui, T. Hirakawa, T. Yamashita, and H. Fujiyoshi, “Attention branch network: Learning of attention mechanism for visual explanation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10705–10714, 2019.
[101] X. Yang, “An overview of the attention mechanisms in computer vision,” in Journal of Physics: Conference Series, vol. 1693, p. 012173, IOP Publishing, 2020.
[102] J. Nagi, F. Ducatelle, G. A. Di Caro, D. Cireşan, U. Meier, A. Giusti, F. Nagi, J. Schmidhuber, and L. M. Gambardella, “Max-pooling convolutional neural networks for vision-based hand gesture recognition,” in 2011 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), pp. 342–347, IEEE, 2011.
[103] D. Yu, H. Wang, P. Chen, and Z. Wei, “Mixed pooling for convolutional neural networks,” in International Conference on Rough Sets and Knowledge Technology, pp. 364–375, Springer, 2014.
[104] A. Stergiou, R. Poppe, and G. Kalliatakis, “Refining activation downsampling with softpool,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10357–10366, 2021.
[105] S. Zagoruyko and N. Komodakis, “Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer,” arXiv preprint arXiv:1612.03928, 2016.
[106] Y. Tian, D. Krishnan, and P. Isola, “Contrastive representation distillation,” arXiv preprint arXiv:1910.10699, 2019.
[107] J.-T. Yang, W.-Y. Chen, C.-H. Li, S.-C. Kao, S. C.-H. Huang, and H.-C. Wu, “A consortium blockchain-based personalized federated learning framework with adaptive threshold and attention-based knowledge distillation,” in submission, 2022.
[108] H.-N. Dai, Z. Zheng, and Y. Zhang, “Blockchain for internet of things: A survey,” IEEE Internet of Things Journal, vol. 6, no. 5, pp. 8076–8094, 2019.
[109] J. Lin, W. Yu, N. Zhang, X. Yang, H. Zhang, and W. Zhao, “A survey on internet of things: Architecture, enabling technologies, security and privacy, and applications,” IEEE Internet of Things Journal, vol. 4, no. 5, pp. 1125–1142, 2017.
[110] J. Konečný, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D. Bacon, “Federated learning: Strategies for improving communication efficiency,” arXiv preprint arXiv:1610.05492, 2016.
[111] Q. Yang, Y. Liu, T. Chen, and Y. Tong, “Federated machine learning: Concept and applications,” ACM Transactions on Intelligent Systems and Technology (TIST), vol. 10, no. 2, pp. 1–19, 2019.
[112] S. P. Karimireddy, S. Kale, M. Mohri, S. Reddi, S. Stich, and A. T. Suresh, “Scaffold: Stochastic controlled averaging for federated learning,” in International Conference on Machine Learning, pp. 5132–5143, PMLR, 2020.
[113] T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith, “Federated optimization in heterogeneous networks,” Proceedings of Machine Learning and Systems, vol. 2, pp. 429–450, 2020.
[114] X. Li, M. Jiang, X. Zhang, M. Kamp, and Q. Dou, “Fedbn: Federated learning on non-iid features via local batch normalization,” arXiv preprint arXiv:2102.07623, 2021.
[115] W. Lu, J. Wang, Y. Chen, X. Qin, R. Xu, D. Dimitriadis, and T. Qin, “Personalized federated learning with adaptive batchnorm for healthcare,” IEEE Transactions on Big Data, 2022.
[116] Y. Chen, W. Lu, X. Qin, J. Wang, and X. Xie, “Metafed: Federated learning among federations with cyclic knowledge distillation for personalized healthcare,” arXiv preprint arXiv:2206.08516, 2022.
[117] “Bitcoin,” 2009. Available: https://bitcoin.org/en/.
[118] “Ethereum,” 2015. Available: https://ethereum.org/en/.
[119] S. Porru, A. Pinna, M. Marchesi, and R. Tonelli, “Blockchain-oriented software engineering: Challenges and new directions,” in Proceedings of the 2017 IEEE/ACM 39th International Conference on Software Engineering Companion (ICSE-C), pp. 169–171, 2017.
[120] D. Guegan, “Public blockchain versus private blockchain,” halshs.archives-ouvertes.fr/halshs-01524440, 2017.
[121] “Quorum,” 2016. Available: https://consensys.net/quorum/.
[122] X. Xie, Q. Zheng, Z. Guo, Q. Wang, and X. Li, “Design and research on b2b trading platform based on consortium blockchains,” in Proceedings of the 2018 International Conference on Cloud Computing and Security (ICCCS), pp. 436–447, 2018.
[123] The Linux Foundation, Hyperledger Fabric, 2015. Available: https://www.hyperledger.org/use/fabric.
[124] S. Nakamoto, “Bitcoin: A peer-to-peer electronic cash system,” 2008. Available: https://bitcoin.org/bitcoin.pdf.
[125] D. Ongaro and J. Ousterhout, “In search of an understandable consensus algorithm,” in 2014 USENIX Annual Technical Conference (USENIX ATC 14), pp. 305–319, 2014.
[126] J. Chen, “Know your client (kyc),” 2021. Available: https://www.investopedia.com/terms/k/knowyourclient.asp.
[127] P. Bilic, P. F. Christ, E. Vorontsov, G. Chlebus, H. Chen, Q. Dou, C.-W. Fu, X. Han, P.-A. Heng, J. Hesser, et al., “The liver tumor segmentation benchmark (lits),” arXiv preprint arXiv:1901.04056, 2019.
[128] X. Xu, F. Zhou, B. Liu, D. Fu, and X. Bai, “Efficient multiple organ localization in ct image using 3d region proposal network,” IEEE Transactions on Medical Imaging, vol. 38, no. 8, pp. 1885–1898, 2019.
[129] A. Reiss and D. Stricker, “Introducing a new benchmarked dataset for activity monitoring,” in 2012 16th International Symposium on Wearable Computers, pp. 108–109, IEEE, 2012.
[130] J. Yang, R. Shi, D. Wei, Z. Liu, L. Zhao, B. Ke, H. Pfister, and B. Ni, “Medmnist v2: A large-scale lightweight benchmark for 2d and 3d biomedical image classification,” arXiv preprint arXiv:2110.14795, 2021.
[131] M. Yurochkin, M. Agarwal, S. Ghosh, K. Greenewald, N. Hoang, and Y. Khazaeni, “Bayesian nonparametric federated learning of neural networks,” in International Conference on Machine Learning, pp. 7252–7261, PMLR, 2019.
[132] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.