Detailed Record

Author (Chinese): 潘婷蓁
Author (English): Pan, Ting-Jhen
Title (Chinese): 應用改良式簡化群體演算法優化聯邦式學習模型之訓練
Title (English): Federated Learning Model Training using Simplified Swarm Optimization
Advisor (Chinese): 葉維彰
Advisor (English): Yeh, Wei-Chang
Committee Members (Chinese): 賴智明、謝宗融、梁韵嘉
Committee Members (English): Lai, Chyh-Ming; Hsieh, Tsung-Jung; Liang, Yun-Chia
Degree: Master's
Institution: National Tsing Hua University
Department: Department of Industrial Engineering and Engineering Management
Student ID: 111034508
Year of Publication (ROC): 113 (2024)
Graduation Academic Year: 112
Language: Chinese
Number of Pages: 89
Keywords (Chinese): 聯邦式學習、改良式簡化群體演算法、超參數調校、萬用啟發式演算法
Keywords (English): Federated Learning; Hyperparameter Tuning; Improved Simplified Swarm Optimization; Meta-Heuristic Algorithm
Abstract (Chinese):
近年來隨著網路的發展,資訊與資料的數量迅速膨脹,機器學習亦隨之蓬勃發展,資料所帶來的價值因而提升,資料安全與隱私的管理也備受關注。為因應人工智慧安全性之相關議題,聯邦式學習是此類問題的有效解方:此學習架構能在不需集中數據的情況下,以模型資訊傳遞代替資料傳遞,利用終端裝置進行訓練,並在中央進行聚合以完成訓練。已有多篇論文證實聯邦式學習是一種有效的訓練方法。
聯邦式學習中有許多值得研究的議題,本研究針對客戶端(client)的訓練階段與中央伺服器(server)的聚合階段兩部分進行優化。已有學者利用萬用啟發式演算法(Meta-Heuristic Algorithm)能有效避免落入局部最佳解的特點進行優化,包含粒子群演算法(Particle Swarm Optimization, PSO)及基因演算法(Genetic Algorithm, GA)等。因此,本研究以改良式簡化群體演算法(Improved Simplified Swarm Optimization, iSSO)為基礎,提出更符合該問題的更新機制與適應度函數,以優化本地客戶端的初始模型參數;並提出以超參數調校客製化中央伺服器模型的聚合,在節省通訊成本的同時,更有效率地進行模型聚合與客戶端選取。透過此演算法的全局搜尋能力及其在連續型問題上的優勢,提升模型的準確度與收斂速度,為此領域帶來具有實際應用價值的方法與策略。
Abstract (English):
In recent years, with the rapid development of the internet, the amount of information and data has expanded exponentially. Machine learning has gained prominence in parallel, and the value derived from data has grown accordingly, so data security and privacy have become major concerns. To address these issues in AI safety, federated learning emerges as an effective solution. Its framework transmits model information instead of raw data, eliminating the need for centralized data: training is conducted on edge devices, and aggregation occurs centrally to complete the process. Several studies have confirmed that federated learning is an effective training method.
Within federated learning, there are numerous research topics to explore. This study focuses on optimizing both the training phase on client devices and the aggregation phase on the central server. Some scholars have exploited the ability of meta-heuristic algorithms, such as Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA), to avoid falling into local optima. Building on this, we propose an algorithm based on Improved Simplified Swarm Optimization (iSSO) that provides an update mechanism and fitness function tailored to optimizing the initial weights of local clients. Additionally, we introduce hyperparameter tuning to customize the aggregation of the server model, which lets us adjust the degree of communication-cost reduction while making model aggregation and client selection more efficient. Leveraging the algorithm's global search capability and its advantages on continuous problems, our approach aims to boost model accuracy and convergence speed, bringing methods and strategies with practical application value to this field.
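The two components the abstract describes can be sketched as follows. This is a minimal illustration, not the thesis's implementation: `fedavg_aggregate` is the standard FedAvg weighted average that federated frameworks build on, and `sso_update` is the classic stepwise update rule of Simplified Swarm Optimization, which iSSO refines (the thesis's modified update mechanism and fitness function are not reproduced here). All function and parameter names (`cg`, `cp`, `cw`, the search bounds) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fedavg_aggregate(client_weights, num_samples):
    """FedAvg-style aggregation: weighted average of client parameters.

    client_weights: one list of np.ndarray (layer-wise parameters) per client.
    num_samples: local dataset size of each client, used as the weight.
    """
    total = sum(num_samples)
    n_layers = len(client_weights[0])
    aggregated = []
    for layer in range(n_layers):
        # Each client's layer contributes in proportion to its data size.
        layer_sum = sum(w[layer] * (n / total)
                        for w, n in zip(client_weights, num_samples))
        aggregated.append(layer_sum)
    return aggregated

def sso_update(x, pbest, gbest, cg=0.4, cp=0.7, cw=0.9, low=-1.0, high=1.0):
    """One SSO update step on a continuous solution vector.

    For each dimension, a uniform random number rho decides whether the
    dimension copies the global best (rho < cg), the personal best
    (rho < cp), keeps its current value (rho < cw), or is re-sampled
    uniformly from [low, high).
    """
    rho = rng.random(x.shape)
    return np.where(rho < cg, gbest,
           np.where(rho < cp, pbest,
           np.where(rho < cw, x,
                    rng.uniform(low, high, x.shape))))
```

In a federated loop of the kind the abstract outlines, each client would refine its local model (here, the role the iSSO update plays), report its parameters and sample count to the server, and the server would aggregate them before broadcasting the result back.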
Table of Contents (thesis pagination):
Abstract (Chinese) i
Abstract (English) ii
Table of Contents iii
List of Figures vi
List of Tables viii
Chapter 1 Introduction 1
1.1 Research Background and Motivation 1
1.2 Research Objectives 4
1.3 Research Framework 5
Chapter 2 Literature Review 7
2.1 Federated Learning 7
2.1.1 Federated Averaging 7
2.1.2 Categories of Federated Learning 9
2.1.3 Issues in Federated Learning: Data Heterogeneity 11
2.1.4 Issues in Federated Learning: Privacy Protection 11
2.1.5 Issues in Federated Learning: Communication Cost 12
2.1.6 Summary 13
2.2 Optimization of Federated Learning 15
2.2.1 Model Aggregation 15
2.2.2 Client Selection 15
2.3 Convolutional Neural Networks 16
2.3.1 Convolutional Neural Networks 16
2.3.2 Model Evaluation Metrics 17
2.3.3 Applications in Federated Learning 17
2.4 Meta-Heuristic Algorithms 18
2.4.1 Meta-Heuristic Algorithms 18
2.4.2 Applications in Federated Learning 19
2.4.3 Improved Simplified Swarm Optimization (iSSO) 19
2.5 Hyperparameter Tuning 20
2.5.1 Bayesian Optimization 21
2.5.2 The Optuna Package (TPE-EI) 22
Chapter 3 Methodology 24
3.1 Client Training Phase 26
3.1.1 Encoding Strategy 26
3.1.2 Initialization of Solution 28
3.1.3 Update Mechanism 29
3.1.4 Fitness Function 30
3.2 Model Aggregation Phase 32
3.3 Method Framework and Workflow 33
Chapter 4 Experimental Results and Analysis 39
4.1 Experimental Environment 39
4.2 Benchmark Methods and Datasets 39
4.3 Parameters and Model Architectures 41
4.4 FediSSO Hyperparameter Tuning 43
4.4.1 Hyperparameter Optimization Method 43
4.4.2 Hyperparameter Search Results: MNIST Dataset 44
4.4.3 Hyperparameter Search Results: CIFAR-10 Dataset 47
4.4.4 Summary 50
4.5 Experimental Results 51
4.5.1 Model Evaluation 51
4.5.2 Statistical Validation: MNIST Dataset 55
4.5.3 Statistical Validation: CIFAR-10 Dataset 64
4.6 Communication Cost Evaluation 71
4.6.1 Transmission Frequency 71
4.6.2 Amount of Transmitted Data 72
Chapter 5 Conclusions and Future Work 75
5.1 Conclusions 75
5.2 Future Research Directions 76
References 77