|
[1] A. Cohen, A. Davis, N. Vasilache, and A. Zinenko, “Structured ops in mlir: Compiling loops, Libraries and DSLs,” https://mlir.llvm.org/docs/Dialects/Vector/#positioning-in-the-codegen-infrastructure,2019, accessed: 2024-05-22. [2] L. Breiman, “Random forests,” Machine learning, vol. 45, no. 1, pp.5–32, 2001. [3] J. H. Friedman, “Greedy function approximation: a gradient boosting machine,” Annals of statistics, pp. 1189–1232, 2001. [4] L. Breiman, “Random forests,” Machine Learning, vol. 45, no. 1, pp. 5–32, 2001. [Online]. Available: https://doi.org/10.1023/A:1010933404324 [5] D. Tripathi, A. Shukla, B. Reddy et al., “Credit scoring models using ensemble learning and classification approaches: A comprehensive survey,” Wireless Personal Communications, vol. 123, pp. 785–812, 2022. [Online]. Available: https://doi.org/10.1007/s11277-021-09158-9 [6] Y. Wu, L. Zhang, U. A. Bhatti, and M. Huang, “Interpretable machine learning for personalized medical recommendations: A lime-based approach,” Diagnostics, vol. 13, no. 16, 2023. [Online]. Available:https://www.mdpi.com/2075-4418/13/16/2681 [7] A. Shankar, P. Perumal, M. Subramanian et al., “An intelligent recommendation system in e-commerce using ensemble learning,” Multimedia Tools and Applications, vol. 83, pp. 48 521–48 537, 2024. [Online]. Available: https://doi.org/10.1007/s11042-023-17415-1 [8] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, A. Müller, J. Nothman, G. Louppe, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and Édouard Duchesnay, “Scikit-learn: Machine learning in python,” 2018. [9] T. Chen and C. Guestrin, “Xgboost: A scalable tree boosting system,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD ’16. New York, NY, USA: Association for Computing Machinery, 2016, p. 785–794. [Online]. Available: https://doi.org/10.1145/2939672.2939785 [10] C. Lattner, M. Amini, U. Bondhugula, A. Cohen, A. Davis, J. Pienaar, R. Riddle, T. Shpeisman, N. Vasilache, and O. Zinenko, “Mlir: Scaling compiler infrastructure for domain specific computation,” in 2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO). IEEE, 2021, pp. 2–14. [11] C. Lattner and V. Adve, “Llvm: A compilation framework for lifelong program analysis & transformation,” in International symposium on code generation and optimization, 2004. CGO 2004. IEEE, 2004, pp.75–86. [12] E. Tabanelli, G. Tagliavini, and L. Benini, “Optimizing random forest based inference on risc-v mcus at the extreme edge,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 41, no. 11, pp. 4516–4526, 2022. [13] C. Hakert, K.-H. Chen, and J.-J. Chen, “Flint: Exploiting floating point enabled integer arithmetic for efficient random forest inference,” 2022. [14] “IEEE standard for binary floating-point arithmetic,” ANSI/IEEE Std754-1985, pp. 1–20, 1985. [15] ISO/IEC JTC1/SC22/WG21, ISO/IEC 14882:2011 - Information Technology - Programming Languages - C++, 2011, https://isocpp.org/. [16] A. Prasad, S. Rajendra, K. Rajan, R. Govindarajan, and U. Bondhugula,“Treebeard: An optimizing compiler for decision tree based ml inference,” in IEEE/ACM International Symposium on Microarchitecture (MICRO), 2022, pp. 494–511. [17] “Intel machine learning benchmarks,” https://github.com/IntelPython/scikit-learn_bench, 2020. |