Aggarwal, R., & Singh, N. (2023, February). An approach to learn structural similarity between decision trees using Hungarian algorithm. In Proceedings of 3rd International Conference on Recent Trends in Machine Learning, IoT, Smart Cities and Applications: ICMISC 2022 (pp. 185-199). Singapore: Springer Nature Singapore.
Agrawal, R., & Srikant, R. (2000, May). Privacy-preserving data mining. In Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data (pp. 439-450).
Alvarez-Melis, D., & Jaakkola, T. S. (2018). On the robustness of interpretability methods. arXiv preprint arXiv:1806.08049.
Arya, V., Bellamy, R. K., Chen, P. Y., Dhurandhar, A., Hind, M., Hoffman, S. C., ... & Zhang, Y. (2019). One explanation does not fit all: A toolkit and taxonomy of AI explainability techniques. arXiv preprint arXiv:1909.03012.
Bakirli, G., & Birant, D. (2017). DTreeSim: A new approach to compute decision tree similarity using re-mining. Turkish Journal of Electrical Engineering and Computer Sciences, 25(1), 108-125.
Bhosekar, A., & Ierapetritou, M. (2018). Advances in surrogate based modeling, feasibility analysis, and optimization: A review. Computers & Chemical Engineering, 108, 250-267.
Bobek, S., Bałaga, P., & Nalepa, G. J. (2021, June). Towards model-agnostic ensemble explanations. In International Conference on Computational Science (pp. 39-51). Springer, Cham.
Bogdanowicz, D., Giaro, K., & Wróbel, B. (2012). TreeCmp: Comparison of trees in polynomial time. Evolutionary Bioinformatics, 8, EBO-S9657.
Breiman, L., Friedman, J. H., Olshen, R. A., & Stone, C. J. (1984). Classification and regression trees.
Burrell, J. (2016). How the machine 'thinks': Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 2053951715622512.
Canete-Sifuentes, L., Monroy, R., & Medina-Perez, M. A. (2021). A review and experimental comparison of multivariate decision trees. IEEE Access, 9, 110451-110479.
Choi, M. (2018, February). Medical cost personal datasets. Kaggle. Retrieved May 01, 2023 from https://www.kaggle.com/datasets/mirichoi0218/insurance.
Giorgino, T. (2009). Computing and visualizing dynamic time warping alignments in R: The dtw package. Journal of Statistical Software, 31, 1-24.
Gunning, D., & Aha, D. (2019). DARPA's explainable artificial intelligence (XAI) program. AI Magazine, 40(2), 44-58.
Hancox-Li, L. (2020, January). Robustness in machine learning explanations: Does it matter? In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 640-647).
Honegger, M. (2018). Shedding light on black box machine learning algorithms: Development of an axiomatic framework to assess the quality of methods that explain individual predictions. arXiv preprint arXiv:1808.05054.
Hu, X., Rudin, C., & Seltzer, M. (2019). Optimal sparse decision trees. Advances in Neural Information Processing Systems, 32.
Islam, M. Z., & Brankovic, L. (2003). Noise addition for protecting privacy in data mining. In Engineering Mathematics and Applications Conference (pp. 85-90). Engineering Mathematics Group, ANZIAM.
Islam, M. Z., Barnaghi, P. M., & Brankovic, L. (2003, December). Measuring data quality: Predictive accuracy vs. similarity of decision trees. In 6th International Conference on Computer & Information Technology (Vol. 2, pp. 457-462).
Kamwa, I., Samantaray, S. R., & Joós, G. (2011). On the accuracy versus transparency trade-off of data-mining models for fast-response PMU-based catastrophe predictors. IEEE Transactions on Smart Grid, 3(1), 152-161.
Lakkaraju, H., Bach, S. H., & Leskovec, J. (2016, August). Interpretable decision sets: A joint framework for description and prediction. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1675-1684).
Lakkaraju, H., Kamar, E., Caruana, R., & Leskovec, J. (2017). Interpretable & explorable approximations of black box models. arXiv preprint arXiv:1707.01154.
Last, M., Maimon, O., & Minkov, E. (2002). Improving stability of decision trees. International Journal of Pattern Recognition and Artificial Intelligence, 16(02), 145-159.
Lewis, R. J. (2000, May). An introduction to classification and regression tree (CART) analysis. In Annual Meeting of the Society for Academic Emergency Medicine (Vol. 14). San Francisco, CA, USA: Department of Emergency Medicine, Harbor-UCLA Medical Center, Torrance.
Liew, C. K., Choi, U. J., & Liew, C. J. (1985). A data distortion by probability distribution. ACM Transactions on Database Systems (TODS), 10(3), 395-411.
Liu, K., Kargupta, H., & Ryan, J. (2005). Random projection-based multiplicative data perturbation for privacy preserving distributed data mining. IEEE Transactions on Knowledge and Data Engineering, 18(1), 92-106.
Lundberg, S. M., & Lee, S. I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30.
Miglio, R., & Soffritti, G. (2004). The comparison between classification trees through proximity measures. Computational Statistics & Data Analysis, 45(3), 577-593.
Mingers, J. (1989). An empirical comparison of selection measures for decision-tree induction. Machine Learning, 3, 319-342.
Mohseni, S., Zarei, N., & Ragan, E. D. (2021). A multidisciplinary survey and framework for design and evaluation of explainable AI systems. ACM Transactions on Interactive Intelligent Systems (TiiS), 11(3-4), 1-45.
Molnar, C. (2020). Interpretable machine learning. Lulu.com.
Ntoutsi, I., Kalousis, A., & Theodoridis, Y. (2008, April). A general framework for estimating similarity of datasets and decision trees: Exploring semantic similarity of decision trees. In Proceedings of the 2008 SIAM International Conference on Data Mining (pp. 810-821). Society for Industrial and Applied Mathematics.
Perner, P. (2013, March). How to compare and interpret two learnt decision trees from the same domain? In 2013 27th International Conference on Advanced Information Networking and Applications Workshops (pp. 318-322). IEEE.
Queipo, N. V., Haftka, R. T., Shyy, W., Goel, T., Vaidyanathan, R., & Tucker, P. K. (2005). Surrogate-based analysis and optimization. Progress in Aerospace Sciences, 41(1), 1-28.
Rath, T. M., & Manmatha, R. (2003, June). Word image matching using dynamic time warping. In 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Vol. 2, pp. II-II). IEEE.
Safavian, S. R., & Landgrebe, D. (1991). A survey of decision tree classifier methodology. IEEE Transactions on Systems, Man, and Cybernetics, 21(3), 660-674.
Sagi, O., & Rokach, L. (2020). Explainable decision forest: Transforming a decision forest into an interpretable tree. Information Fusion, 61, 124-138.
Sakoe, H., & Chiba, S. (1978). Dynamic programming algorithm optimization for spoken word recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing, 26(1), 43-49.
Sharma, H., & Kumar, S. (2016). A survey on decision tree algorithms of classification in data mining. International Journal of Science and Research (IJSR), 5(4), 2094-2097.
Song, Y. Y., & Ying, L. U. (2015). Decision tree methods: Applications for classification and prediction. Shanghai Archives of Psychiatry, 27(2), 130.
Sundararajan, M., Taly, A., & Yan, Q. (2017, July). Axiomatic attribution for deep networks. In International Conference on Machine Learning (pp. 3319-3328). PMLR.
Tormene, P., Giorgino, T., Quaglini, S., & Stefanelli, M. (2009). Matching incomplete time series with dynamic time warping: An algorithm and an application to post-stroke rehabilitation. Artificial Intelligence in Medicine, 45(1), 11-34.
Turney, P. (1995). Bias and the quantification of stability. Machine Learning, 20, 23-33.
Vilone, G., & Longo, L. (2021). Notions of explainability and evaluation approaches for explainable artificial intelligence. Information Fusion, 76, 89-106.
Weber, L., Lapuschkin, S., Binder, A., & Samek, W. (2022). Beyond explaining: Opportunities and challenges of XAI-based model improvement. Information Fusion.
Weinberg, A. I., & Last, M. (2019). Selecting a representative decision tree from an ensemble of decision-tree models for fast big data classification. Journal of Big Data, 6(1), 1-17.
Zhang, X., & Jiang, S. (2012). A splitting criteria based on similarity in decision tree learning. J. Softw., 7(8), 1775-1782.