[1] Su, C.T. and Hsiao, Y.H. (2007), "An evaluation of the robustness of MTS for imbalanced data," IEEE Transactions on Knowledge and Data Engineering, Vol. 19, No. 10, pp. 1321-1332.
[2] Su, C.T. and Hsiao, Y.H. (2009), "Multiclass MTS for simultaneous feature selection and classification," IEEE Transactions on Knowledge and Data Engineering, Vol. 21, No. 2, pp. 192-205.
[3] Su, C.T. (2013), "Quality Engineering: Off-line Methods and Applications," CRC Press.
[4] Woodall, W.H., Koudelik, R., Tsui, K.L., Kim, S.B., Stoumbos, Z.G. and Carvounis, C.P. (2003), "A review and analysis of the Mahalanobis-Taguchi system," Technometrics, Vol. 45, No. 1, pp. 1-15.
[5] Breiman, L. (1996), "Out-of-bag estimation," Technical Report, Statistics Department, University of California, Berkeley, CA.
[6] Khoshgoftaar, T.M., Van Hulse, J. and Napolitano, A. (2011), "Comparing boosting and bagging techniques with noisy and imbalanced data," IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, Vol. 41, No. 3, pp. 552-568.
[7] Breiman, L. (1996), "Bagging predictors," Machine Learning, Vol. 24, No. 2, pp. 123-140.
[8] Freund, Y. and Schapire, R.E. (1996), "Experiments with a new boosting algorithm," Proceedings of the International Conference on Machine Learning (ICML 1996), pp. 148-156.
[9] Yang, Q. and Wu, X. (2006), "10 challenging problems in data mining research," International Journal of Information Technology & Decision Making, Vol. 5, No. 4, pp. 597-604.
[10] Breiman, L., Friedman, J., Stone, C.J. and Olshen, R.A. (1984), "Classification and Regression Trees," CRC Press.
[11] Galar, M., Fernandez, A., Barrenechea, E., Bustince, H. and Herrera, F. (2012), "A review on ensembles for the class imbalance problem: bagging-, boosting-, and hybrid-based approaches," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), Vol. 42, No. 4, pp. 463-484.
[12] Schapire, R.E. (1990), "The strength of weak learnability," Machine Learning, Vol. 5, No. 2, pp. 197-227.
[13] Hanifah, F.S., Wijayanto, H. and Kurnia, A. (2015), "SMOTEBagging algorithm for imbalanced dataset in logistic regression analysis (case: credit of Bank X)," Applied Mathematical Sciences, Vol. 9, No. 138, pp. 6857-6865.
[14] Freund, Y. and Schapire, R.E. (1995), "A decision-theoretic generalization of on-line learning and an application to boosting," Proceedings of the European Conference on Computational Learning Theory, Springer, Berlin Heidelberg, pp. 23-37.
[15] Khoshgoftaar, T.M., Golawala, M. and Van Hulse, J. (2007), "An empirical study of learning from imbalanced data using random forest," Proceedings of the 19th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2007), Vol. 2.
[16] Breiman, L. (2001), "Random forests," Machine Learning, Vol. 45, No. 1, pp. 5-32.
[17] Sun, Y., et al. (2007), "Cost-sensitive boosting for classification of imbalanced data," Pattern Recognition, Vol. 40, No. 12, pp. 3358-3378.
[18] Guo, H. and Viktor, H.L. (2004), "Learning from imbalanced data sets with boosting and data generation: the DataBoost-IM approach," ACM SIGKDD Explorations Newsletter, Vol. 6, No. 1, pp. 30-39.
[19] Akbani, R., Kwek, S. and Japkowicz, N. (2004), "Applying support vector machines to imbalanced datasets," Proceedings of the European Conference on Machine Learning, pp. 39-50.
[20] Tan, P.N. (2006), "Introduction to Data Mining," Pearson Education India.
[21] Therneau, T.M., Atkinson, B. and Ripley, B. (2010), "The rpart package."
[22] Alfaro, E., Gamez, M. and Garcia, N. (2013), "adabag: An R package for classification with boosting and bagging," Journal of Statistical Software, Vol. 54, No. 2, pp. 1-35.
[23] Liaw, A. and Wiener, M. (2015), "Package randomForest."
[24] Chawla, N.V., Lazarevic, A., Hall, L.O. and Bowyer, K.W. (2003), "SMOTEBoost: improving prediction of the minority class in boosting," Proceedings of the European Conference on Principles of Data Mining and Knowledge Discovery, Springer, Berlin Heidelberg, pp. 107-119.
[25] Seiffert, C., Khoshgoftaar, T.M., Van Hulse, J. and Napolitano, A. (2010), "RUSBoost: a hybrid approach to alleviating class imbalance," IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, Vol. 40, No. 1, pp. 185-197.
[26] Chawla, N.V., Bowyer, K.W., Hall, L.O. and Kegelmeyer, W.P. (2002), "SMOTE: synthetic minority over-sampling technique," Journal of Artificial Intelligence Research, Vol. 16, pp. 321-357.
[27] Chen, Y.W. and Lin, C.J. (2006), "Combining SVMs with various feature selection strategies," in Feature Extraction, Springer, Berlin Heidelberg, pp. 315-324.
[28] 邱俊仁, 程俊傑, 吳鋼治, 邱浩彰, 楊國卿 and 侯勝茂 (2012), "The effectiveness of in-hospital cardiopulmonary resuscitation (CPR)," 台灣醫學, Vol. 16, No. 1, pp. 34-39.
[29] 高靖翔 (2008), "A comparison and empirical study of classification methods for multinomial data," Master's thesis, Institute of Statistics, National Chengchi University (政治大學).
[30] 林承鋐 (2016), "Rule extraction for support vector machines using gene expression programming," Master's thesis, Department of Industrial Engineering and Engineering Management, National Tsing Hua University (清華大學).
[31] Swiniarski, R.W. and Skowron, A. (2003), "Rough set methods in feature selection and recognition," Pattern Recognition Letters, Vol. 24, No. 6, pp. 833-849.
[32] Samb, M.L. (2012), "A novel RFE-SVM-based feature selection approach for classification," International Journal of Advanced Science and Technology, Vol. 43, pp. 27-36.
[33] Su, C.T. and Li, T.S. (2002), "Neural and MTS algorithms for feature selection," Asian Journal on Quality, Vol. 3, No. 2, pp. 113-131.
[34] Yang, J. and Honavar, V. (1998), "Feature subset selection using a genetic algorithm," IEEE Intelligent Systems, Vol. 2, pp. 44-49.