Amaldi, E. and Kann, V. (1998). On the approximability of minimizing nonzero variables or unsatisfied relations in linear systems. Theoretical Computer Science, 209(1), 237–260.

Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5–32.

Breiman, L., Friedman, J., Olshen, R. and Stone, C. (1984). Classification and Regression Trees. Wadsworth Advanced Books and Software, Belmont, CA.

Chen, T. and Guestrin, C. (2016). XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794.

Chen, T. and He, T. (2015). Higgs boson discovery with boosted trees. NIPS Workshop on High-Energy Physics and Machine Learning, 69–80.

Chen, Y.-L., Dai, C.-S. and Ing, C.-K. (2019). High-dimensional model selection via Chebyshev greedy algorithms. Working paper.

Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 1348–1360.

Hastie, T., Tibshirani, R. and Friedman, J. H. (2009). The Elements of Statistical Learning, 2nd ed. Springer, New York.

Ing, C.-K. and Lai, T. L. (2011). A stepwise regression method and consistent model selection for high-dimensional sparse linear models. Statistica Sinica, 21, 1473–1513.

Lin, S. C. (2018). High-dimensional location-dispersion models with application to root cause analysis in wafer fabrication processes. Master's thesis, Institute of Statistics, National Tsing Hua University, Hsinchu. Retrieved from https://hdl.handle.net/11296/zh226d

Liu, B., Wei, Y., Zhang, Y. and Yang, Q. (2017). Deep neural networks for high dimension, low sample size data. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, 2287–2293.

Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 4768–4777.

Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), 267–288.

Xu, Z., Huang, G., Weinberger, K. Q. and Zheng, A. X. (2014). Gradient boosted feature selection. Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 522–531.

Yamada, M., Jitkrittum, W., Sigal, L., Xing, E. P. and Sugiyama, M. (2014). High-dimensional feature selection by feature-wise kernelized lasso. Neural Computation, 26(1), 185–207.

Zou, H. (2006). The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 101(476), 1417–1429.