[1] Michal Aharon, Michael Elad, and Alfred Bruckstein. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing, 54(11):4311–4322, 2006.
[2] Chenglong Bao, Jian-Feng Cai, and Hui Ji. Fast sparsity-based orthogonal dictionary learning for image restoration. In Proceedings of the IEEE International Conference on Computer Vision, pages 3384–3391, 2013.
[3] Richard Baraniuk, Mark Davenport, Ronald DeVore, and Michael Wakin. A simple proof of the restricted isometry property for random matrices. Constructive Approximation, 28(3):253–263, 2008.
[4] Alfred M Bruckstein, David L Donoho, and Michael Elad. From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Review, 51(1):34–81, 2009.
[5] Kurt Bryan and Tanya Leise. Making do with less: An introduction to compressed sensing. SIAM Review, 55(3):547–566, 2013.
[6] Christopher JC Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):121–167, 1998.
[7] Robert Calderbank and Sina Jafarpour. Finding needles in compressed haystacks. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3441–3444. IEEE, 2012.
[8] Robert Calderbank, Sina Jafarpour, and Robert Schapire. Compressed learning: Universal sparse dimensionality reduction and learning in the measurement domain. Preprint, 2009.
[9] Emmanuel J Candès and Terence Tao. Decoding by linear programming. IEEE Transactions on Information Theory, 51(12):4203–4215, 2005.
[10] Chien-Chung Chang and Yuh-Jye Lee. Generating the reduced set by systematic sampling. In IDEAL, pages 720–725. Springer, 2004.
[11] Li-Jen Chien, Chien-Chung Chang, Yuh-Jye Lee, et al. Variant methods of reduced set selection for reduced support vector machines. Journal of Information Science and Engineering, 26(1):183–196, 2010.
[12] Adam Coates and Andrew Y Ng. Learning feature representations with k-means. In Neural Networks: Tricks of the Trade, pages 561–580. Springer, 2012.
[13] Sanjoy Dasgupta and Anupam Gupta. An elementary proof of a theorem of Johnson and Lindenstrauss. Random Structures & Algorithms, 22(1):60–65, 2003.
[14] David L Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, 2006.
[15] David L Donoho and Michael Elad. Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization. Proceedings of the National Academy of Sciences, 100(5):2197–2202, 2003.
[16] Bradley Efron, Trevor Hastie, Iain Johnstone, Robert Tibshirani, et al. Least angle regression. The Annals of Statistics, 32(2):407–499, 2004.
[17] Kjersti Engan, Sven Ole Aase, and J Hakon Husoy. Method of optimal directions for frame design. In 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), volume 5, pages 2443–2446. IEEE, 1999.
[18] Elaine T Hale, Wotao Yin, and Yin Zhang. A fixed-point continuation method for ℓ1-regularized minimization with applications to compressed sensing. Technical Report CAAM TR07-07, Rice University, 2007.
[19] Lih-Ren Jen and Yuh-Jye Lee. Clustering model selection for reduced support vector machines. In IDEAL, pages 714–719. Springer, 2004.
[20] Honglak Lee, Alexis Battle, Rajat Raina, and Andrew Y Ng. Efficient sparse coding algorithms. In Advances in Neural Information Processing Systems, pages 801–808, 2007.
[21] Yuh-Jye Lee and Su-Yun Huang. Reduced support vector machines: A statistical theory. IEEE Transactions on Neural Networks, 18(1):1–13, 2007.
[22] Yuh-Jye Lee, Hung-Yi Lo, and Su-Yun Huang. Incremental reduced support vector machines. In International Conference on Informatics, Cybernetics and Systems (ICICS 2003), 2003.
[23] Yuh-Jye Lee and Olvi L Mangasarian. RSVM: Reduced support vector machines. In Proceedings of the 2001 SIAM International Conference on Data Mining, pages 1–17. SIAM, 2001.
[24] Yuh-Jye Lee and Olvi L Mangasarian. SSVM: A smooth support vector machine for classification. Computational Optimization and Applications, 20(1):5–22, 2001.
[25] Yuh-Jye Lee, Yi-Ren Yeh, and Hsing-Kuo Pao. Introduction to support vector machines and their applications in bankruptcy prognosis. In Handbook of Computational Finance, pages 731–761. Springer, 2012.
[26] Kuan-Ming Lin and Chih-Jen Lin. A study on reduced support vector machines. IEEE Transactions on Neural Networks, 14(6):1449–1459, 2003.
[27] Julien Mairal, Francis Bach, and Jean Ponce. Task-driven dictionary learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(4):791–804, 2012.
[28] Julien Mairal, Francis Bach, Jean Ponce, and Guillermo Sapiro. Online dictionary learning for sparse coding. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 689–696. ACM, 2009.
[29] Julien Mairal, Francis Bach, Jean Ponce, and Guillermo Sapiro. Online learning for matrix factorization and sparse coding. Journal of Machine Learning Research, 11(Jan):19–60, 2010.
[30] Julien Mairal, Francis Bach, Jean Ponce, Guillermo Sapiro, and Andrew Zisserman. Discriminative learned dictionaries for local image analysis. In 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1–8. IEEE, 2008.
[31] Stéphane Mallat. A Wavelet Tour of Signal Processing. Academic Press, 1999.
[32] Olvi L Mangasarian. Generalized support vector machines. In Advances in Neural Information Processing Systems, pages 135–146, 1999.
[33] Bruno A Olshausen and David J Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23):3311–3325, 1997.
[34] Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y Ng. Self-taught learning: Transfer learning from unlabeled data. In Proceedings of the 24th Annual International Conference on Machine Learning, pages 759–766. ACM, 2007.
[35] Alex J Smola and Bernhard Schölkopf. Learning with Kernels. GMD-Forschungszentrum Informationstechnik, 1998.
[36] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), pages 267–288, 1996.
[37] Joel A Tropp and Anna C Gilbert. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Transactions on Information Theory, 53(12):4655–4666, 2007.
[38] Joel A Tropp and Stephen J Wright. Computational methods for sparse solution of linear inverse problems. Proceedings of the IEEE, 98(6):948–958, 2010.
[39] Vladimir Vapnik. The Nature of Statistical Learning Theory. Springer Science & Business Media, 2013.
[40] Christopher KI Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems, pages 682–688, 2001.
[41] John Wright, Allen Y Yang, Arvind Ganesh, S Shankar Sastry, and Yi Ma. Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2):210–227, 2009.