[1] Dimitris Bertsimas, Ryan Cory-Wright, and Nicholas A. G. Johnson. Sparse plus low rank matrix decomposition: A discrete optimization approach, 2023.
[2] Jian-Feng Cai, Jingyang Li, and Dong Xia. Generalized low-rank plus sparse tensor estimation by fast Riemannian optimization, 2022.
[3] Emily Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation, 2014.
[4] Kailing Guo, Xiaona Xie, Xiangmin Xu, and Xiaofen Xing. Compressing by learning in a low-rank and sparse decomposition form. IEEE Access, 7:150823–150832, 2019.
[5] Song Han, Jeff Pool, Sharan Narang, Huizi Mao, Enhao Gong, Shijian Tang, Erich Elsen, Peter Vajda, Manohar Paluri, John Tran, Bryan Catanzaro, and William J. Dally. DSD: Dense-sparse-dense training for deep neural networks, 2017.
[6] Cole Hawkins, Haichuan Yang, Meng Li, Liangzhen Lai, and Vikas Chandra. Low-rank+sparse tensor compression for neural networks, 2021.
[7] Wenqi Huang, Ziwen Ke, Zhuo-Xu Cui, Jing Cheng, Zhilang Qiu, Sen Jia, Leslie Ying, Yanjie Zhu, and Dong Liang. Deep low-rank plus sparse network for dynamic MR imaging, 2021.
[8] Yerlan Idelbayev and Miguel A. Carreira-Perpinan. Low-rank compression of neural nets: Learning the rank of each layer. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8046–8056, 2020.
[9] Pavel Kaloshin. Convolutional neural networks compression with low rank and sparse tensor decompositions, 2020.
[10] Yong-Deok Kim, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, and Dongjun Shin. Compression of deep convolutional neural networks for fast and low power mobile applications, 2016.
[11] Lucas Liebenwein, Alaa Maalouf, Oren Gal, Dan Feldman, and Daniela Rus. Compressing neural networks: Towards determining the optimal layer-wise decomposition. CoRR, abs/2107.11442, 2021.
[12] Tao Lin, Sebastian U. Stich, Luis Barba, Daniil Dmitriev, and Martin Jaggi. Dynamic model pruning with feedback, 2020.
[13] Ricardo Otazo, Emmanuel Candès, and Daniel Sodickson. Low-rank plus sparse matrix decomposition for accelerated dynamic MRI with separation of background and dynamic components. Magnetic Resonance in Medicine, 73, April 2014.
[14] Miao Yin, Huy Phan, Xiao Zang, Siyu Liao, and Bo Yuan. BATUDE: Budget-aware neural network compression based on Tucker decomposition. Proceedings of the AAAI Conference on Artificial Intelligence, 36:8874–8882, June 2022.
[15] Xiyu Yu, Tongliang Liu, Xinchao Wang, and Dacheng Tao. On compressing deep models by low rank and sparse decomposition. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 67–76, 2017.
[16] Xiao Zhang, Lingxiao Wang, and Quanquan Gu. A unified framework for low-rank plus sparse matrix recovery, 2018.