[1] Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search. In Proc. Int. Conf. Learning Representation (ICLR), May 2019.
[2] Xin Chen, Lingxi Xie, Jun Wu, and Qi Tian. Progressive differentiable architecture search: Bridging the depth gap between search and evaluation. In Proc. IEEE Int. Conf. Computer Vision (ICCV), October 2019.
[3] Yuhui Xu, Lingxi Xie, Xiaopeng Zhang, Xin Chen, Guo-Jun Qi, Qi Tian, and Hongkai Xiong. PC-DARTS: Partial channel connections for memory-efficient architecture search. In Proc. Int. Conf. Learning Representation (ICLR), April 2020.
[4] Han Cai, Ligeng Zhu, and Song Han. ProxylessNAS: Direct neural architecture search on target task and hardware. In Proc. Int. Conf. Learning Representation (ICLR), May 2019.
[5] Sirui Xie, Hehui Zheng, Chunxiao Liu, and Liang Lin. SNAS: Stochastic neural architecture search. In Proc. Int. Conf. Learning Representation (ICLR), May 2019.
[6] B. Zoph and Q. V. Le. Neural architecture search with reinforcement learning. In Proc. Int. Conf. Learning Representation (ICLR), April 2017.
[7] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le. Learning transferable architectures for scalable image recognition. In Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 8697–8710, June 2018.
[8] Renqian Luo, Fei Tian, Tao Qin, Enhong Chen, and Tie-Yan Liu. Neural architecture optimization. In Advances in Neural Information Processing Systems (NeurIPS), pages 7816–7827, December 2018.
[9] H. Pham, M. Y. Guan, B. Zoph, Q. V. Le, and J. Dean. Efficient neural architecture search via parameter sharing. In Proc. Int. Conf. Machine Learning (ICML), pages 4092–4101, July 2018.
[10] A. Brock, T. Lim, J. M. Ritchie, and N. Weston. SMASH: One-shot model architecture search through hypernetworks. In Proc. Int. Conf. Learning Representation (ICLR), May 2018.
[11] K. Kandasamy, W. Neiswanger, J. Schneider, B. Poczos, and E. Xing. Neural architecture search with Bayesian optimisation and optimal transport. In Advances in Neural Information Processing Systems (NeurIPS), pages 2020–2029, December 2018.
[12] Hongpeng Zhou, Minghao Yang, Jun Wang, and Wei Pan. BayesNAS: A Bayesian approach for neural architecture search. In Proc. Int. Conf. Machine Learning (ICML), pages 7603–7613, June 2019.
[13] Mingxing Tan, Bo Chen, Ruoming Pang, V. Vasudevan, M. Sandler, A. Howard, and Q. V. Le. MnasNet: Platform-aware neural architecture search for mobile. In Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 2820–2828, June 2019.
[14] Jieru Mei, Yingwei Li, Xiaochen Lian, Xiaojie Jin, Linjie Yang, A. Yuille, and Jianchao Yang. AtomNAS: Fine-grained end-to-end neural architecture search. In Proc. Int. Conf. Learning Representation (ICLR), April 2020.
[15] G. Bender, P.-J. Kindermans, B. Zoph, V. Vasudevan, and Q. V. Le. Understanding and simplifying one-shot architecture search. In Proc. Int. Conf. Machine Learning (ICML), pages 549–558, July 2018.
[16] E. Real, A. Aggarwal, Y. Huang, and Q. V. Le. Regularized evolution for image classifier architecture search. In Association for the Advancement of Artificial Intelligence (AAAI), pages 4780–4789, January 2019.
[17] Hanxiao Liu, K. Simonyan, O. Vinyals, C. Fernando, and K. Kavukcuoglu. Hierarchical representations for efficient architecture search. In Proc. Int. Conf. Learning Representation (ICLR), April 2018.
[18] Lingxi Xie and A. Yuille. Genetic CNN. In Proc. IEEE Int. Conf. Computer Vision (ICCV), pages 1379–1388, October 2017.
[19] E. Real, S. Moore, A. Selle, S. Saxena, Y. L. Suematsu, Jie Tan, Q. V. Le, and A. Kurakin. Large-scale evolution of image classifiers. In Proc. Int. Conf. Machine Learning (ICML), pages 2902–2911, August 2017.
[20] T. Elsken, J. H. Metzen, and F. Hutter. Efficient multi-objective neural architecture search via Lamarckian evolution. In Proc. Int. Conf. Learning Representation (ICLR), May 2019.
[21] Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, and Cheng-Lin Liu. Practical block-wise neural network architecture generation. In Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 2423–2432, June 2018.
[22] B. Baker, O. Gupta, N. Naik, and R. Raskar. Designing neural network architectures using reinforcement learning. In Proc. Int. Conf. Learning Representation (ICLR), April 2017.
[23] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical Report, Citeseer, 2009.
[24] L. Li and A. Talwalkar. Random search and reproducibility for neural architecture search. In Proc. Conf. Uncertainty in Artificial Intelligence (UAI), page 129, July 2019.
[25] A. Zela, T. Elsken, T. Saikia, Y. Marrakchi, T. Brox, and F. Hutter. Understanding and robustifying differentiable architecture search. In Proc. Int. Conf. Learning Representation (ICLR), April 2020.
[26] Chenxi Liu, B. Zoph, M. Neumann, J. Shlens, Wei Hua, Li-Jia Li, Fei-Fei Li, A. L. Yuille, J. Huang, and K. Murphy. Progressive neural architecture search. In Proc. Eur. Conf. Computer Vision (ECCV), pages 19–35, September 2018.
[27] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research (JMLR), 15:1929–1958, 2014.
[28] I. Loshchilov and F. Hutter. SGDR: Stochastic gradient descent with warm restarts. arXiv:1608.03983, August 2016.
[29] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, December 2014.
[30] T. DeVries and G. W. Taylor. Improved regularization of convolutional neural networks with cutout. arXiv:1708.04552, August 2017.
[31] Xiangxiang Chu, Tianbao Zhou, Bo Zhang, and Jixiang Li. Fair DARTS: Eliminating unfair advantages in differentiable architecture search. arXiv:1911.12126, November 2019.