[1] B. Baker, O. Gupta, N. Naik, and R. Raskar. Designing neural network architectures using reinforcement learning. In International Conference on Learning Representations, 2017.
[2] G. Huang, S. Liu, L. van der Maaten, and K. Q. Weinberger. CondenseNet: An efficient DenseNet using learned group convolutions. arXiv preprint arXiv:1711.09224, 2017.
[3] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
[4] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[5] S. Liu. CondenseNet: Light-weighted CNN for mobile devices, 2017.
[6] J. Peters and S. Schaal. Policy gradient methods for robotics. In 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 2219–2225, 2006.
[7] S. Tartakovsky, S. Clark, and M. McCourt. Deep learning hyperparameter optimization with competing objectives, 2017.
[8] TensorFlow. Convolutional neural networks, 2018.
[9] T. Elsken, J. H. Metzen, and F. Hutter. Multi-objective architecture search for CNNs. arXiv preprint arXiv:1804.09081, 2018.
[10] Y.-H. Kim, B. Reddy, S. Yun, and C. Seo. NEMO: Neuro-evolution with multiobjective optimization of deep neural network for speed and accuracy. In ICML 2017 AutoML Workshop, 2017.
[11] B. Zoph and Q. V. Le. Neural architecture search with reinforcement learning. In International Conference on Learning Representations, 2017.
[12] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le. Learning transferable architectures for scalable image recognition. arXiv preprint arXiv:1707.07012, 2017.