[1] Hyunghun Cho, Yongjin Kim, Eunjung Lee, Daeyoung Choi, Yongjae Lee, and Wonjong Rhee. Basic enhancement strategies when using Bayesian optimization for hyperparameter tuning of deep neural networks. IEEE Access, 8:52588–52608, 2020.
[2] Shenghong Ju, Takuma Shiga, Lei Feng, Zhufeng Hou, Koji Tsuda, and Junichiro Shiomi. Designing nanostructures for phonon transport via Bayesian optimization. Physical Review X, 7(2):021024, 2017.
[3] Alonso Marco, Felix Berkenkamp, Philipp Hennig, Angela P. Schoellig, Andreas Krause, Stefan Schaal, and Sebastian Trimpe. Virtual vs. real: Trading off simulations and physical experiments in reinforcement learning with Bayesian optimization. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 1557–1563. IEEE, 2017.
[4] Peter I. Frazier. A tutorial on Bayesian optimization. arXiv preprint arXiv:1807.02811, 2018.
[5] Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In International Conference on Learning and Intelligent Optimization, pages 507–523. Springer, 2011.
[6] Jost Tobias Springenberg, Aaron Klein, Stefan Falkner, and Frank Hutter. Bayesian optimization with robust Bayesian neural networks. Advances in Neural Information Processing Systems, 29, 2016.
[7] James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. Advances in Neural Information Processing Systems, 24, 2011.
[8] Stefan Falkner, Aaron Klein, and Frank Hutter. BOHB: Robust and efficient hyperparameter optimization at scale. In International Conference on Machine Learning, pages 1437–1446. PMLR, 2018.
[9] Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. Hyperband: A novel bandit-based approach to hyperparameter optimization. The Journal of Machine Learning Research, 18(1):6765–6816, 2017.
[10] Jacob Gardner, Chuan Guo, Kilian Weinberger, Roman Garnett, and Roger Grosse. Discovering and exploiting additive structure for Bayesian optimization. In Artificial Intelligence and Statistics, pages 1311–1319. PMLR, 2017.
[11] Kirthevasan Kandasamy, Jeff Schneider, and Barnabás Póczos. High dimensional Bayesian optimisation and bandits via additive models. In International Conference on Machine Learning, pages 295–304. PMLR, 2015.
[12] Paul Rolland, Jonathan Scarlett, Ilija Bogunovic, and Volkan Cevher. High-dimensional Bayesian optimization via additive models with overlapping groups. In International Conference on Artificial Intelligence and Statistics, pages 298–307. PMLR, 2018.
[13] ChangYong Oh, Efstratios Gavves, and Max Welling. BOCK: Bayesian optimization with cylindrical kernels. In International Conference on Machine Learning, pages 3868–3877. PMLR, 2018.
[14] Amin Nayebi, Alexander Munteanu, and Matthias Poloczek. A framework for Bayesian optimization in embedded subspaces. In International Conference on Machine Learning, pages 4752–4761. PMLR, 2019.
[15] Zi Wang, Clement Gehring, Pushmeet Kohli, and Stefanie Jegelka. Batched large-scale Bayesian optimization in high-dimensional spaces. In International Conference on Artificial Intelligence and Statistics, pages 745–754. PMLR, 2018.
[16] David Eriksson, Michael Pearce, Jacob Gardner, Ryan D. Turner, and Matthias Poloczek. Scalable global optimization via local Bayesian optimization. Advances in Neural Information Processing Systems, 32, 2019.
[17] Nikolaus Hansen. The CMA evolution strategy: A comparing review. Towards a New Evolutionary Computation, pages 75–102, 2006.
[18] Clément Chevalier and David Ginsbourger. Fast computation of the multi-points expected improvement with applications in batch selection. In International Conference on Learning and Intelligent Optimization, pages 59–69. Springer, 2013.
[19] Andreas Kirsch, Joost van Amersfoort, and Yarin Gal. BatchBALD: Efficient and diverse batch acquisition for deep Bayesian active learning. Advances in Neural Information Processing Systems, 32, 2019.
[20] José Miguel Hernández-Lobato, James Requeima, Edward O. Pyzer-Knapp, and Alán Aspuru-Guzik. Parallel and distributed Thompson sampling for large-scale accelerated exploration of chemical space. In International Conference on Machine Learning, pages 1470–1479. PMLR, 2017.
[21] Kirthevasan Kandasamy, Akshay Krishnamurthy, Jeff Schneider, and Barnabás Póczos. Parallelised Bayesian optimisation via Thompson sampling. In International Conference on Artificial Intelligence and Statistics, pages 133–142. PMLR, 2018.
[22] Nikolaus Hansen. The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772, 2016.
[23] Nikolaus Hansen, Sibylle D. Müller, and Petros Koumoutsakos. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evolutionary Computation, 11(1):1–18, 2003.
[24] Il'ya Meerovich Sobol'. On the distribution of points in a cube and the approximate evaluation of integrals. Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, 7(4):784–802, 1967.
[25] Ya-xiang Yuan. A review of trust region algorithms for optimization. In ICIAM, volume 99-1, pages 271–282, 2000.
[26] William R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3-4):285–294, 1933.
[27] Nikolaus Hansen, Youhei Akimoto, and Petr Baudis. CMA-ES/pycma on GitHub. Zenodo, https://github.com/CMA-ES/pycma, Feb 2019. DOI:10.5281/zenodo.2559634.
[28] Jacob Gardner, Geoff Pleiss, Kilian Q. Weinberger, David Bindel, and Andrew G. Wilson. GPyTorch: Blackbox matrix-matrix Gaussian process inference with GPU acceleration. Advances in Neural Information Processing Systems, 31, 2018.
[29] Kun Dong, David Eriksson, Hannes Nickisch, David Bindel, and Andrew G. Wilson. Scalable log determinants for Gaussian process kernel learning. Advances in Neural Information Processing Systems, 30, 2017.
[30] Marco Taboga. Marginal and conditional distributions of a multivariate normal vector. https://www.statlect.com/probability-distributions/multivariate-normal-distribution-partitioning, 2021.