Journal article

Greed Is Good: Exploration and Exploitation Trade-offs in Bayesian Optimisation

George De Ath, Richard M. Everson, Jonathan E. Fieldsend, Alma Rahat

ACM Transactions on Evolutionary Learning and Optimization, Volume: 1, Issue: 1, Pages: 1 - 22

Swansea University Author: Alma Rahat

DOI (Published version): 10.1145/3425501

Published in: ACM Transactions on Evolutionary Learning and Optimization
ISSN: 2688-299X, 2688-3007
Published: Association for Computing Machinery (ACM), 2021

URI: https://cronfa.swan.ac.uk/Record/cronfa55241
Abstract: The performance of acquisition functions for Bayesian optimisation to locate the global optimum of continuous functions is investigated in terms of the Pareto front between exploration and exploitation. We show that Expected Improvement (EI) and the Upper Confidence Bound (UCB) always select solutions to be expensively evaluated on the Pareto front, but Probability of Improvement is not guaranteed to do so, and Weighted Expected Improvement does so only for a restricted range of weights. We introduce two novel ϵ-greedy acquisition functions. Extensive empirical evaluation of these, together with random search, purely exploratory search, and purely exploitative search on 10 benchmark problems in 1 to 10 dimensions, shows that ϵ-greedy algorithms are generally at least as effective as conventional acquisition functions (e.g. EI and UCB), particularly with a limited budget. In higher dimensions, ϵ-greedy approaches are shown to have improved performance over conventional approaches. These results are borne out on a real-world computational fluid dynamics optimisation problem and a robotics active learning problem. Our analysis and experiments suggest that the most effective strategy, particularly in higher dimensions, is to be mostly greedy, occasionally selecting a random exploratory solution.
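The "mostly greedy, occasionally random" strategy described in the abstract can be illustrated with a minimal sketch. This is a hypothetical simplification for illustration only, not the paper's exact ϵ-greedy acquisition functions: `candidates`, `surrogate_mean`, and `epsilon_greedy_select` are names invented here, and the surrogate posterior mean stands in for a full Gaussian-process model.

```python
import random

def epsilon_greedy_select(candidates, surrogate_mean, epsilon=0.1, rng=random):
    """Pick the next point to expensively evaluate.

    With probability epsilon, choose a random candidate (exploration);
    otherwise choose the candidate with the lowest surrogate posterior
    mean (greedy exploitation, assuming minimisation).
    """
    if rng.random() < epsilon:
        return rng.choice(candidates)
    return min(candidates, key=surrogate_mean)

# Example: candidates on a 1-D grid, with a toy surrogate mean.
candidates = [0.0, 0.5, 1.0, 1.5, 2.0]
mean = lambda x: (x - 2.0) ** 2  # toy posterior mean, minimised at x = 2

greedy_pick = epsilon_greedy_select(candidates, mean, epsilon=0.0)  # → 2.0
```

Setting `epsilon=0` recovers purely exploitative search and `epsilon=1` recovers random search, the two baselines the paper compares against; small positive values give the mostly-greedy behaviour the abstract recommends.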
Item Description: Supplemental Material available as a zip file from acm.org via https://dl.acm.org/doi/10.1145/3425501
College: Faculty of Science and Engineering
Issue: 1
Start Page: 1
End Page: 22