Publication Details

Towards Self-Adjusting Weighted Expected Improvement for Bayesian Optimization

authored by
Carolin Benjamins, Elena Raponi, Anja Jankovic, Carola Doerr, Marius Lindauer
Abstract

In optimization, we often encounter expensive black-box problems with unknown problem structures. Bayesian Optimization (BO) is a popular, surrogate-assisted and thus sample-efficient approach for this setting. The BO pipeline itself is highly configurable, with many different design choices regarding the initial design, surrogate model and acquisition function (AF). Unfortunately, our understanding of how to select suitable components for a problem at hand is very limited. In this work, we focus on the choice of the AF, whose main purpose is to balance the trade-off between exploring regions with high uncertainty and those with high promise for good solutions. We propose Self-Adjusting Weighted Expected Improvement (SAWEI), where we let the exploration-exploitation trade-off self-adjust in a data-driven manner based on a convergence criterion for BO. On the BBOB functions of the COCO benchmark, our method performs favorably compared to handcrafted baselines and serves as a robust default choice for any problem structure. With SAWEI, we are a step closer to on-the-fly, data-driven and robust BO designs that automatically adjust their sampling behavior to the problem at hand.
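
For context, the weighted Expected Improvement referenced in the title is commonly written as a convex combination of an exploitation term and an exploration term; the sketch below follows the widespread formulation for minimization, and the abstract does not spell out the exact variant or the adjustment rule SAWEI applies to the weight.

WEI_\alpha(x) = \alpha \,\bigl(f^* - \mu(x)\bigr)\,\Phi(z) + (1 - \alpha)\,\sigma(x)\,\phi(z), \quad z = \frac{f^* - \mu(x)}{\sigma(x)}

Here f^* is the incumbent best objective value, \mu(x) and \sigma(x) are the surrogate's posterior mean and standard deviation, and \Phi, \phi denote the standard normal CDF and PDF. A weight \alpha close to 1 emphasizes exploitation, \alpha close to 0 emphasizes exploration, and \alpha = 0.5 recovers standard EI up to a constant factor; per the abstract, SAWEI lets this trade-off self-adjust during the run based on a convergence criterion for BO.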

Organisation(s)
Machine Learning Section
Institute of Artificial Intelligence
External Organisation(s)
Computer Lab of Paris 6 (Lip6)
Sorbonne Université
Centre national de la recherche scientifique (CNRS)
Type
Conference contribution
Publication date
2023
Publication status
Accepted/In press
Peer reviewed
Yes