Publication details

HPOBench

A Collection of Reproducible Multi-Fidelity Benchmark Problems for HPO

Authored by
Katharina Eggensperger, Philipp Müller, Neeratyoy Mallik, Matthias Feurer, René Sass, Noor Awad, Marius Lindauer, Frank Hutter
Abstract

To achieve peak predictive performance, hyperparameter optimization (HPO) is a crucial component of machine learning and its applications. In recent years, the number of efficient algorithms and tools for HPO has grown substantially. At the same time, the community still lacks realistic, diverse, computationally cheap, and standardized benchmarks. This is especially the case for multi-fidelity HPO methods. To close this gap, we propose HPOBench, which includes 7 existing and 5 new benchmark families, with a total of more than 100 multi-fidelity benchmark problems. HPOBench makes it possible to run this extensible set of multi-fidelity HPO benchmarks in a reproducible way by isolating and packaging the individual benchmarks in containers. It also provides surrogate and tabular benchmarks for computationally affordable yet statistically sound evaluations. To demonstrate HPOBench's broad compatibility with various optimization tools, as well as its usefulness, we conduct an exemplary large-scale study evaluating 13 optimizers from 6 optimization tools. We provide HPOBench here: github.com/automl/HPOBench.
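
The abstract states that the individual benchmarks are isolated and packaged in containers behind a common interface. A minimal sketch of how such a containerized benchmark might be queried is given below; the module path, class name, constructor arguments, and fidelity key are assumptions based on the repository README at github.com/automl/HPOBench and should be checked there before use.

```python
# Hypothetical usage sketch for a containerized HPOBench benchmark.
# Exact module paths and argument names may differ; see the repository README.
from hpobench.container.benchmarks.ml.tabular_benchmark import TabularBenchmark

# Instantiate one benchmark from a tabular benchmark family (arguments illustrative).
benchmark = TabularBenchmark(model='lr', task_id=167149)

# Sample a configuration from the benchmark's search space.
config = benchmark.get_configuration_space(seed=1).sample_configuration()

# Evaluate the configuration at a chosen fidelity; the result is a dictionary
# containing at least the objective value and the evaluation cost.
result = benchmark.objective_function(configuration=config,
                                      fidelity={"iter": 100},
                                      rng=1)
print(result["function_value"], result["cost"])
```

Because the benchmark runs inside a container, the same call is intended to yield reproducible results across machines, which is the property the abstract highlights.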

Organisational unit(s)
Forschungszentrum L3S
Fachgebiet Maschinelles Lernen
Institut für Informationsverarbeitung
External organisation(s)
Albert-Ludwigs-Universität Freiburg
Bosch Center for Artificial Intelligence (BCAI)
Type
Article in conference proceedings
Number of pages
36
Publication date
2021
Publication status
Electronically published (e-pub)
Peer-reviewed
Yes
Electronic version(s)
https://arxiv.org/abs/2109.06716 (Access: Open)
https://openreview.net/forum?id=1k4rJYEwda- (Access: Open)