Hyperparameter Optimization
Hyperparameter Optimization (HPO) aims to find a well-performing hyperparameter configuration for a given machine learning pipeline on the dataset at hand, covering the choice of machine learning model, its hyperparameters and other data-processing steps. Thus, HPO frees the human expert from a tedious and error-prone manual tuning process.
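As a minimal illustration of the HPO setting, the sketch below runs random search, the simplest baseline, over a small configuration space. The objective function, its hyperparameters and their ranges are hypothetical stand-ins for the expensive train-and-validate step described above.

```python
import random

def validation_loss(config):
    # Hypothetical stand-in for training a model and measuring its
    # validation loss; in practice this is the expensive step.
    lr, depth = config["lr"], config["depth"]
    return (lr - 0.1) ** 2 + 0.05 * abs(depth - 6)

def random_search(n_trials, seed=0):
    # Simplest HPO baseline: sample configurations at random and
    # keep the best one observed so far.
    rng = random.Random(seed)
    best_config, best_loss = None, float("inf")
    for _ in range(n_trials):
        config = {"lr": rng.uniform(1e-4, 1.0),
                  "depth": rng.randint(1, 12)}
        loss = validation_loss(config)
        if loss < best_loss:
            best_config, best_loss = config, loss
    return best_config, best_loss

best, loss = random_search(200)
```

More sample-efficient strategies, such as the Bayesian Optimization described below, replace the blind sampling loop with a model-guided choice of the next configuration.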

Bayesian Optimization
The loss landscape of an HPO problem is typically unknown (i.e., we need to solve a black-box function) and expensive to evaluate. Bayesian Optimization (BO) is a global optimization strategy designed for such expensive black-box functions. BO first estimates the shape of the target loss landscape with a surrogate model and then suggests the configuration to be evaluated in the next iteration. By trading off exploitation and exploration based on the surrogate model, it is well known for its sample efficiency.
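The loop below sketches one common instantiation of this idea, assuming a Gaussian-process surrogate (via scikit-learn) and the expected-improvement acquisition function on a hypothetical 1-d black-box objective; real BO implementations differ in surrogate, acquisition and its optimizer.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):
    # Hypothetical expensive black-box objective (1-d for illustration).
    return np.sin(3 * x) + 0.5 * x ** 2

def expected_improvement(mu, sigma, best):
    # EI trades off exploitation (low predicted mean) against
    # exploration (high predictive uncertainty).
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(3, 1))            # initial design
y = f(X).ravel()
grid = np.linspace(-2, 2, 400).reshape(-1, 1)  # candidate configurations

for _ in range(15):
    # Fit the surrogate model of the loss landscape ...
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    # ... and evaluate the configuration that maximizes EI next.
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next)[0])
```

After a handful of model-guided evaluations, the incumbent `y.min()` is typically close to the global minimum, far fewer evaluations than a grid or random search would need at comparable precision.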

Combined Algorithm Selection and Hyperparameter Optimization (CASH)
An AutoML system needs to select not only the optimal hyperparameter configuration of a given model, but also which model to use in the first place. This problem can be regarded as a single HPO problem with a hierarchical configuration space, in which a top-level hyperparameter decides which algorithm to choose and all other hyperparameters are conditional on this choice. To handle such complex and structured configuration spaces, we use, for example, random forests as surrogate models in Bayesian Optimization.
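A hierarchical configuration space of this kind can be sketched as follows; the two algorithms, their hyperparameters and ranges are illustrative assumptions, and the sampler makes the conditionality explicit: which hyperparameters exist depends on the top-level algorithm choice.

```python
import math
import random

# Hypothetical CASH space: the top-level choice "algorithm" determines
# which conditional hyperparameters are active.
SPACE = {
    "svm": {"C": (1e-3, 1e3)},                # sampled log-uniformly
    "random_forest": {"n_estimators": (10, 500)},
}

def sample_config(rng):
    algo = rng.choice(sorted(SPACE))
    if algo == "svm":
        # Only the SVM branch has a regularization constant C.
        C = math.exp(rng.uniform(math.log(1e-3), math.log(1e3)))
        return {"algorithm": "svm", "C": C}
    # Only the random-forest branch has n_estimators.
    return {"algorithm": "random_forest",
            "n_estimators": rng.randint(10, 500)}

rng = random.Random(0)
configs = [sample_config(rng) for _ in range(20)]
```

An HPO method that respects this structure, e.g. BO with a random-forest surrogate, then optimizes over the algorithm choice and its conditional hyperparameters jointly rather than tuning each algorithm in isolation.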

Multi-Fidelity HPO
Increasing data sizes and model complexity make it even harder to find a reasonable configuration within a limited computational or time budget. Multi-fidelity techniques approximate the true value of an expensive black-box function with a cheap (possibly noisy) proxy evaluation and thus increase the efficiency of HPO approaches substantially. For example, we can evaluate a configuration on a small subset of the dataset or train a DNN for only a few epochs.
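A standard way to exploit such cheap proxies is successive halving, sketched below: all candidates start at a low fidelity (here, a hypothetical data fraction with fidelity-dependent noise), and only the better half survives to the next, more expensive rung.

```python
import random

def fidelity_loss(config, fraction, rng):
    # Hypothetical cheap proxy: evaluating on a fraction of the data
    # gives a noisy estimate of the full-data loss; the smaller the
    # fraction, the noisier the estimate.
    true_loss = (config - 0.3) ** 2
    noise = rng.gauss(0, 0.2 * (1 - fraction))
    return true_loss + noise

def successive_halving(configs, rng, eta=2, min_fraction=0.125):
    # Evaluate all configs on a small budget; repeatedly keep the
    # better half and double the data fraction for the survivors.
    fraction = min_fraction
    while len(configs) > 1 and fraction <= 1.0:
        scored = sorted(configs,
                        key=lambda c: fidelity_loss(c, fraction, rng))
        configs = scored[: max(1, len(configs) // eta)]
        fraction *= eta
    return configs[0]

rng = random.Random(1)
candidates = [rng.uniform(0, 1) for _ in range(16)]
best = successive_halving(candidates, rng)
```

Most of the total budget is thus spent on promising configurations, while poor ones are discarded after only a cheap low-fidelity evaluation.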

HPO Benchmarks
Evaluation of AutoML, and especially of HPO, faces many challenges. For example, many repeated runs of HPO can be computationally expensive, the benchmarks can be fairly noisy, and it is often not clear which benchmarks are representative of typical HPO applications. Therefore, we develop HPO benchmark collections that improve reproducibility and decrease the computational burden on researchers.

Neural Architecture Search
Neural Architecture Search (NAS) automates the design of neural network architectures. NAS approaches optimize the topology of the network, including how to connect nodes and which operators to choose. User-defined optimization metrics can include accuracy, model size or inference time, so as to arrive at an optimal architecture for a specific application. Due to the extremely large search space, traditional evolutionary or reinforcement-learning-based AutoML algorithms tend to be computationally expensive. Hence, recent research on the topic has focused on more efficient approaches to NAS. In particular, recently developed gradient-based and multi-fidelity methods provide a promising path and have boosted research in these directions. Our group has been very active in developing state-of-the-art NAS methods and has been at the forefront of driving NAS research forward.
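To make the notion of a NAS search space concrete, the toy sketch below encodes an architecture as a choice of operator per edge of a cell and runs random search against a hypothetical proxy score; the operator set, the cell size and the scoring function are all illustrative assumptions, not any particular NAS benchmark.

```python
import random

# Hypothetical cell-based search space: each of 4 edges in a cell
# picks one operator, so an architecture is a tuple of 4 choices.
OPS = ["conv3x3", "conv5x5", "max_pool", "skip_connect"]

def proxy_score(arch, rng):
    # Stand-in for briefly training the architecture and measuring
    # validation accuracy (the expensive step in real NAS).
    base = sum(op == "conv3x3" for op in arch) * 0.1
    return base + rng.gauss(0, 0.01)

rng = random.Random(0)
best_arch, best_score = None, float("-inf")
for _ in range(50):
    arch = tuple(rng.choice(OPS) for _ in range(4))
    score = proxy_score(arch, rng)
    if score > best_score:
        best_arch, best_score = arch, score
```

Even this tiny space has 4^4 = 256 architectures; realistic cell-based spaces are many orders of magnitude larger, which is why gradient-based and multi-fidelity methods are needed to make the search tractable.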

Algorithm Configuration
The algorithm configuration problem is to determine a well-performing parameter configuration of a given algorithm across a given set of problem instances.
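The key difference to plain HPO is the "across a set of instances" part: a configuration is judged by its aggregate performance over the whole instance set, as in the small sketch below (the runtime model and instance values are hypothetical).

```python
import statistics

def runtime(config, instance):
    # Hypothetical runtime of an algorithm (e.g., a solver) with
    # parameter value `config` on a problem `instance`.
    return abs(config - instance) + 0.1 * instance

instances = [0.2, 0.5, 0.8]              # training instances
candidates = [0.1 * k for k in range(11)]  # candidate configurations

# Pick the configuration with the best *average* performance across
# the whole instance set, not on any single instance.
best = min(candidates,
           key=lambda c: statistics.mean(runtime(c, i) for i in instances))
```

Here the winner is a compromise configuration: it is optimal for none of the three instances individually but minimizes the mean runtime over all of them.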

Dynamic Algorithm Configuration
Hyperparameter optimization is a powerful approach to achieve the best performance on many different problems. However, classical approaches ignore the iterative nature of many algorithms. Dynamic algorithm configuration (DAC) generalizes over prior optimization approaches and is also capable of handling hyperparameters that need to be adjusted over multiple timesteps. To make full use of this framework, we need to move from the classical view of an algorithm as a black box to a grey-box or even white-box view, unleashing the full potential of AI algorithms with DAC.
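The toy sketch below contrasts this with static tuning: instead of fixing a step size up front, a simple hand-written policy adjusts it at every iteration based on the algorithm's observed state (in DAC research such policies are typically learned, e.g. via reinforcement learning; the policy, the algorithm and its state here are illustrative assumptions).

```python
def dac_policy(state):
    # Hypothetical dynamic policy: halve the step size whenever the
    # last step failed to improve, i.e. the hyperparameter is adapted
    # over the run instead of being fixed in advance.
    step, improved = state
    return step if improved else step * 0.5

def optimize(x0, steps=30):
    # Toy iterative algorithm (gradient descent on f(x) = x^2) whose
    # step size is controlled online by the policy at each timestep.
    x, step = x0, 1.0
    prev = x * x
    for _ in range(steps):
        x = x - step * 2 * x          # gradient step on f(x) = x^2
        improved = x * x < prev
        step = dac_policy((step, improved))
        prev = x * x
    return x

x_final = optimize(3.0)
```

The white-box aspect is that the policy observes the algorithm's internal progress (whether the loss improved) rather than treating the run as a single opaque evaluation.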