Carolin Benjamins, M. Sc.
Address
Appelstraße 9a
30167 Hannover

I am driven by a love of automation and of making complex algorithms more accessible. My further interests include robotics, automated machine learning (AutoML), hyperparameter optimization (HPO), and in particular Bayesian optimization (BO), as well as reinforcement learning and meta-learning.

I am also one of the developers of our HPO package SMAC.

Research Interests

  • Dynamic Algorithm Configuration
  • Bayesian Optimization
  • Contextual Reinforcement Learning
  • Meta-Reinforcement Learning

Curriculum Vitae

  • Education

    since 2020: Doctoral Researcher at the Leibniz University Hannover

    2017 - 2020: M.Sc. Mechatronics & Robotics at the Leibniz University Hannover. Thesis: Fast, Advanced and Low User Effort Object Detection for Robotic Applications. Supervisor: Prof. Dr.-Ing. Tobias Ortmaier

    2014 - 2017: B.Sc. Mechatronics & Robotics at the Leibniz University Hannover. Thesis: Analysis of Neural Networks for Segmentation of Image Data. Supervisor: Prof. Dr.-Ing. Eduard Reithmeier

Publications


2022


Benjamins, C., Eimer, T., Schubert, F., Mohan, A., Biedenkapp, A., Rosenhahn, B., Hutter, F., & Lindauer, M. (2022). Contextualize Me -- The Case for Context in Reinforcement Learning.

doi.org/10.48550/arXiv.2202.04500

Benjamins, C., Raponi, E., Jankovic, A., van der Blom, K., Santoni, M. L., Lindauer, M., & Doerr, C. (2022). PI is back! Switching Acquisition Functions in Bayesian Optimization. In 2022 NeurIPS Workshop on Gaussian Processes, Spatiotemporal Modeling, and Decision-making Systems.

arxiv.org/abs/2211.01455

Benjamins, C., Jankovic, A., Raponi, E., van der Blom, K., Lindauer, M., & Doerr, C. (2022). Towards Automated Design of Bayesian Optimization via Exploratory Landscape Analysis. In 6th Workshop on Meta-Learning at NeurIPS 2022.

Lindauer, M., Eggensperger, K., Feurer, M., Biedenkapp, A., Deng, D., Benjamins, C., Sass, R., & Hutter, F. (2022). SMAC3: A Versatile Bayesian Optimization Package for Hyperparameter Optimization. Journal of Machine Learning Research.

arxiv.org/abs/2109.09831

Schubert, F., Benjamins, C., Döhler, S., Rosenhahn, B., & Lindauer, M. (2022). POLTER: Policy Trajectory Ensemble Regularization for Unsupervised Reinforcement Learning.

doi.org/10.48550/arXiv.2205.11357


2021


Benjamins, C., Eimer, T., Schubert, F., Biedenkapp, A., Rosenhahn, B., Hutter, F., & Lindauer, M. (2021). CARL: A Benchmark for Contextual and Adaptive Reinforcement Learning. In Workshop on Ecological Theory of Reinforcement Learning, NeurIPS 2021

arxiv.org/abs/2110.02102

Eimer, T., Benjamins, C., & Lindauer, M. T. (2021). Hyperparameters in Contextual RL are Highly Situational. In International Workshop on Ecological Theory of RL (at NeurIPS)


Projects

  • Dynamic Algorithm Configuration
    Since configurations should be chosen at runtime depending on the current state of the algorithm, dynamic algorithm configuration (DAC) can be viewed as a reinforcement learning (RL) problem in which, at each timestep, an agent selects the configuration to use based on the performance of the last step and the current algorithm state. On one hand this lets us apply powerful RL methods; on the other, RL brings its own challenges, such as instability, noise, and sample inefficiency, that need to be addressed in applications such as DAC. Research on DAC therefore also includes research on reliable, interpretable, general, and fast reinforcement learning. A minimal sketch of this interaction loop is given after this list.
    Led by: Prof. Dr. Marius Lindauer
    Year: 2019
    Funding: DFG
    Duration: 2019-2023
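
To make the interaction loop above concrete, here is a minimal, self-contained Python sketch. The toy target algorithm (gradient descent on f(x) = x^2), the configuration space of three step sizes, the reward signal, and the epsilon-greedy agent are all illustrative assumptions, not the project's actual benchmarks or methods; the sketch only shows the per-timestep pattern of selecting a configuration from the observed state and being rewarded for the resulting improvement.

# Hypothetical DAC-as-RL sketch: an agent picks a configuration (step size)
# for a toy target algorithm at every timestep. All names and choices here
# are illustrative only.

import random

# Configuration space: candidate step sizes the agent can choose each timestep.
STEP_SIZES = [0.01, 0.1, 0.5]


def run_target_algorithm_step(x: float, step_size: float) -> float:
    """One iteration of a toy target algorithm: gradient descent on f(x) = x^2."""
    gradient = 2.0 * x
    return x - step_size * gradient


def dac_episode(num_steps: int = 20, epsilon: float = 0.2, seed: int = 0) -> float:
    """Run one episode: at each timestep the agent selects a configuration
    and receives the objective improvement of that step as reward."""
    rng = random.Random(seed)
    # Simple epsilon-greedy value estimates per configuration (a stand-in agent).
    value = {s: 0.0 for s in STEP_SIZES}
    counts = {s: 0 for s in STEP_SIZES}

    x = 5.0          # initial iterate of the target algorithm
    cost = x * x     # current objective value, used as the observed state
    for _ in range(num_steps):
        # Policy: explore with probability epsilon, otherwise exploit.
        if rng.random() < epsilon:
            step_size = rng.choice(STEP_SIZES)
        else:
            step_size = max(STEP_SIZES, key=lambda s: value[s])

        # Apply the chosen configuration for one step of the target algorithm.
        x = run_target_algorithm_step(x, step_size)
        new_cost = x * x

        # Reward: improvement of the objective achieved in this step.
        reward = cost - new_cost
        cost = new_cost

        # Update the agent's value estimate for the chosen configuration.
        counts[step_size] += 1
        value[step_size] += (reward - value[step_size]) / counts[step_size]

    return cost


if __name__ == "__main__":
    print(f"final cost after one DAC episode: {dac_episode():.6f}")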