Carolin Benjamins, M. Sc.

Address
Appelstraße 9a
30167 Hannover

I am driven by a love for automation and for making complex algorithms more accessible. My further interests include robotics, automated machine learning (AutoML), hyperparameter optimization (HPO), and especially Bayesian optimization (BO), as well as reinforcement learning and meta-learning.

I am also one of the developers of our HPO package SMAC.
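
As a rough illustration of what SMAC is used for, here is a minimal usage sketch loosely following the SMAC3 (2.x) quickstart; the toy quadratic objective and the exact call signatures are assumptions and may differ between SMAC and ConfigSpace versions.

    from ConfigSpace import ConfigurationSpace, Float
    from smac import HyperparameterOptimizationFacade, Scenario

    def quadratic(config, seed: int = 0) -> float:
        # Toy objective: SMAC minimizes the returned cost.
        x = config["x"]
        return (x - 2.0) ** 2

    cs = ConfigurationSpace()
    cs.add_hyperparameters([Float("x", (-5.0, 5.0))])

    scenario = Scenario(cs, n_trials=50)                # budget of 50 evaluations
    smac = HyperparameterOptimizationFacade(scenario, quadratic)
    incumbent = smac.optimize()                         # best configuration found
    print(incumbent)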

Research Interests

  • Dynamic Algorithm Configuration
  • Bayesian Optimization
  • Contextual Reinforcement Learning
  • Meta-Reinforcement Learning

Curriculum Vitae

  • Education

    Since 2020: Doctoral Researcher at Leibniz University Hannover

    2017 - 2020: M.Sc. Mechatronics & Robotics at Leibniz University Hannover. Thesis: Fast, Advanced and Low User Effort Object Detection for Robotic Applications. Supervisor: Prof. Dr.-Ing. Tobias Ortmaier

    2014 - 2017: B.Sc. Mechatronics & Robotics at Leibniz University Hannover. Thesis: Analysis of Neural Networks for Segmentation of Image Data. Supervisor: Prof. Dr.-Ing. Eduard Reithmeier

Publications

  • Carolin Benjamins, Theresa Eimer, Frederik Schubert, Aditya Mohan, André Biedenkapp, Bodo Rosenhahn, Frank Hutter, Marius Lindauer (2022): Contextualize Me - The Case for Context in Reinforcement Learning. arXiv preprint.
    arXiv: https://arxiv.org/abs/2202.04500
  • Marius Lindauer, Katharina Eggensperger, Matthias Feurer, André Biedenkapp, Difan Deng, Carolin Benjamins, Tim Ruhkopf, René Sass, Frank Hutter (2022): SMAC3: A Versatile Bayesian Optimization Package for Hyperparameter Optimization. Journal of Machine Learning Research (JMLR) -- MLOSS, Vol. 23, No. 54, pp. 1-9.
  • Frederik Schubert, Carolin Benjamins, Sebastian Döhler, Bodo Rosenhahn, Marius Lindauer (2022): POLTER: Policy Trajectory Ensemble Regularization for Unsupervised Reinforcement Learning. CoRR.
    arXiv: 2205.11357
  • Carolin Benjamins, Elena Raponi, Anja Jankovic, Koen van der Blom, Maria Laura Santoni, Marius Lindauer, Carola Doerr (2022): PI is back! Switching Acquisition Functions in Bayesian Optimization. 2022 NeurIPS Workshop on Gaussian Processes, Spatiotemporal Modeling, and Decision-making Systems.
    arXiv: 2211.01455
  • Carolin Benjamins, Anja Jankovic, Elena Raponi, Koen van der Blom, Marius Lindauer, Carola Doerr (2022): Towards Automated Design of Bayesian Optimization via Exploratory Landscape Analysis. 6th Workshop on Meta-Learning at NeurIPS 2022, New Orleans.
  • Carolin Benjamins, Theresa Eimer, Frederik Schubert, André Biedenkapp, Bodo Rosenhahn, Frank Hutter, Marius Lindauer (2021): CARL: A Benchmark for Contextual and Adaptive Reinforcement Learning. Workshop on Ecological Theory of Reinforcement Learning, NeurIPS 2021.
    arXiv: 2110.02102
  • Theresa Eimer, Carolin Benjamins, Marius Lindauer (2021): Hyperparameters in Contextual RL are Highly Situational. International Workshop on Ecological Theory of RL (at NeurIPS).

Projects

  • Dynamic Algorithm Configuration
    As configurations should be chosen during runtime depending on the current algorithm state, dynamic algorithm configuration (DAC) can be viewed as a reinforcement learning (RL) problem in which, at each timestep, an agent selects the configuration to use based on the performance in the last step and the current state of the algorithm (see the sketch following this project list). On the one hand, this enables us to use powerful RL methods; on the other hand, RL also brings a set of challenges, such as instability, noise, and sample inefficiency, that need to be addressed in applications such as DAC. Therefore, research on DAC also includes research on reliable, interpretable, general, and fast reinforcement learning.
    Led by: Prof. Dr. Marius Lindauer
    Year: 2019
    Funding: DFG
    Duration: 2019-2023
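
The per-step configuration loop described in the project above can be pictured roughly as follows. This is a purely illustrative sketch with made-up names (TargetAlgorithm, RandomAgent); it is not code from the project or from any DAC library.

    import random

    class TargetAlgorithm:
        """Toy iterative algorithm whose step size can be reconfigured at every step."""

        def __init__(self) -> None:
            self.x = 10.0  # current iterate; the goal is to drive it toward 0

        def state(self) -> tuple:
            return (self.x,)  # observable algorithm state shown to the agent

        def step(self, step_size: float) -> float:
            before = abs(self.x)
            self.x -= step_size * self.x  # one iteration under the chosen configuration
            return before - abs(self.x)   # reward: improvement achieved in this step

    class RandomAgent:
        """Placeholder policy; in DAC this would be a trained RL policy."""

        def select(self, state) -> float:
            return random.choice([0.1, 0.5, 0.9])

    algorithm, agent = TargetAlgorithm(), RandomAgent()
    for t in range(20):
        config = agent.select(algorithm.state())  # dynamic, per-step configuration
        reward = algorithm.step(config)
        # an RL agent would now update its policy from (state, config, reward)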