



I am driven by a love of automation and of making complex algorithms more accessible. My further interests include robotics, automated machine learning (AutoML), hyperparameter optimization (HPO), and especially Bayesian Optimization (BO), as well as reinforcement learning and meta-learning.
I am also one of the developers of our HPO package SMAC.
Research Interests
- Dynamic Algorithm Configuration
- Bayesian Optimization
- Contextual Reinforcement Learning
- Meta-Reinforcement Learning
Curriculum Vitae
Education & Working Experience
since 2020: Doctoral Researcher and PhD Student at the Leibniz University Hannover
2017 - 2020: M.Sc. Mechatronics & Robotics at the Leibniz University Hannover. Thesis: Fast, Advanced and Low User Effort Object Detection for Robotic Applications. Supervisor: Prof. Dr.-Ing. Tobias Ortmaier
2014 - 2017: B.Sc. Mechatronics & Robotics at the Leibniz University Hannover. Thesis: Analysis of Neural Networks for Segmentation of Image Data. Supervisor: Prof. Dr.-Ing. Eduard Reithmeier
Publications
2023
Benjamins, C., Eimer, T., Schubert, F. G., Mohan, A., Döhler, S., Biedenkapp, A., Rosenhahn, B., Hutter, F., & Lindauer, M. (Accepted/In press). Contextualize Me – The Case for Context in Reinforcement Learning. Transactions on Machine Learning Research.
Benjamins, C., Raponi, E., Jankovic, A., Doerr, C., & Lindauer, M. (Accepted/In press). Self-Adjusting Weighted Expected Improvement for Bayesian Optimization. In AutoML Conference 2023. PMLR.
Benjamins, C., Raponi, E., Jankovic, A., Doerr, C., & Lindauer, M. (Accepted/In press). Towards Self-Adjusting Weighted Expected Improvement for Bayesian Optimization. In GECCO '23: Proceedings of the Genetic and Evolutionary Computation Conference Companion. Association for Computing Machinery, Special Interest Group on Genetic and Evolutionary Computation (SIGEVO).
Denkena, B., Dittrich, M-A., Noske, H., Lange, D., Benjamins, C., & Lindauer, M. (2023). Application of machine learning for fleet-based condition monitoring of ball screw drives in machine tools. The International Journal of Advanced Manufacturing Technology.
Mohan, A., Benjamins, C., Wienecke, K., Dockhorn, A., & Lindauer, M. (Accepted/In press). AutoRL Hyperparameter Landscapes. In Second International Conference on Automated Machine Learning. PMLR.
2022
Benjamins, C., Raponi, E., Jankovic, A., Blom, K. V. D., Santoni, M. L., Lindauer, M., & Doerr, C. (2022). PI is back! Switching Acquisition Functions in Bayesian Optimization. In 2022 NeurIPS Workshop on Gaussian Processes, Spatiotemporal Modeling, and Decision-making Systems.
Benjamins, C., Jankovic, A., Raponi, E., Blom, K. V. D., Lindauer, M., & Doerr, C. (2022). Towards Automated Design of Bayesian Optimization via Exploratory Landscape Analysis. In 6th Workshop on Meta-Learning at NeurIPS 2022.
Lindauer, M., Eggensperger, K., Feurer, M., Biedenkapp, A., Deng, D., Benjamins, C., Sass, R., & Hutter, F. (2022). SMAC3: A Versatile Bayesian Optimization Package for Hyperparameter Optimization. Journal of Machine Learning Research.
Schubert, F., Benjamins, C., Döhler, S., Rosenhahn, B., & Lindauer, M. (2022). POLTER: Policy Trajectory Ensemble Regularization for Unsupervised Reinforcement Learning.
2021
Benjamins, C., Eimer, T., Schubert, F., Biedenkapp, A., Rosenhahn, B., Hutter, F., & Lindauer, M. (2021). CARL: A Benchmark for Contextual and Adaptive Reinforcement Learning. In Workshop on Ecological Theory of Reinforcement Learning, NeurIPS 2021.
Eimer, T., Benjamins, C., & Lindauer, M. T. (2021). Hyperparameters in Contextual RL are Highly Situational. In International Workshop on Ecological Theory of RL (at NeurIPS).