Student research projects and theses

We are continuously looking for students to work on student research projects and theses. The topics cover the entire spectrum of research areas within the projects our staff are currently working on. Ideas for your own topics and concrete tasks are also welcome if they create synergies with our research areas.

Since the supervisors' suggestions for topics and tasks change constantly, only a few selected topics are listed here.

Individual thesis topics

We are looking for students who would like to work on our future research projects. Depending on your skills and future plans, we will work together to find a topic in our research areas that best suits you. In your thesis, you will come into contact with emerging topics with a focus on AI. The following topics are examples of what such work might look like.

Suggested topics

  • Auto-PyTorch for X [BSc + MSc]

    Extend, apply and refine our AutoDL tool Auto-PyTorch for new areas such as outlier detection, predictive maintenance or time series prediction. We recommend a strong background in machine learning (especially Deep Learning) and in your chosen application area. When applying, please indicate the direction you would like to work in and provide a rough plan of how you would apply Auto-PyTorch to your target area.

    Contact: Difan Deng

  • Implementation of a new DAC benchmark [BSc + MSc]

    Modelling, implementation and evaluation of DAC for a target algorithm of your choice. To succeed with this topic, we recommend a strong background in RL as well as basic knowledge of DAC and of your chosen target domain. Possible target domains include machine learning, reinforcement learning, MIP or SAT solvers, and evolutionary algorithms.

    Contact: Theresa Eimer
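To give a feel for what "implementing a DAC benchmark" means, here is a minimal sketch in plain Python. All names and the gym-style interface are invented for illustration (DACBench's actual interfaces differ): a toy environment whose actions set the step size of gradient descent on f(x) = x² at every iteration, with the improvement in the objective as reward.

```python
import random


class StepSizeDACEnv:
    """Toy DAC benchmark: the agent controls the step size of gradient
    descent on f(x) = x^2 at every iteration (illustrative interface only)."""

    ACTIONS = [0.01, 0.1, 0.5, 1.0]  # candidate step sizes

    def __init__(self, horizon=20, seed=None):
        self.horizon = horizon
        self.rng = random.Random(seed)

    def reset(self):
        self.x = self.rng.uniform(-10.0, 10.0)  # random starting iterate
        self.t = 0
        return self._obs()

    def _obs(self):
        # Observation: current iterate and remaining budget
        return (self.x, self.horizon - self.t)

    def step(self, action):
        lr = self.ACTIONS[action]
        f_before = self.x ** 2
        self.x -= lr * 2.0 * self.x      # one gradient step on f(x) = x^2
        f_after = self.x ** 2
        self.t += 1
        reward = f_before - f_after      # reward = improvement in objective
        done = self.t >= self.horizon
        return self._obs(), reward, done, {}


# A fixed-step-size policy as a trivial baseline
env = StepSizeDACEnv(seed=0)
obs = env.reset()
total, done = 0.0, False
while not done:
    obs, r, done, _ = env.step(1)  # always use step size 0.1
    total += r
```

A DAC policy would replace the fixed action in the loop; modelling a real target algorithm mainly means choosing the observation, action space, and reward analogously.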

  • Exploring Behavioral Similarity in Contextual Reinforcement Learning [BSc + MSc]

    Scaling up Reinforcement Learning to environments more complicated than games such as chess requires us to bias our learning methods towards the challenges of such settings, e.g., large state and action spaces and complicated dynamics. Behavioral similarity methods exploit conditions that lead to similar policies, i.e., policies that produce identical action distributions when given the same state. In this thesis, we want to study these methods in the contextual setting, where we will try to answer the following questions:

    1. Are similarity methods robust to small environmental changes, such as those studied in contextual Reinforcement Learning?
    2. Can we improve these methods with the additional contextual information, e.g., through loss augmentations or other procedures?
    3. Can we meta-learn such metrics across environment distributions? 

    Ideally, a bachelor's thesis on this topic would entail ablating existing methods, while a master's thesis would take it a step further and try to develop a new approach.

    Contact: Aditya Mohan
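The notion "same state, identical action distributions" from the description above can be made concrete with a toy metric. The sketch below is illustrative only, not one of the actual similarity metrics from the literature (such as bisimulation metrics): it averages the total-variation distance between two softmax policies' action distributions over a set of probe states.

```python
import math


def softmax_policy(weights, state):
    """Toy linear-softmax policy: one weight vector per action."""
    logits = [sum(w_i * s_i for w_i, s_i in zip(w, state)) for w in weights]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]


def tv_distance(p, q):
    """Total-variation distance between two action distributions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))


def behavioral_dissimilarity(weights_a, weights_b, states):
    """Average TV distance over probe states: 0 means the two policies
    act identically on these states."""
    return sum(
        tv_distance(softmax_policy(weights_a, s), softmax_policy(weights_b, s))
        for s in states
    ) / len(states)


states = [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5)]  # probe states (2-d)
w1 = [(1.0, 0.0), (0.0, 1.0)]  # two actions, one weight vector each
w2 = [(1.0, 0.0), (0.0, 1.0)]  # identical policy
w3 = [(0.0, 1.0), (1.0, 0.0)]  # swapped action preferences
```

In the contextual setting, the probe states would additionally carry context features, and the thesis questions above ask how such metrics behave as the context varies.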

  • Meta-Policy Gradients in Contextual RL [MSc]

    Meta-policy gradients (MPGs) aim to learn a set of hyperparameters within a single lifetime. The key idea is to interleave policy iteration with hyperparameter updates: an objective (e.g., the Bellman or policy-gradient objective) is learned and then used to compute a new parameter update. Recent work has shown that contextual information about the environment (such as information about goals) can enrich meta-gradients.

    Pitch: Study the impact of contextual information on vanilla and bootstrapped meta-gradients for generalization to similar environments. This work entails two questions:

    1. Do standard MPG techniques work well in the contextual setting?
    2. Does incorporating contextual information help with learning better hyperparameter schedules using MPGs in such settings?

    Depending on your interest, we can scope out how to concretely answer these questions. 
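The interleaving of parameter updates and hyperparameter updates can be caricatured in a few lines. The sketch below is only a loop-structure illustration, not an actual MPG method: the inner step is plain gradient descent on f(θ) = θ², and the outer step adjusts the step-size hyperparameter via a finite-difference meta-gradient of the post-update loss.

```python
def loss(theta):
    return theta * theta


def grad(theta):
    return 2.0 * theta


theta, lr = 5.0, 0.01          # parameter and meta-learned hyperparameter
meta_lr, eps = 0.01, 1e-4      # outer learning rate, finite-difference width

for _ in range(200):
    # Inner step: ordinary parameter update with the current hyperparameter.
    theta = theta - lr * grad(theta)

    # Outer step: meta-gradient of the post-update loss w.r.t. the step
    # size, estimated by central finite differences.
    post = lambda eta: loss(theta - eta * grad(theta))
    meta_grad = (post(lr + eps) - post(lr - eps)) / (2 * eps)
    lr = max(1e-4, lr - meta_lr * meta_grad)
```

Real MPG methods differentiate through the update analytically and learn richer objectives than a scalar step size; contextual information would enter as an additional input to the meta-update.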

  • Augmenting algorithm components in RL through meta-learning [MSc]

    Augmentation functions can be generated by meta-learning, for example for the policy objective in PPO. However, it is an open question whether this holds in general for algorithm components in reinforcement learning, whether we could also learn augmentation ensembles, and how well these functions generalise. The goal of this work is to extend existing techniques to new algorithms and components.

    Contact: Theresa Eimer

  • Enhancing Animal Behavior Analysis via HPO in Object Tracking Algorithms – In Collaboration with Zoo Hannover [MSc]

    In collaboration with Zoo Hannover, we aim to enhance object detection and tracking algorithms to monitor the maternal care of Thomson’s gazelles. We integrate Automated Machine Learning (AutoML) and Computer Vision to achieve this, focusing mainly on Hyperparameter Optimization (HPO). The project's primary objective is to improve the analysis of animal behavior using camera data. After identifying leading tracking algorithms, this master's thesis involves an in-depth examination of relevant hyperparameters specific to the corresponding tracking algorithms. The goal of this research is the strategic use of the AutoML tool SMAC to optimize the performance of the tracking algorithms, aiming for enhanced accuracy and efficiency in analyzing animal behavior.

    Contact: Leona Hennig
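Schematically, the HPO loop in such a project looks like the sketch below, with plain random search standing in for SMAC and an invented quadratic stand-in for the real tracking metric; the hyperparameter names and ranges are illustrative, not those of any specific tracker.

```python
import random

# Illustrative search space for a hypothetical tracker
SPACE = {
    "detection_threshold": (0.1, 0.9),   # continuous range
    "max_track_age": (1, 60),            # integer range (frames)
    "iou_match_threshold": (0.1, 0.7),   # continuous range
}


def tracking_score(config):
    """Stand-in for evaluating a tracker (e.g., MOTA on validation videos).
    A real setup would run the actual tracking pipeline here."""
    return -(
        (config["detection_threshold"] - 0.5) ** 2
        + (config["iou_match_threshold"] - 0.3) ** 2
        + (config["max_track_age"] - 30) ** 2 / 1000.0
    )


def random_search(n_trials, seed=0):
    """Sample configurations and keep the best one (SMAC would instead
    model the objective and propose configurations adaptively)."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {
            "detection_threshold": rng.uniform(*SPACE["detection_threshold"]),
            "max_track_age": rng.randint(*SPACE["max_track_age"]),
            "iou_match_threshold": rng.uniform(*SPACE["iou_match_threshold"]),
        }
        score = tracking_score(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score


best_cfg, best_score = random_search(n_trials=100)
```

The thesis itself would replace the stand-in objective with the tracking pipeline and the random search with SMAC's model-based optimization.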


The exact procedure of a thesis, together with a rough idea of what we expect from theses, is described here.

It is important to us that you have the appropriate background knowledge so that your thesis has a good chance of a successful outcome. To help us assess this, please send us the following information:

  • Proposed topic or topic area(s)
  • What previous knowledge do you have? Which ML-related courses have you taken?
  • A self-assessment from -- to ++ on the following topics:

      • Coding in Python
      • Coding with PyTorch
      • Ability to implement a Deep Learning paper
      • Ability to implement a reinforcement learning paper
      • Ability to understand and work with a foreign codebase

If you are generally interested in writing a thesis with us but have not yet decided on any of the above topics, please email us with the above information.

If you are interested in one of the specific topics above, please send an email directly to the contact person listed for that topic. The email addresses can be found on the personal pages.