Theses & Projects AutoML

We are continuously looking for students to work on student research projects and theses. The topics cover the entire spectrum of research areas within the projects our staff is currently working on. We also welcome your own topic ideas and concrete tasks, provided they align with our research interests.

Since the topics and tasks proposed by the supervisors change constantly, only a few selected topics are listed here.

Thesis Topics

We are looking for students who are interested in working on research in AutoML. Depending on your interests and goals, we will select the topic best suited to you. In your thesis, you will focus on state-of-the-art research within AI. The following topics are examples of what such a thesis might look like.

Sample topics

  • Auto-PyTorch for X [BSc + MSc]

    Extending, applying and refining our AutoDL tool Auto-PyTorch for new domains such as Image Classification, Segmentation, Video, Outlier Detection or NLP. We recommend a strong background in Machine Learning and in your chosen application domain for this thesis.

    Contact: Difan Deng

  • Creating a new DAC benchmark [BSc + MSc]

    Modelling, implementing and evaluating DAC for any target algorithm. We recommend a strong background in RL, basic knowledge of DAC, and familiarity with the target domain of your choice to be able to succeed in this topic. Possible target domains include Machine Learning, Reinforcement Learning, MIP or SAT solvers, and Evolutionary Algorithms.

    Contact: Theresa Eimer

  • Meta-Policy Gradients in contextual RL [MSc]

    Since hyperparameter tuning in the contextual RL setting has proven hard, using meta-policy gradients to adapt all hyperparameters within a single lifetime could be an alternative to established solutions. One way to start would be to extend the work on Self-Tuning Actor-Critic (STAC) to the contextual setting: condition the state and action embeddings on a learnable context parameter (such as the standard deviation of the context-generating distribution) and then train an agent to learn this set of parameters while interacting with the environment using meta-policy gradients. We recommend a strong background in Reinforcement Learning as well as prior knowledge of AutoML for this thesis.

    Contact: Aditya Mohan

  • Multi-fidelity as Meta Learning Problem [MSc]

    Using F-PACOH across multiple fidelities could be a way to transfer the inductive bias from one fidelity to another and would allow evaluating different fidelities during the estimation process. Combining this with multi-information source approaches (predecessors of multi-fidelity methods) could also help in selecting relevant fidelities. Scheduling and evaluating the validity of this approach are of the essence here. We recommend a strong background in AutoML for this thesis.

    Contact: Tim Ruhkopf

  • Hyperparameter Importance for AutoML [MSc]

    Tuning hyperparameters in machine learning is essential to achieve top performance. However, some hyperparameters are more important than others. An AutoML process could thus benefit from integrating hyperparameter importance methods into the optimization process, such that important hyperparameters are changed more frequently. We recommend prior knowledge of interpretable ML and AutoML for this thesis.

    Contact: Sarah Krebs

  • Interactive AutoML [MSc]

    Most AutoML tools allow little to no interaction with the user. As a consequence, users often realize only at the very end of an AutoML run that the result is undesirable for one or several reasons, although this could possibly have been detected much earlier in the search process. Correspondingly, end-users' trust in AutoML tools is often rather low. The aim of this thesis is to extend AutoML tools in various ways to improve the possibilities for user interaction during the search.

    Contact: Alexander Tornede
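
To make the modelling part of the DAC benchmark topic above more concrete, here is a minimal, self-contained sketch of what such a benchmark can look like: the "agent" adjusts the learning rate of gradient descent on a 1-D quadratic at every step. The target problem, the Gym-style reset/step interface, and all names are illustrative assumptions, not the API of any existing DAC library.

```python
import random

class ToyDACBenchmark:
    """Minimal DAC-style benchmark sketch (hypothetical, for illustration):
    at every step, the agent sets the learning rate of gradient descent
    on a 1-D quadratic target function."""

    def __init__(self, horizon=20, seed=0):
        self.horizon = horizon
        self.rng = random.Random(seed)

    def reset(self):
        # The optimum location acts as the (hidden) problem instance.
        self.opt = self.rng.uniform(-5, 5)
        self.x = self.rng.uniform(-10, 10)
        self.t = 0
        return self._obs()

    def _obs(self):
        # Observation: remaining budget and current objective value.
        return (self.horizon - self.t, (self.x - self.opt) ** 2)

    def step(self, learning_rate):
        # Action: the hyperparameter value used for the next inner step.
        grad = 2.0 * (self.x - self.opt)
        self.x -= learning_rate * grad
        self.t += 1
        reward = -((self.x - self.opt) ** 2)  # lower objective -> higher reward
        done = self.t >= self.horizon
        return self._obs(), reward, done

# A fixed learning rate serves as the simplest baseline policy.
env = ToyDACBenchmark()
obs, done, total_reward = env.reset(), False, 0.0
while not done:
    obs, reward, done = env.step(0.1)
    total_reward += reward
```

A thesis in this direction would replace the toy quadratic with a real target algorithm (e.g. an RL agent or a SAT solver) and then design meaningful observations, actions and rewards for it.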
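
As an illustration of the hyperparameter importance idea above (changing important hyperparameters more frequently), here is a hedged toy sketch: a crude variance-based importance score, used here only as a simple stand-in for established methods such as fANOVA, steers which hyperparameter a local search perturbs. The objective function and all names are hypothetical.

```python
import random

rng = random.Random(0)

def objective(cfg):
    # Hypothetical toy target: 'x' matters far more than 'y'.
    return (cfg["x"] - 0.3) ** 2 + 0.01 * (cfg["y"] - 0.7) ** 2

def importance():
    """Crude per-hyperparameter importance: variance of the objective when
    only that hyperparameter is varied (a stand-in for e.g. fANOVA)."""
    base = {"x": 0.5, "y": 0.5}
    scores = {}
    for hp in base:
        vals = []
        for _ in range(100):
            cfg = dict(base)
            cfg[hp] = rng.random()
            vals.append(objective(cfg))
        mean = sum(vals) / len(vals)
        scores[hp] = sum((v - mean) ** 2 for v in vals) / len(vals)
    total = sum(scores.values())
    return {hp: s / total for hp, s in scores.items()}

def importance_weighted_search(n_iters=300):
    # Perturb important hyperparameters proportionally more often.
    imp = importance()
    hps, weights = list(imp), list(imp.values())
    best = {"x": rng.random(), "y": rng.random()}
    best_val = objective(best)
    for _ in range(n_iters):
        cand = dict(best)
        hp = rng.choices(hps, weights=weights)[0]
        cand[hp] = min(1.0, max(0.0, cand[hp] + rng.gauss(0.0, 0.1)))
        val = objective(cand)
        if val < best_val:
            best, best_val = cand, val
    return best, best_val, imp

best, best_val, imp = importance_weighted_search()
```

The search spends almost all of its budget on the influential hyperparameter; a thesis would replace this toy proxy with a principled importance method and integrate it into a full AutoML optimizer.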

Interested in working with us?

In order to get a better understanding of how we supervise and grade theses, refer to this page.

If you would like to do a thesis with us, we need the following information from you (though not every item may apply to every topic):

  1. Topic or areas of interest
  2. What prior knowledge do you have? Which relevant courses did you take?
  3. A rating for yourself in these areas (from -- to ++):
    • Coding in Python
    • PyTorch
    • Ability to implement a Deep Learning paper
    • Ability to implement a Reinforcement Learning paper 
    • Ability to understand and execute someone else's codebase 

If you are generally interested in writing a thesis with us but have not yet decided on any of the topics above, please send an email with the information requested above to

If you are interested in writing a thesis about one of the specific topics given above, please send an email directly to the contact person listed below the corresponding topic. The address can be found on that person's personal page.