
Projects

  • Fair Benchmarking for Dynamic Algorithm Configuration
    Dynamic Algorithm Configuration (DAC) aims to dynamically adjust the hyperparameters of a target algorithm to improve performance in a data-driven manner [Biedenkapp et al., 2020]. This makes it possible not only to tune hyperparameters statically, but to adjust them while learning takes place. In contrast to previous methods, this lets us look deeply into the algorithms and opens up new potential for further performance improvements. Theoretical and empirical results have demonstrated the advantages of dynamically controlling hyperparameters, e.g. in the domains of deep learning, evolutionary algorithms and AI planning [Daniel et al., 2016; Vermetten et al., 2019; Doerr & Doerr, 2020; Shala et al., 2020; Speck et al., 2021]. However, several challenges and opportunities remain for improving DAC with respect to building trust in it, considering the various aspects of trustworthy AI (transparency, explainability, fairness, robustness, and privacy) as defined in the EC Guidelines for Trustworthy AI and the Assessment List for Trustworthy AI. These include: (i) AI models are typically overfitted to the data instances selected for training, which means that their selected hyperparameters cannot generalize the learned knowledge to new data instances that were not involved in the training process. (ii) There is an inherent bias in the learning process that originates from the quality of the data used for training. (iii) There are no quantitative measures to estimate the level of trust/confidence when applying a developed AI model to new, unseen data, which would indicate to what extent generalization of the learned knowledge is possible. To overcome these challenges, the goals of our project are: 1. To investigate different meta-feature representations of the data instances for the DAC learning tasks in the DACBench library, which will allow us to perform complementary analyses between them and a landscape coverage analysis of the problem/feature space, leading to a thorough and fair comparison of DAC methods. The utility of meta-feature representations will also be investigated by transforming them with matrix factorization and deep learning techniques. 2. To develop methodologies that automatically select more representative data instances that uniformly cover the landscape space of the data instances, using their meta-feature representation; these instances can be used in benchmarking studies to produce reproducible and transferable results. 3. To define quantitative indicators that measure the diversity between the data instances used for training DAC and those used for testing, based on their meta-feature representation. This will provide researchers with a level of trust when applying the selected hyperparameters to new, unseen data instances. 4. To determine computational time and energy variables that will be measured to estimate how green (i.e. how much the resources spent are reduced) it is to perform DAC experiments only on the selected representative data instances.
    Lead: Prof. Dr. Marius Lindauer
    Year: 2023
    Funding: DAAD
    Duration: 2023-2024
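    The diversity indicator in goal 3 can be illustrated with a minimal sketch. The meta-features and the nearest-neighbor distance measure below are hypothetical choices for illustration, not the project's actual method:

    ```python
    import numpy as np

    def diversity_indicator(train_meta, test_meta):
        """Mean distance from each test instance's meta-feature vector to its
        nearest training instance: 0 means the test set is fully covered by the
        training landscape, larger values mean less overlap (less trust)."""
        train = np.asarray(train_meta, dtype=float)
        test = np.asarray(test_meta, dtype=float)
        # pairwise Euclidean distances, shape (n_test, n_train)
        dists = np.linalg.norm(test[:, None, :] - train[None, :, :], axis=-1)
        return float(dists.min(axis=1).mean())

    # toy 2-D meta-features (e.g. instance size, ruggedness estimate)
    train = [[0.1, 0.2], [0.4, 0.5], [0.9, 0.8]]
    print(diversity_indicator(train, [[0.1, 0.2]]))  # -> 0.0, test instance seen in training
    print(diversity_indicator(train, [[2.0, 2.0]]))  # large: far outside the training landscape
    ```

    A coverage-oriented instance selection (goal 2) could then minimize this value over candidate training subsets.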
  • GreenAutoML4FAS - Automated Green-ML for Driver Assistance Systems
    Lead: Prof. Dr. Marius Lindauer
    Team: AutoML
    Year: 2023
    Funding: BMUV
    Duration: 2023-2026
  • ERC Starting Grant: Interactive and Explainable Human-Centered AutoML
    Trust and interactivity are key factors in the future development and use of automated machine learning (AutoML), supporting developers and researchers in determining powerful task-specific machine learning pipelines, including pre-processing, the predictive algorithm, its hyperparameters and, if applicable, the architecture design of deep neural networks. Although AutoML is ready for prime time, having achieved impressive results in several machine learning (ML) applications and having improved its efficiency by several orders of magnitude in recent years, the democratization of machine learning via AutoML has not yet been achieved. In contrast to previously purely automation-centered approaches, ixAutoML is designed with human users at its heart in several stages. First of all, the foundation of trustworthy use of AutoML will be explanations of its results and processes. Therefore, we aim for: (i) Explaining static effects of design decisions in ML pipelines optimized by state-of-the-art AutoML systems. (ii) Explaining dynamic AutoML policies for temporal aspects of dynamically adapted hyperparameters while ML models are trained. These explanations will be the basis for allowing interactions, bringing the best of two worlds together: human intuition and generalization capabilities for complex systems, and the efficiency of systematic optimization approaches for AutoML. Concretely, we aim for: (iii) Enabling interactions between humans and AutoML by taking humans' latent knowledge into account and learning when to interact. (iv) Building first ixAutoML prototypes and showing their efficiency in the context of Industry 4.0. Perfectly aligned with the EU's AI strategy and recent efforts on interpretability in the ML community, we strongly believe that this timely human-centered ixAutoML will have a substantial impact on the democratization of machine learning.
    Lead: Prof. Dr. Marius Lindauer
    Team: AutoML
    Year: 2022
    Funding: EU
    Duration: 2022-2027
  • KISSKI: AI Service Center
    The central approach of the KISSKI project is research on AI methods and their provision, with the goal of enabling a highly available AI service center for critical and sensitive infrastructures, focusing on the fields of medicine and energy. Due to their relevance to society as a whole, medicine and the energy industry are among the key fields for future application-oriented AI research in Germany. Beyond the technological developments, artificial intelligence (AI) has the potential to make a significant contribution to social progress. This is particularly true in areas where digitization processes are increasingly gaining ground and complexity is high. For both medicine and the energy industry, the pressure to innovate, but also the potential, is immense due to the availability of more and more distributed information based on a multitude of new sensors and actuators. The increasing complexity of the tasks as well as the availability of very large data sets offer high potential for the application of AI methods in both fields.
    Lead: Prof. Dr. Marius Lindauer
    Team: AutoML
    Year: 2022
    Funding: BMBF
    Duration: 2022-2025
  • ArgSchool: Computational Support for Learning Argumentative Writing in Digital School Education
    In this project, we aim to study how to support German school students in learning to write argumentative texts through computational methods that provide developmental feedback. These methods will assess and explain which aspects of a text are good, which need to be improved, and how to improve them, adapted to the student's learning stage. We seek to provide answers to three main research questions: (1) How to robustly mine the structure of German argumentative learner texts? (2) How to effectively assess the learning stage of a student based on a given argumentative text? (3) How to provide developmental feedback on an argumentative text adapted to the learning stage? The motivation behind this DFG-funded project is that digital technology is increasingly transforming our culture and forms of learning. While vigorous efforts are made to implement digital technologies in school education, software for teaching German is so far limited to simple multiple-choice tests and the like, not providing any formative, let alone individualized, feedback. Argumentative writing is one of the most standard tasks in school education, taught incrementally at different ages. Due to its importance across school subjects, it defines a suitable starting point for more "intelligent" computational learning support. We focus on the structural composition of argumentative texts, leaving their content and its relation to underlying sources to future work.
    Lead: Prof. Dr. Henning Wachsmuth
    Team: NLP
    Year: 2021
    Funding: DFG
    Duration: 2021-2024
  • CoyPu: Cognitive Economy Intelligence Platform for the Resilience of Economic Ecosystems
    Natural disasters, pandemics, financial and political crises, supply shortages and demand shocks propagate through hidden and intermediate linkages across the global economic system. This is a consequence of the continuous international division of business and labor, which is at the heart of globalisation. The aim of the project is to provide a platform that exposes complex supply chains, reveals the linkages and compounded risks, and provides companies with predictions regarding their exposure at various levels of granularity.
    Lead: Prof. Dr. Marius Lindauer and Prof. Dr. Maria-Esther Vidal (L3S/LUH)
    Team: InfAI, DATEV eG, eccenca GmbH, Implisense GmbH, Deutsches Institut für Wirtschaftsforschung, Leibniz Informationszentrum Technik und Naturwissenschaften, Hamburger Informatik Technologie-Center e.V., Selbstregulierung Informationswirtschaft e.V., Infineo
    Year: 2021
    Funding: Innovationswettbewerb Künstliche Intelligenz (BMWK)
    Duration: 2021-2024
  • Towards a Framework for Assessing Explanation Quality
    We take part with two subprojects in the transregional Collaborative Research Center TRR 318 "Constructing Explainability". In Subproject INF, we study the pragmatic goal of all explaining processes: to be successful, that is, for the explanation to achieve the intended form of understanding (enabling, comprehension) of the given explanandum on the explainee's side. In particular, we aim to investigate the question of what characteristics successful explaining processes share in general, and what is specific to a given context or setting. To this end, we will first establish and define a common vocabulary of the different elements of an explaining process. We will then explore what quality dimensions can be assessed for explaining processes. Modeling these processes based on the elements represented in the vocabulary, we will develop and evaluate new computational methods that analyze the content, style, and structure of explanations in terms of linguistic features, interaction aspects, and available context parameters. Our goal is to establish and empirically underpin a first theory of explanation quality based on the vocabulary, thereby laying a common ground for the whole TRR to understand how success in explaining processes is achieved. This is a challenge in light of our assumptions that any explanation is dynamic and co-constructed and that the quality and success of explanations and explaining processes may be seen differently from different viewpoints.
    Lead: Prof. Dr. Henning Wachsmuth
    Team: NLP
    Year: 2021
    Funding: DFG
    Duration: 2021-2025
  • Metaphors as an Explanation Tool
    We take part with two subprojects in the transregional Collaborative Research Center TRR 318 "Constructing Explainability". In Subproject C04, we study how explainers and explainees focus attention, through their choice of metaphors, on some aspects of the explanandum and draw attention away from others. In particular, this project focuses on the metaphorical space established by different metaphors for one and the same concept. We seek to understand how metaphors foster (and impede) understanding through highlighting and hiding. Moreover, we aim to establish knowledge about when and how metaphors are used and adapted in explanatory dialogues; as well as how explainee, explainer, and the topical domain of the explanandum contribute to this process. By providing an understanding of how metaphorical explanations function and of how metaphor use responds to and changes contextual factors, we will contribute to the development of co-constructive explaining AI systems.
    Lead: Prof. Dr. Henning Wachsmuth
    Team: NLP
    Year: 2021
    Funding: DFG
    Duration: 2021-2025
  • OASiS: Objective Argument Summarization in Search
    Conceptually, an argument logically combines a claim with a set of reasons. In real-world text, however, arguments may be spread over several sentences, often intertwine multiple claims and reasons along with context information and rhetorical devices, and are inherently subjective. This project aims to study how to computationally obtain an objective summary of the gist of an argumentative text. In particular, we aim to establish foundations of natural language processing methods that (1) analyze the gist of an argument's reasoning, (2) generate a text snippet that summarizes the gist concisely, and (3) neutralize potential subjective bias in the summary as far as possible. The rationale of the DFG-funded project is that argumentation machines, as envisioned by the RATIO priority program (SPP 1999), are meant to present the different positions people may have towards controversial issues, such as abortion or social distancing. One prototypical machine is our argument search engine, args.me, which juxtaposes pro and con arguments from the web in response to user queries, in order to support self-determined opinion formation. A key aspect of args.me and comparable machines is to generate argument snippets, which give the user an efficient overview of the usually manifold arguments. Standard snippet generation has turned out to be insufficient for this purpose. We hypothesize that the best argument snippet summarizes the argument's gist objectively.
    Lead: Prof. Dr. Henning Wachsmuth
    Team: NLP
    Year: 2021
    Funding: DFG
    Duration: 2021-2024
  • Leibniz AI Academy
    The Leibniz AI Academy aims to develop and establish a trans-curricular and interdisciplinary micro-degree program at Leibniz Universität Hannover (LUH), in which students from different courses of study acquire competencies in the field of Artificial Intelligence.
    Lead: Prof. Dr. Marius Lindauer, Prof. Dr. Ralph Ewerth, Prof. Dr. Johannes Krugel
    Year: 2021
    Funding: Federal Ministry of Education and Research (BMBF)
    Duration: 2021-2024
  • Dynamic Algorithm Configuration
    Since the configurations should be selected during runtime depending on the current state of the algorithm, dynamic algorithm configuration (DAC) can be viewed as a reinforcement learning (RL) problem in which, at each time step, an agent selects the configuration to use based on the performance in the last step and the current state of the algorithm. On the one hand, this allows us to employ powerful RL methods; on the other hand, RL also brings a number of challenges, such as instability, noise and sample inefficiency, which must be addressed in applications like DAC. Research on DAC therefore also encompasses research on reliable, interpretable, general and fast reinforcement learning.
    Lead: Prof. Dr. Marius Lindauer
    Year: 2019
    Funding: DFG
    Duration: 2019-2023
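    The DAC-as-RL loop described in the project above can be sketched minimally as follows. The environment interface, the toy target algorithm and the reward are illustrative assumptions, not DACBench's actual API:

    ```python
    import random

    def dac_episode(env, policy, horizon=10):
        """Generic DAC control loop: at each time step the agent picks a
        configuration based on the algorithm's current state, applies it,
        and observes the resulting performance as a reward signal."""
        state = env.reset()
        total_reward = 0.0
        for _ in range(horizon):
            config = policy(state)                # e.g. a step-size value
            state, reward, done = env.step(config)
            total_reward += reward
            if done:
                break
        return total_reward

    class ToyEnv:
        """Toy 'target algorithm': reward is highest when the chosen step
        size matches a hidden optimum that shrinks over time."""
        def reset(self):
            self.t = 0
            return {"t": 0}
        def step(self, config):
            optimum = 1.0 / (1 + self.t)          # good step size decays
            reward = -abs(config - optimum)       # 0 is the best possible
            self.t += 1
            return {"t": self.t}, reward, self.t >= 10

    random_policy = lambda state: random.uniform(0.0, 1.0)
    decay_policy = lambda state: 1.0 / (1 + state["t"])  # dynamic schedule
    print(dac_episode(ToyEnv(), decay_policy))    # -> 0.0, matches the optimum at every step
    ```

    A static configuration corresponds to a policy that ignores the state; an RL agent would learn the state-dependent policy from the reward signal instead of using a hand-crafted schedule.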