Interactive and Explainable Human-Centered AutoML

Making AutoML systems more human-centered by enabling interactivity and explainability.

Diagram showing the interaction between an AutoML loop and a human-centered ixAutoML loop, where users influence the search space and receive explanations, improving transparency.

Funding Agency


Trust and interactivity are key factors in the future development and use of automated machine learning (AutoML), which supports developers and researchers in determining powerful task-specific machine learning pipelines, including pre-processing, the predictive algorithm, its hyperparameters and, if applicable, the architecture design of deep neural networks. Although AutoML is ready for prime time, the democratization of machine learning via AutoML has not yet been achieved. In contrast to previous, purely automation-centered approaches, ixAutoML is designed with human users at its heart at several stages. The foundation for trustworthy use of AutoML will be explanations of its results and processes. To this end, we aim for: (i) explaining the static effects of design decisions in ML pipelines optimized by state-of-the-art AutoML systems; (ii) explaining dynamic AutoML policies that adapt hyperparameters over time while ML models are trained. These explanations will form the basis for interactions that combine human intuition and generalization capabilities for complex systems with the efficiency of systematic optimization approaches in AutoML.
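Goal (i) can be illustrated with a minimal post-hoc sketch: fit a surrogate model to the configurations and validation scores collected during a hyperparameter-optimization run, then ask the surrogate which hyperparameters drive performance. The snippet below uses scikit-learn and entirely synthetic data; the hyperparameter names and the score function are invented for illustration and do not represent the project's actual tooling (tools such as DeepCAVE or HyperSHAP implement far richer analyses).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic "HPO run": 200 sampled configurations of two hyperparameters.
# In this made-up score, learning_rate dominates; batch_size barely matters.
learning_rate = rng.uniform(1e-4, 1e-1, 200)
batch_size = rng.integers(16, 257, 200).astype(float)
X = np.column_stack([learning_rate, batch_size])
y = (np.log10(learning_rate) + 3.0) ** 2 + 0.01 * rng.normal(size=200)

# Surrogate model of the optimization landscape, fitted on observed configs.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance: how much does shuffling one hyperparameter
# degrade the surrogate's predictions? Larger drop = more influential.
result = permutation_importance(surrogate, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["learning_rate", "batch_size"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

On this synthetic run, the learning rate receives a much higher importance than the batch size, matching how the scores were generated; the same recipe applies to real AutoML run data once configurations and scores are logged.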

Lead at LUHAI: Prof. Lindauer

Funding Program: ERC Starting Grant

Project Period: Dec 2022 - Nov 2027

Publications


2026


Wever, M. D., Muschalik, M., Fumagalli, F., & Lindauer, M. (accepted/in press). HyperSHAP: Shapley Values and Interactions for Explaining Hyperparameter Optimization. In Proceedings of the Fortieth AAAI Conference on Artificial Intelligence (AAAI 2026).

2025


Bischl, B., Casalicchio, G., Das, T., Feurer, M., Fischer, S., Gijsbers, P., Mukherjee, S., Müller, A. C., Németh, L., Oala, L., Purucker, L., Ravi, S., van Rijn, J. N., Singh, P., Vanschoren, J., van der Velde, J., & Wever, M. (2025). OpenML: Insights from 10 years and more than a thousand papers. Patterns, 6(7), Article 101317. https://doi.org/10.1016/j.patter.2025.101317
Fehring, L., Wever, M., Spliethöver, M., Hennig, L., Wachsmuth, H., & Lindauer, M. (2025). Towards Dynamic Priors in Bayesian Optimization for Hyperparameter Optimization. In Workshop Track of the AutoML Conference. https://openreview.net/pdf?id=mQ0IENZRx2
Graf, H., Fehring, L., Tornede, T., Tornede, A., Wever, M. D., & Lindauer, M. (2025). Towards Exploiting Early Termination for Multi-Fidelity Hyperparameter Optimization. In Workshop Track of the AutoML Conference. Advance online publication. https://openreview.net/pdf?id=apxqygZeFV
Hasebrook, N., Morsbach, F., Kannengießer, N., Zöller, M., Franke, J., Lindauer, M., Hutter, F., & Sunyaev, A. (2025). Practitioner Motives to Use Different Hyperparameter Optimization Methods. ACM Transactions on Computer-Human Interaction, 32(6), Article 59. https://doi.org/10.1145/3745771, https://doi.org/10.48550/arXiv.2203.01717
Margraf, V., Lappe, A., Wever, M. D., Benjamins, C., Hüllermeier, E., & Lindauer, M. (2025). SynthACticBench: A Capability-Based Synthetic Benchmark for Algorithm Configuration. In GECCO 2025 - Proceedings of the 2025 Genetic and Evolutionary Computation Conference (ACM Conferences). Association for Computing Machinery (ACM). Advance online publication.
Segel, S., Graf, H., Bergman, E., Thieme, K., Wever, M. D., Tornede, A., Hutter, F., & Lindauer, M. (accepted/in press). DeepCAVE: A Visualization and Analysis Tool for Automated Machine Learning. Journal of Machine Learning Research, 2025(26). http://jmlr.org/papers/v26/24-1353.html

2024


Giovanelli, J., Tornede, A., Tornede, T., & Lindauer, M. (2024). Interactive Hyperparameter Optimization in Multi-Objective Problems via Preference Learning. In M. Wooldridge, J. Dy, & S. Natarajan (Eds.), Proceedings of the 38th conference on AAAI (pp. 12172-12180). (Proceedings of the AAAI Conference on Artificial Intelligence; Vol. 38, No. 11). https://doi.org/10.48550/arXiv.2309.03581, https://doi.org/10.1609/aaai.v38i11.29106

2023


Mohan, A., Benjamins, C., Wienecke, K., Dockhorn, A., & Lindauer, M. (2023). Extended Abstract: AutoRL Hyperparameter Landscapes. Abstract from European Workshop on Reinforcement Learning 2023, Brussels. https://openreview.net/forum?id=4Zu0l5lBgc
Segel, S., Graf, H., Tornede, A., Bischl, B., & Lindauer, M. (2023). Symbolic Explanations for Hyperparameter Optimization. In AutoML Conference 2023. PMLR. Advance online publication. https://openreview.net/forum?id=JQwAc91sg_x

2022


Hvarfner, C., Stoll, D., Souza, A. L. F., Lindauer, M., Hutter, F., & Nardi, L. (2022). π BO: Augmenting Acquisition Functions with User Beliefs for Bayesian Optimization. In Proceedings of the International conference on Learning Representation (ICLR) https://doi.org/10.48550/arXiv.2204.11051
Mallik, N., Hvarfner, C., Stoll, D., Janowski, M., Bergman, E., Lindauer, M. T., Nardi, L., & Hutter, F. (2022). PriorBand: HyperBand + Human Expert Knowledge. In 2022 NeurIPS Workshop on Meta Learning (MetaLearn) https://openreview.net/forum?id=ds21dwfBBH
Moosbauer, J., Casalicchio, G., Lindauer, M., & Bischl, B. (2022). Enhancing Explainability of Hyperparameter Optimization via Bayesian Algorithm Execution. Advance online publication. https://doi.org/10.48550/arXiv.2206.05447