Publication Details

Practitioner Motives to Select Hyperparameter Optimization Methods

authored by
Niklas Hasebrook, Felix Morsbach, Niclas Kannengießer, Marc Zöller, Jörg Franke, Marius Lindauer, Frank Hutter, Ali Sunyaev
Abstract

Advanced programmatic hyperparameter optimization (HPO) methods, such as Bayesian optimization, have high sample efficiency in reproducibly finding optimal hyperparameter values of machine learning (ML) models. Yet, ML practitioners often apply less sample-efficient HPO methods, such as grid search, which often results in under-optimized ML models. As a reason for this behavior, we suspect that practitioners choose HPO methods based on individual motives, comprising contextual factors and individual goals. However, practitioners' motives remain unclear, which hinders both the evaluation of HPO methods for achieving specific goals and the user-centered development of HPO tools. To understand practitioners' motives for using specific HPO methods, we used a mixed-methods approach involving 20 semi-structured interviews and a survey of 71 ML experts to gather evidence of the external validity of the interview results. By presenting six main goals (e.g., improving model understanding) and 14 contextual factors affecting practitioners' selection of HPO methods (e.g., available computing resources), our study explains why practitioners use HPO methods that seem inappropriate at first glance. This study lays a foundation for designing user-centered and context-adaptive HPO tools and, thus, for linking social and technical research on HPO.

Organisation(s)
Institute of Artificial Intelligence
External Organisation(s)
Karlsruhe Institute of Technology (KIT)
USU Software AG
University of Freiburg
Type
Preprint
Publication date
03.03.2022
Publication status
E-pub ahead of print
Electronic version(s)
https://doi.org/10.48550/arXiv.2203.01717 (Access: Open)