MASIF: Meta-learned Algorithm Selection using Implicit Fidelity Information
- Authored by
- Tim Ruhkopf, Aditya Mohan, Difan Deng, Alexander Tornede, Frank Hutter, Marius Lindauer
- Abstract
Selecting a well-performing algorithm for a given task or dataset can be time-consuming and
tedious, but is crucial for the successful day-to-day business of developing new AI & ML
applications. Algorithm Selection (AS) mitigates this through a meta-model leveraging
meta-information about previous tasks. However, most of the available AS methods are
error-prone because they characterize a task by either cheap-to-compute properties of the
dataset or evaluations of cheap proxy algorithms, called landmarks. In this work, we extend
the classical AS data setup to include multi-fidelity information and empirically demonstrate
how meta-learning on algorithms’ learning behaviour allows us to exploit cheap test-time
evidence effectively and combat myopia significantly. We further postulate a budget-regret
trade-off w.r.t. the selection process. Our new selector MASIF is able to jointly interpret
online evidence on a task in the form of varying-length learning curves without any parametric
assumption by leveraging a transformer-based encoder. This opens up new possibilities for
guided rapid prototyping in data science on cheaply observed partial learning curves.
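To make the abstract's central idea concrete, the sketch below encodes varying-length, partially observed learning curves with a transformer encoder and produces one selection score per candidate algorithm. This is a minimal illustration under assumed details, not the authors' MASIF implementation; the class name `LearningCurveSelector` and all architectural choices (embedding size, pooling, scoring head) are hypothetical and chosen only for brevity.

```python
# Minimal sketch (not the authors' MASIF implementation) of scoring candidate
# algorithms from partial, varying-length learning curves with a transformer
# encoder. All names and architectural choices here are illustrative assumptions.
import torch
import torch.nn as nn


class LearningCurveSelector(nn.Module):
    def __init__(self, n_algorithms: int, d_model: int = 64,
                 n_heads: int = 4, n_layers: int = 2, max_len: int = 512):
        super().__init__()
        # Embed each scalar fidelity observation (e.g. validation accuracy per epoch).
        self.value_embed = nn.Linear(1, d_model)
        # Identify which algorithm a curve belongs to and where on the curve we are.
        self.algo_embed = nn.Embedding(n_algorithms, d_model)
        self.pos_embed = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)  # one selection score per algorithm

    def forward(self, curves: torch.Tensor, pad_mask: torch.Tensor) -> torch.Tensor:
        # curves:   (n_algorithms, max_len, 1), zero-padded partial learning curves
        # pad_mask: (n_algorithms, max_len), True where an entry is padding
        n_algos, seq_len, _ = curves.shape
        algo_ids = torch.arange(n_algos).unsqueeze(1).expand(n_algos, seq_len)
        pos_ids = torch.arange(seq_len).unsqueeze(0).expand(n_algos, seq_len)
        tokens = (self.value_embed(curves)
                  + self.algo_embed(algo_ids)
                  + self.pos_embed(pos_ids))
        # The padding mask lets curves of different lengths be interpreted jointly.
        encoded = self.encoder(tokens, src_key_padding_mask=pad_mask)
        # Mean-pool only over the observed (non-padded) fidelity steps of each curve.
        keep = (~pad_mask).unsqueeze(-1).float()
        pooled = (encoded * keep).sum(dim=1) / keep.sum(dim=1).clamp(min=1.0)
        return self.head(pooled).squeeze(-1)  # higher score = preferred algorithm


# Usage: three candidates observed for 5, 2 and 1 epochs on a new task.
selector = LearningCurveSelector(n_algorithms=3)
curves = torch.zeros(3, 5, 1)
curves[0, :5, 0] = torch.tensor([0.55, 0.63, 0.70, 0.72, 0.74])
curves[1, :2, 0] = torch.tensor([0.60, 0.61])
curves[2, :1, 0] = torch.tensor([0.58])
pad_mask = torch.tensor([[False, False, False, False, False],
                         [False, False, True,  True,  True],
                         [False, True,  True,  True,  True]])
print(selector(curves, pad_mask))  # tensor of 3 selection scores
```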
- Organisation(s)
-
Institute of Information Processing
Machine Learning Section
- External Organisation(s)
-
University of Freiburg
- Type
- Preprint
- Publication date
- 02.12.2022
- Publication status
- Published