Publication details

Learning Activation Functions for Sparse Neural Networks

Authored by
Mohammed Loni, Aditya Mohan, Mehdi Asadi, Marius Lindauer
Abstract

Sparse Neural Networks (SNNs) can potentially demonstrate similar performance to their dense counterparts while saving significant energy and memory at inference. However, the accuracy drop incurred by SNNs, especially at high pruning ratios, can be an issue in critical deployment conditions. While recent works mitigate this issue through sophisticated pruning techniques, we shift our focus to an overlooked factor: hyperparameters and activation functions. Our analyses have shown that the accuracy drop can additionally be attributed to (i) using ReLU uniformly as the default activation function, and (ii) fine-tuning SNNs with the same hyperparameters as their dense counterparts. Thus, we focus on learning novel activation functions for sparse networks and combining these with a separate hyperparameter optimization (HPO) regime for sparse networks. By conducting experiments on popular DNN models (VGG-16, ResNet-18, and EfficientNet-B0) trained on the CIFAR-10 and ImageNet-16 datasets, we show that the novel combination of these two approaches, dubbed Sparse Activation Function Search (SAFS for short), results in up to 8.88% and 6.33% absolute improvement in accuracy for VGG-16 and ResNet-18 over the default training protocols, especially at high pruning ratios.
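As a rough illustration of the two levers the abstract describes (the choice of activation function and sparse-specific hyperparameters), the following Python/PyTorch sketch magnitude-prunes a VGG-16 and swaps its ReLUs for candidate activations before fine-tuning. It is not the authors' SAFS implementation: the fixed candidate set, the 90% pruning ratio, and the helper functions are assumptions made purely for this example (SAFS learns the activation functions rather than selecting from a list).

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision.models import vgg16

def prune_model(model, amount=0.9):
    # Globally prune `amount` of all conv/linear weights by L1 magnitude.
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, (nn.Conv2d, nn.Linear))]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=amount)
    return model

def replace_activations(module, act_factory):
    # Recursively replace every ReLU with a candidate activation module.
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, act_factory())
        else:
            replace_activations(child, act_factory)
    return module

# Hypothetical candidate set; SAFS learns the activation instead of picking from a fixed list.
candidates = {"relu": nn.ReLU, "swish": nn.SiLU, "elu": nn.ELU}

for cand_name, act in candidates.items():
    model = replace_activations(prune_model(vgg16(num_classes=10), amount=0.9), act)
    # At this point the sparse model would be fine-tuned and validated with
    # hyperparameters tuned for the sparse network, not reused from dense training.
    out = model(torch.randn(1, 3, 32, 32))  # CIFAR-10-sized dummy input
    print(cand_name, out.shape)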

Organizational unit(s)
Fachgebiet Automatische Bildinterpretation
Fachgebiet Maschinelles Lernen
Institut für Künstliche Intelligenz
External organization(s)
Mälardalen University (MDH)
Tarbiat Modarres University
Type
Article in conference proceedings
Publication date
16.05.2023
Publication status
Accepted/In press
Peer-reviewed
Yes
Electronic version(s)
https://arxiv.org/abs/2305.10964 (Access: Open)