Hyperparameter tuning is the black art of automatically finding a good combination of control parameters for a data miner. While widely applied in empirical Software Engineering, there has been little discussion of which hyperparameter tuner is best for software analytics. To address this gap in the literature, this paper applied a range of hyperparameter optimizers (grid search, random search, differential evolution, and Bayesian optimization) to a defect prediction problem. Surprisingly, no hyperparameter optimizer was observed to be "best" and, for one of the two evaluation measures studied here (the F-measure), hyperparameter optimization was no better than using default configurations in 50% of cases.
We conclude that hyperparameter optimization is more nuanced than previously believed. While such optimization can certainly lead to large improvements in the performance of classifiers used in software analytics, it remains to be seen which specific optimizers should be applied to a new dataset.
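The comparison described in the abstract can be reproduced in outline with standard tooling. The following is a minimal sketch, not the paper's exact setup: it contrasts a default classifier configuration against one tuned by random search, scored by F-measure. It assumes scikit-learn, and the synthetic dataset is a stand-in for a real defect dataset.

```python
# Sketch: default configuration vs. random-search tuning for a
# defect-prediction-style binary classifier, evaluated by F-measure.
# Assumptions: scikit-learn is available; the data below is synthetic,
# standing in for a real defect dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.metrics import f1_score

# Synthetic "defect" data: rows are modules, label 1 = defective (minority class).
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.8, 0.2], random_state=1)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=1)

# Baseline: out-of-the-box defaults.
default_clf = RandomForestClassifier(random_state=1).fit(X_train, y_train)
default_f1 = f1_score(y_test, default_clf.predict(X_test))

# Tuned: random search over a small, illustrative parameter space.
param_space = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 5, 10, 20],
    "min_samples_leaf": [1, 2, 4, 8],
}
search = RandomizedSearchCV(RandomForestClassifier(random_state=1),
                            param_space, n_iter=20, scoring="f1",
                            cv=5, random_state=1)
search.fit(X_train, y_train)
tuned_f1 = f1_score(y_test, search.predict(X_test))

print(f"default F-measure: {default_f1:.3f}")
print(f"tuned   F-measure: {tuned_f1:.3f}")
```

Swapping `RandomizedSearchCV` for `GridSearchCV`, or for an external optimizer such as differential evolution or a Bayesian method, gives the kind of head-to-head comparison the paper reports; whether tuning beats the defaults depends on the dataset and the evaluation measure.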
Mon 5 Nov (displayed time zone: Guadalajara, Mexico City, Monterrey)
11:15 - 12:00

11:15 (22m) Talk: Facilitating Feasibility Analysis: The Pilot Defects Prediction Dataset Maker (SWAN)
  Davide Falessi (California Polytechnic State University), Max Jason Moede (California Polytechnic State University, USA)

11:37 (22m) Talk: Is One Hyperparameter Optimizer Enough? (SWAN)