Hyperparameter optimization benchmarking #643
Description
It'd be nice to have some benchmarks for how our different hyperparameter optimizers perform. There are a few comparisons that would be useful:
1. dask_ml's drop-in replacements for `GridSearchCV` and `RandomizedSearchCV`. We're able to deconstruct `Pipeline` objects to avoid redundant fit calls. This benchmark would compare a `GridSearchCV(Pipeline(...))` for `dask_ml.model_selection.GridSearchCV` and `sklearn.model_selection.GridSearchCV`. We'd expect Dask-ML's to perform better the more CV splits there are and the more parameters that are explored early on in the pipeline (Example of task-graph speed-up? #141 has some discussion).
2. Scaling of Dask's joblib backend for large problems. Internally, scikit-learn uses joblib for parallel for loops. With

   ```python
   with joblib.parallel_backend("dask"):
       ...
   ```

   the items in the for loop are executed on the Dask cluster. There are some issues with the backend (joblib/joblib#1020, joblib/joblib#1025). Fixing those isn't in scope for this work, but we'd like to have benchmarks to understand the current performance and measure the speedup from fixing them.
3. General performance on large datasets with `Incremental`, `Hyperband`, etc. We can't really compare to scikit-learn here, since it doesn't handle larger-than-memory datasets. @stsievert may have some thoughts / benchmarks to share here.
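To make the redundant-fit problem in item 1 concrete, here is a small scikit-learn-only sketch. The `CountingScaler` transformer is a made-up helper used purely to count fit calls; it shows that plain `sklearn.model_selection.GridSearchCV` refits every pipeline step for every (candidate, split) pair, even when an early step's parameters never change — exactly the work the Dask-ML graph deconstruction is meant to avoid.

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline


class CountingScaler(BaseEstimator, TransformerMixin):
    """Hypothetical pass-through transformer that counts fit calls."""

    n_fits = 0  # class-level counter survives the clones GridSearchCV makes

    def fit(self, X, y=None):
        CountingScaler.n_fits += 1
        return self

    def transform(self, X):
        return X


rng = np.random.RandomState(0)
X = rng.randn(60, 4)
y = (X[:, 0] > 0).astype(int)

pipe = Pipeline([("scale", CountingScaler()), ("clf", LogisticRegression())])
grid = {"clf__C": [0.1, 1.0, 10.0]}  # only the *last* step is searched
GridSearchCV(pipe, grid, cv=3).fit(X, y)

# scikit-learn refits the whole pipeline for each (candidate, split) pair,
# plus one final refit on the full data: 3 candidates * 3 splits + 1 = 10
# scaler fits, even though the scaler's parameters never change.
print(CountingScaler.n_fits)  # 10
```

With the Dask-ML version, the `scale` step would be fit once per CV split and shared across all three values of `C`.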
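For item 2, the benchmark shape is just scikit-learn's own joblib-driven parallelism routed through a different backend. A minimal sketch, using the built-in `"threading"` backend so it runs without a cluster — against a real deployment you would instead create a `distributed.Client` and use `joblib.parallel_backend("dask")` so the per-item fits land on the cluster:

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Each of the 8 trees is an independent fit dispatched through joblib,
# so the active backend decides where that work actually runs.
clf = RandomForestClassifier(n_estimators=8, n_jobs=-1, random_state=0)

# Swap "threading" for "dask" (with a distributed.Client running) to send
# the per-tree fits to a Dask cluster instead of local threads.
with joblib.parallel_backend("threading"):
    clf.fit(X, y)

print(len(clf.estimators_))  # 8
```

The benchmark would time the same `fit` under the default backend versus the Dask backend at increasing problem sizes and worker counts.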
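The larger-than-memory tools in item 3 build on estimators that expose `partial_fit`. A rough sketch of that mechanism, simulating chunked streaming with an in-memory array split into blocks (with dask_ml you would pass a dask array to something like `Incremental(SGDClassifier())` instead of looping by hand):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
X = rng.randn(1000, 5)
y = (X[:, 0] + 0.1 * rng.randn(1000) > 0).astype(int)

# Stream the data to the estimator in ten 100-row chunks; only one chunk
# needs to be in memory at a time, which is what lets Incremental/Hyperband
# scale past RAM.
clf = SGDClassifier(random_state=0)
for start in range(0, 1000, 100):
    Xc, yc = X[start:start + 100], y[start:start + 100]
    clf.partial_fit(Xc, yc, classes=np.array([0, 1]))

print(clf.score(X, y))
```

Since scikit-learn has no equivalent for out-of-core search, these benchmarks would mostly track scaling (time and memory versus dataset size) rather than a head-to-head comparison.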
cc @dankerrigan. This is more than enough work, I think. If you're able to make progress on any of these (or other things you think are important), it'd be great.