
Hyperparameter optimization benchmarking #643

@TomAugspurger


It'd be nice to have some benchmarks for how our different hyperparameter optimizers perform. A few comparisons would be useful:

  1. dask_ml's drop-in replacements for GridSearchCV and RandomizedSearchCV. We're able to deconstruct Pipeline objects to avoid redundant fit calls. This benchmark would compare fitting a GridSearchCV(Pipeline(...)) with dask_ml.model_selection.GridSearchCV versus sklearn.model_selection.GridSearchCV. We'd expect Dask-ML to perform better as the number of CV splits grows and as more of the explored parameters sit early in the pipeline (#141, "Example of task-graph speed-up?", has some discussion).
  2. Scaling of Dask's joblib backend for large problems. Internally, scikit-learn uses joblib for parallel for loops. With

import joblib

with joblib.parallel_backend("dask"):
    ...

the items in the for loop are executed on the Dask cluster. There are some issues with the backend (joblib/joblib#1020, joblib/joblib#1025). Fixing those isn't in scope for this work, but we'd like benchmarks to understand the current performance and to measure the speedup from fixing them.
  3. General performance on large datasets with Incremental, Hyperband, etc. We can't really compare to scikit-learn here, since it doesn't handle larger-than-memory datasets. @stsievert may have some thoughts / benchmarks to share here.
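For item 1, a minimal benchmark sketch might look like the following. The class names are the real scikit-learn / dask-ml APIs, but the dataset size, pipeline, and parameter grid are illustrative choices, and the dask-ml half is shown commented out since it's a separate install:

```python
from time import perf_counter

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV as SkGridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])
# Parameters early in the pipeline ("scale") are where dask-ml's graph
# deconstruction should avoid redundant fits of later steps.
grid = {"scale__with_mean": [True, False], "clf__C": [0.1, 1.0, 10.0]}

start = perf_counter()
search = SkGridSearchCV(pipe, grid, cv=5).fit(X, y)
print("scikit-learn:", perf_counter() - start, "s")

# The dask-ml version is a drop-in replacement for the line above:
# from dask_ml.model_selection import GridSearchCV as DaskGridSearchCV
# search = DaskGridSearchCV(pipe, grid, cv=5).fit(X, y)
```

Varying `cv` and the size of the `scale__*` portion of the grid would exercise the expected speed-up directly.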
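For item 2, a sketch of the joblib-backend comparison, assuming a running dask.distributed cluster for the Dask half (commented out here); the model and data sizes are illustrative:

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0)

# Default backend: joblib runs the CV fits in local threads/processes.
scores = cross_val_score(model, X, y, cv=3)
print(scores)

# Dask backend: the same loop ships its tasks to cluster workers instead.
# from dask.distributed import Client
# client = Client()  # or Client("scheduler-address:8786")
# with joblib.parallel_backend("dask"):
#     scores = cross_val_score(model, X, y, cv=3)
```

Timing the same `cross_val_score` call under both backends, at growing problem sizes, would show where the Dask backend's overheads (the linked joblib issues) start to dominate.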
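For item 3, the idea can be illustrated without a cluster: train incrementally on batches that never co-reside in memory. The chunked NumPy loop below is a stand-in for what dask-ml's Incremental / HyperbandSearchCV (real APIs, shown commented) do over dask arrays; the batch sizes and synthetic labels are illustrative:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])

# Stream the data in chunks, as Dask would with a larger-than-memory array;
# each chunk is seen once via partial_fit.
for _ in range(10):
    Xb = rng.normal(size=(100, 5))
    yb = (Xb[:, 0] > 0).astype(int)  # label depends only on feature 0
    model.partial_fit(Xb, yb, classes=classes)

# With dask-ml, the same idea over dask arrays (not run here):
# from dask_ml.model_selection import HyperbandSearchCV
# search = HyperbandSearchCV(SGDClassifier(), {"alpha": [1e-4, 1e-3, 1e-2]})
# search.fit(X_dask, y_dask, classes=classes)
```

Since scikit-learn alone has no analogue here, the benchmark would track scaling (dataset size vs. wall time and score) rather than a head-to-head comparison.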

cc @dankerrigan. This is more than enough work, I think. If you're able to make progress on any of these (or on other things you think are important), that'd be great.
