[MRG] accelerate plot_randomized_search.py example #21598 (#21637)
Merged
TomDLT merged 1 commit into scikit-learn:main on Nov 16, 2021
Conversation
adrinjalali (Member) approved these changes on Nov 15, 2021:
Thanks @sply88!
Commits referencing this pull request:
- glemaitre pushed a commit to glemaitre/scikit-learn on Nov 22, 2021
- glemaitre pushed a commit to glemaitre/scikit-learn on Nov 29, 2021
- samronsin pushed a commit to samronsin/scikit-learn on Nov 30, 2021
- glemaitre pushed a commit to glemaitre/scikit-learn on Dec 24, 2021
- glemaitre pushed a commit on Dec 25, 2021
Speeds up ../examples/model_selection/plot_randomized_search.py (issue #21598). The example now runs in under 7 seconds, previously 3-4 minutes.
Output now:
RandomizedSearchCV took 1.20 seconds for 15 candidates parameter settings.
Model with rank: 1
Mean validation score: 0.985 (std: 0.017)
Parameters: {'alpha': 0.42727143483962227, 'average': False, 'l1_ratio': 0.21100288847068727}
Model with rank: 2
Mean validation score: 0.985 (std: 0.015)
Parameters: {'alpha': 0.01923065446247222, 'average': False, 'l1_ratio': 0.4570654700132295}
Model with rank: 3
Mean validation score: 0.985 (std: 0.013)
Parameters: {'alpha': 0.18482179083229622, 'average': False, 'l1_ratio': 0.9627445435714047}
GridSearchCV took 4.71 seconds for 60 candidate parameter settings.
Model with rank: 1
Mean validation score: 0.993 (std: 0.004)
Parameters: {'alpha': 1.0, 'average': False, 'l1_ratio': 0.0}
Model with rank: 2
Mean validation score: 0.989 (std: 0.014)
Parameters: {'alpha': 0.01, 'average': False, 'l1_ratio': 0.0}
Model with rank: 2
Mean validation score: 0.989 (std: 0.014)
Parameters: {'alpha': 0.01, 'average': False, 'l1_ratio': 0.2222222222222222}
Previous output:
RandomizedSearchCV took 37.23 seconds for 20 candidates parameter settings.
Model with rank: 1
Mean validation score: 0.920 (std: 0.028)
Parameters: {'alpha': 0.07316411520495676, 'average': False, 'l1_ratio': 0.29007760721044407}
Model with rank: 2
Mean validation score: 0.920 (std: 0.029)
Parameters: {'alpha': 0.0005223493320259539, 'average': True, 'l1_ratio': 0.7936977033574206}
Model with rank: 3
Mean validation score: 0.918 (std: 0.031)
Parameters: {'alpha': 0.00025790124268693137, 'average': True, 'l1_ratio': 0.5699649107012649}
GridSearchCV took 187.33 seconds for 100 candidate parameter settings.
Model with rank: 1
Mean validation score: 0.931 (std: 0.026)
Parameters: {'alpha': 0.0001, 'average': True, 'l1_ratio': 0.0}
Model with rank: 2
Mean validation score: 0.928 (std: 0.030)
Parameters: {'alpha': 0.0001, 'average': True, 'l1_ratio': 0.1111111111111111}
Model with rank: 3
Mean validation score: 0.927 (std: 0.026)
Parameters: {'alpha': 0.0001, 'average': True, 'l1_ratio': 0.5555555555555556}
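For context, the randomized half of the comparison is roughly the following. This is a hedged sketch, not the actual example script: `n_iter=15` matches the new output, and the `alpha` / `l1_ratio` ranges are illustrative guesses consistent with the parameter values printed above.

```python
from time import time

from scipy.stats import loguniform, uniform
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_digits(return_X_y=True)
clf = SGDClassifier(loss="hinge", penalty="elasticnet", fit_intercept=True)

# Sampling distributions: continuous distributions for alpha and
# l1_ratio, a discrete choice for average.
param_dist = {
    "average": [True, False],
    "l1_ratio": uniform(0, 1),
    "alpha": loguniform(1e-2, 1e0),
}

start = time()
search = RandomizedSearchCV(
    clf, param_distributions=param_dist, n_iter=15, random_state=0
)
search.fit(X, y)
print(
    f"RandomizedSearchCV took {time() - start:.2f} seconds "
    f"for {len(search.cv_results_['params'])} candidates parameter settings."
)
```

Sampling 15 candidates from distributions, instead of exhaustively evaluating a grid, is what keeps the randomized search well under the grid search's runtime while finding comparable scores.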