[MRG+2] MAINT Removing more deprecated stuff for v0.18 #5528
jnothman merged 4 commits into scikit-learn:master
Conversation
Please review @amueller @ogrisel @MechCoder @TomDLT
@property
@deprecated("Attribute ``X_`` is deprecated in version 0.18 and will be"
            " removed in version 0.20.")
Woooops. That is a bad blunder.
Could someone confirm if the deletions/movements in the metrics' tests are correct? @larsmans @jnothman @agramfort?

@lesteve would you be able to take a look?
"slower even when n_samples > n_features. Hence "
"it will be removed in 0.18.",
DeprecationWarning, stacklevel=2)
raise ValueError("Precompute='auto' is not supported for "
precompute='auto' is still mentioned in the ElasticNet docstring above. You can probably remove it.
Indeed the docstring of the class needs to be updated.
I think the error message should just be the following:
if isinstance(self.precompute, six.string_types):
    raise TypeError('Expected precompute=True or precompute=False, '
                    'got precompute=%r' % self.precompute)

DeprecationWarning, stacklevel=2)
if isinstance(self.precompute, six.string_types):
    raise ValueError('precompute should be one of True, False or'
                     ' array-like. Got %s' % self.precompute)
%r is slightly preferable in this case because it will quote strings.
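A minimal illustration of the difference (plain Python, with a hypothetical offending value just to show the quoting):

```python
# %s formats with str(); %r formats with repr(), which quotes
# strings, so the error message makes clear the value was a string.
precompute = 'auto'  # hypothetical offending value
print('Got %s' % precompute)  # Got auto
print('Got %r' % precompute)  # Got 'auto'
```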
"jaccard_similarity_score", "unnormalized_jaccard_similarity_score",
"zero_one_loss", "unnormalized_zero_one_loss",
"precision_score", "recall_score", "f1_score", "f2_score", "f0.5_score",
Sorry for the misunderstanding, but why have you removed this line?
precision_score is called with the default 'binary' averaging here. Previously this was silently converted to 'weighted' for non-binary targets. Now that we are removing this behavior, this metric is no longer compatible with multilabel data.
So I'm removing it from this list to prevent it from getting tested with multilabel data.
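To illustrate the distinction (a hand-rolled sketch, not scikit-learn's implementation): 'binary' scores a single positive class, while 'weighted' averages per-class precision weighted by support, which is why 'binary' has no well-defined meaning for multiclass or multilabel targets.

```python
def precision_binary(y_true, y_pred, pos_label=1):
    # Precision for the positive class only: TP / (TP + FP).
    tp = sum(1 for t, p in zip(y_true, y_pred)
             if p == pos_label and t == pos_label)
    predicted_pos = sum(1 for p in y_pred if p == pos_label)
    return tp / predicted_pos if predicted_pos else 0.0

def precision_weighted(y_true, y_pred):
    # Average of per-class precision, weighted by each class's support.
    score = 0.0
    for c in set(y_true):
        support = sum(1 for t in y_true if t == c)
        score += (support / len(y_true)) * precision_binary(y_true, y_pred,
                                                            pos_label=c)
    return score

# A multiclass target: no single positive class, so 'binary' is
# ill-defined, while 'weighted' still makes sense.
y_true = [0, 1, 2, 2, 1]
y_pred = [0, 1, 1, 2, 2]
print(round(precision_weighted(y_true, y_pred), 6))  # 0.6
```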
I also have to edit this - http://scikit-learn.org/dev/modules/model_evaluation.html#from-binary-to-multiclass-and-multilabel

@TomDLT I'm retaining your +1 anyway ;)

LGTM too.
raise ValueError('The average is chosen as binary, but the '
                 'pos_label parameter is None')

if y_type == 'binary' and pos_label is not None and average is not None:
I think what you want here is simply if y_type == 'binary' and average == 'binary' (i.e. an else to the above ValueError case)
@raghavrv, can we please split the non-P/R/F stuff out of this PR, so that it can be merged without discussion, then fix the P/R/F stuff with focus?
Done! review please ;)
# Precompute = 'auto' is not supported for ElasticNet
assert_raises_regex(ValueError, ".*should be.*True.*False.*array-like.*"
                    "Got 'auto'", ElasticNet(precompute='auto').fit, X, y)
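The same pattern using only the standard library (a sketch with a stand-in class, since assert_raises_regex is a scikit-learn test utility and the real ElasticNet.fit does much more):

```python
import unittest


class FakeElasticNet:
    # Hypothetical stand-in for ElasticNet, only to demonstrate
    # matching the error message raised for string precompute values.
    def __init__(self, precompute=False):
        self.precompute = precompute

    def fit(self, X, y):
        if isinstance(self.precompute, str):
            raise ValueError("precompute should be one of True, False or"
                             " array-like. Got %r" % self.precompute)
        return self


class TestPrecompute(unittest.TestCase):
    def test_auto_rejected(self):
        # assertRaisesRegex checks both the exception type and that
        # its message matches the given regular expression.
        with self.assertRaisesRegex(ValueError,
                                    r"should be.*True.*False.*array-like"):
            FakeElasticNet(precompute='auto').fit(None, None)
```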
Why not in the iterated list above?
The error messages differ a bit. In the previous case the message should say that 'auto' is one of the valid options; in the current case it is not accepted anymore...
otherwise, LGTM

Thanks @raghavrv. Valuable work towards a release.

Thanks for the review and merge!
* MNT Removing more deprecated stuff for v0.18
* MNT+TST Remove support for loss='l2'
* MNT Remove support for precompute='auto' for ElasticNet
* FIX/TST precompute=auto should raise a generic message
Fully fixes #5434
fixed_vocabulary_, ElasticNet, loss='l2' in l1_min_c