
[MRG+2] FIX: force consistency outputs of DummyClassifier's predict_proba when strategy is stratified#13266

Merged
glemaitre merged 3 commits into scikit-learn:master from chkoar:fix_dummy_classifier_probas_dtype on Feb 26, 2019

Conversation

@chkoar (Contributor) commented Feb 25, 2019

What does this implement/fix? Explain your changes.

The DummyClassifier's predict_proba method returns a float64 array for all strategies except the stratified strategy, which returns an int32 array.
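The mismatch can be checked directly. This minimal sketch (assuming scikit-learn and NumPy are installed) prints the dtype that predict_proba returns for each strategy; with this fix applied, every strategy yields float64:

```python
import numpy as np
from sklearn.dummy import DummyClassifier

X = [[0]] * 4
y = [0, 2, 1, 1]

for strategy in ["most_frequent", "prior", "stratified", "uniform", "constant"]:
    clf = DummyClassifier(strategy=strategy, random_state=0, constant=0)
    # Before this fix, the "stratified" strategy returned an integer array here.
    probas = clf.fit(X, y).predict_proba(X)
    print(strategy, probas.dtype)
```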

@glemaitre (Member) left a comment

LGTM. Could you add an entry to the what's new, since it impacts the end user?

model.fit(X, y)
probas = model.predict_proba(X)

assert probas.dtype == np.float

Suggested change
assert probas.dtype == np.float
assert probas.dtype == np.float64
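The suggestion matters beyond style: np.float was only an alias for the builtin float (deprecated in NumPy 1.20 and removed in NumPy 1.24), so asserting against the concrete np.float64 dtype is both more precise and future-proof. A small illustration:

```python
import numpy as np

probas = np.array([[0.25, 0.50, 0.25]])

# np.float64 is a concrete dtype object; the old np.float alias no longer
# exists in modern NumPy, so this comparison keeps working across versions.
assert probas.dtype == np.float64
```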

y = [0, 2, 1, 1]
X = [[0]] * 4
model = DummyClassifier(strategy=strategy, random_state=0, constant=0)
model.fit(X, y)

Suggested change
model.fit(X, y)
probas = model.fit(X, y).predict_proba(X)

])
def test_dtype_of_classifier_probas(strategy):
y = [0, 2, 1, 1]
X = [[0]] * 4

Suggested change
X = [[0]] * 4
X = np.zeros(4)

assert_array_equal(predictions1, predictions2)


@pytest.mark.parametrize("strategy", [

Suggested change
@pytest.mark.parametrize("strategy", [
@pytest.mark.parametrize(
"strategy", ["stratified", "most_frequent", "prior", "uniform", "constant"]
)

If it fits within 80 characters, it would be more compact.
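Folding the reviewer suggestions together, the resulting test could look like the sketch below (the exact code in the merged commit may differ slightly):

```python
import numpy as np
import pytest

from sklearn.dummy import DummyClassifier


@pytest.mark.parametrize(
    "strategy", ["stratified", "most_frequent", "prior", "uniform", "constant"]
)
def test_dtype_of_classifier_probas(strategy):
    y = [0, 2, 1, 1]
    X = np.zeros(4)  # DummyClassifier ignores the feature values
    model = DummyClassifier(strategy=strategy, random_state=0, constant=0)
    probas = model.fit(X, y).predict_proba(X)
    assert probas.dtype == np.float64
```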

:user:`William de Vazelhes<wdevazelhes>`.

:mod:`sklearn.dummy`
....................................

Suggested change
....................................
....................

@glemaitre (Member)

LGTM apart from the long line in the what's new.

@amueller (Member) left a comment

lgtm

@agramfort agramfort changed the title Fix dtype of DummyClassifier's predict_proba when strategy is stratified [MRG+2] Fix dtype of DummyClassifier's predict_proba when strategy is stratified Feb 26, 2019
@glemaitre glemaitre changed the title [MRG+2] Fix dtype of DummyClassifier's predict_proba when strategy is stratified [MRG+2] FIX: force consistency outputs of DummyClassifier's predict_proba when strategy is stratified Feb 26, 2019
@glemaitre glemaitre merged commit eb011b8 into scikit-learn:master Feb 26, 2019
@glemaitre (Member) commented Feb 26, 2019 via email

xhluca pushed a commit to xhluca/scikit-learn that referenced this pull request Apr 28, 2019
koenvandevelde pushed a commit to koenvandevelde/scikit-learn that referenced this pull request Jul 12, 2019

Labels: none yet
Projects: none yet
4 participants