[MRG+1] Adds multilabel permutation tests to metrics/test_common (#12803)
Conversation
sklearn/metrics/tests/test_common.py
Outdated
    score = metric(y_true, y_score)

    for perm in permutations(range(n_classes), n_classes):
        inv_perm = np.zeros(n_classes, dtype=int)
Awesome! Any reason you first do the inv_perm instead of directly using the permutations? Would it change the test at all if you're going through all permutations?
Good point! This PR was updated to use the permutations directly.
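The two approaches are interchangeable here: when the test loops over every permutation of the classes, each inverse permutation is itself one of the enumerated permutations, so the set of column shuffles produced is identical either way. A minimal sketch of this equivalence (illustrative, not the PR's exact code):

```python
from itertools import permutations

import numpy as np

n_classes = 3
rng = np.random.RandomState(0)
y_score = rng.normal(size=(5, n_classes))

via_inverse = set()
direct = set()
for perm in permutations(range(n_classes)):
    # Original approach: build the inverse permutation, then index with it.
    inv_perm = np.zeros(n_classes, dtype=int)
    inv_perm[list(perm)] = np.arange(n_classes)
    via_inverse.add(tuple(y_score[:, inv_perm].ravel()))
    # Direct approach: index the columns with the permutation itself.
    direct.add(tuple(y_score[:, list(perm)].ravel()))

# Over all permutations, both approaches generate the same shuffles.
assert via_inverse == direct
```

Per individual permutation the two shuffles differ, but since the full group is enumerated, the test covers the same cases either way.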
adrinjalali left a comment
LGTM, thanks @thomasjpfan!
sklearn/metrics/tests/test_common.py
Outdated
    @pytest.mark.parametrize(
        'name', MULTILABELS_METRICS - NOT_SYMMETRIC_METRICS)
It does not need it. This was a little too restrictive.
This PR was updated to remove "unnormalized_multilabel_confusion_matrix" from this test. That metric returns a matrix rather than a scalar, which makes its result dependent on the order of the labels.
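The order dependence is easy to demonstrate: a matrix-valued metric stacks one confusion matrix per class, so permuting the label columns permutes the stack. A sketch using scikit-learn's public `multilabel_confusion_matrix` (the public counterpart of the test-suite name above; illustrative only):

```python
import numpy as np
from sklearn.metrics import multilabel_confusion_matrix

y_true = np.array([[1, 0, 1],
                   [0, 1, 0]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 1]])

perm = [2, 0, 1]
original = multilabel_confusion_matrix(y_true, y_pred)
permuted = multilabel_confusion_matrix(y_true[:, perm], y_pred[:, perm])

# The (n_classes, 2, 2) stack follows the column order, so the result
# is not invariant under a label permutation...
assert not np.array_equal(original, permuted)
# ...it is exactly the original stack, reordered.
assert np.array_equal(permuted, original[perm])
```

A scalar metric such as accuracy has no such per-class axis, which is why only scalar-valued symmetric metrics qualify for the permutation-invariance test.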
sklearn/metrics/tests/test_common.py
Outdated
    y_true = random_state.randint(0, 2, size=(n_samples, n_classes))
    y_score = random_state.normal(size=y_true.shape)

    # Makes sure all samples have multiple classes. This works around errors
"multiple classes" -> at least one label
?
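The comment describes a common workaround: some multilabel metrics raise or warn on samples with no positive labels, so the random indicator matrix is patched before use. One way to sketch it (hypothetical helper code, not the PR's exact fix):

```python
import numpy as np

rng = np.random.RandomState(0)
n_samples, n_classes = 10, 4
y_true = rng.randint(0, 2, size=(n_samples, n_classes))

# Work around metrics that error on samples with no positive labels:
# give every all-zero row a single random positive label.
empty_rows = ~y_true.any(axis=1)
y_true[empty_rows, rng.randint(n_classes, size=empty_rows.sum())] = 1

# Every sample now has at least one label.
assert y_true.any(axis=1).all()
```

Crucially, the patch is applied once, before the permutation loop, so all permuted copies see the same labels and the invariance check stays valid.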
Otherwise LGTM.

thanks for all your great work, @thomasjpfan!
Reference Issues/PRs
Fixes #12309 for multilabel and multioutput
What does this implement/fix? Explain your changes.
Adds tests to check that multilabel and multioutput metrics are invariant under label permutations.
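The shape of such a test: compute the metric once, then assert the score is unchanged when the label columns of both arrays are permuted consistently. A self-contained sketch using `hamming_loss` as one symmetric multilabel metric (chosen for illustration; the PR parametrizes over many metrics):

```python
from itertools import permutations

import numpy as np
from sklearn.metrics import hamming_loss

rng = np.random.RandomState(0)
n_samples, n_classes = 20, 3
y_true = rng.randint(0, 2, size=(n_samples, n_classes))
y_pred = rng.randint(0, 2, size=(n_samples, n_classes))

score = hamming_loss(y_true, y_pred)
for perm in permutations(range(n_classes)):
    cols = list(perm)
    # Relabeling the classes must not change a symmetric scalar metric.
    assert np.isclose(hamming_loss(y_true[:, cols], y_pred[:, cols]), score)
```

Metrics whose value depends on label order (e.g. matrix-valued ones) are excluded from this check, as discussed in the review above.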