[MRG] Multilabel-indicator roc auc and average precision#2460
ogrisel merged 35 commits into scikit-learn:master from
Conversation
doc/modules/model_evaluation.rst
binary decisions value => binary decision values?
Thanks @ogrisel for reviewing !!!
The message was wrong, but there is still a zero division error (undefined metrics problem) if there is only one sample, or if y_true is constant, or if one element of y_true is exactly equal to its mean, isn't there?
This case is already handled in the function. If both the denominator and the numerator are zero, then the score is 1. If the denominator is zero and the numerator is non-zero, then the score is 0.
This makes r2_score behave like explained_variance_score.
Good, I could not see it from the diff view in GitHub.
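The convention discussed above can be sketched as follows. This is a hypothetical standalone illustration of the rule, not scikit-learn's actual r2_score implementation, and the function name is made up for this example:

```python
import numpy as np

def r2_with_zero_denominator_convention(y_true, y_pred):
    # Hypothetical sketch of the convention discussed above,
    # not scikit-learn's actual r2_score code.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    numerator = ((y_true - y_pred) ** 2).sum()           # residual sum of squares
    denominator = ((y_true - y_true.mean()) ** 2).sum()  # total sum of squares
    if denominator == 0.0:
        # Constant y_true: a perfect prediction scores 1, anything else 0,
        # instead of raising a zero division error.
        return 1.0 if numerator == 0.0 else 0.0
    return 1.0 - numerator / denominator

print(r2_with_zero_denominator_convention([2.0, 2.0], [2.0, 2.0]))  # 1.0
print(r2_with_zero_denominator_convention([2.0, 2.0], [1.0, 3.0]))  # 0.0
```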
I just had a quick look; I don't have time to review it in more depth right now. Could you please put a PNG rendering of the new plots in the PR description?
Which new plot? There is no new plot at the moment.
I thought the ROC example was updated to demonstrate averaging. I think it should :)
Could you please add a couple of tests for the various averaging cases on very tiny (minimalist) inline-defined multi-label datasets that could be checked by computing the expected output manually?
Good point!
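Such a test could look like the following sketch. The dataset and the expected values are my own hand-computed illustration, assuming the averaging API matches today's roc_auc_score signature:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Tiny hand-checkable multilabel-indicator dataset (illustrative values).
y_true = np.array([[1, 0],
                   [0, 1],
                   [1, 1]])
y_score = np.array([[0.9, 0.5],
                    [0.1, 0.8],
                    [0.4, 0.3]])

# Per-label AUCs, computed by hand from the ranked (positive, negative) pairs:
# label 0: positives {0.9, 0.4} both above the negative 0.1 -> AUC = 1.0
# label 1: positive 0.8 above 0.5, positive 0.3 below 0.5   -> AUC = 0.5
macro = roc_auc_score(y_true, y_score, average="macro")  # (1.0 + 0.5) / 2 = 0.75
micro = roc_auc_score(y_true, y_score, average="micro")  # pooled cells: 6 of 8 pairs ordered correctly

assert np.isclose(macro, 0.75)
assert np.isclose(micro, 0.75)
```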
@ogrisel I have updated the examples for ROC curves and precision-recall curves. Here are the generated plots:
I have added some tests on toy data for multilabel-indicator data. |
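For reference, a micro-averaged precision-recall curve over multilabel-indicator data can be sketched like this. The data is made up for illustration; the calls assume the current precision_recall_curve and average_precision_score APIs:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, average_precision_score

# Hypothetical multilabel ground truth (indicator matrix) and scores.
Y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])
Y_score = np.array([[0.8, 0.1, 0.6],
                    [0.3, 0.9, 0.2],
                    [0.7, 0.6, 0.1],
                    [0.2, 0.4, 0.8]])

# Micro-averaging treats each (sample, label) cell as one binary decision,
# so the curve is computed on the flattened arrays.
precision, recall, _ = precision_recall_curve(Y_true.ravel(), Y_score.ravel())
ap_micro = average_precision_score(Y_true, Y_score, average="micro")
```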
examples/plot_precision_recall.py
Thanks @glouppe !!!
Just rebased on top of master!
I had a quick look through - all seems well to me. Nice work.
@jaquesgrobler Thanks for reviewing !!!
Could you please run a test coverage report and paste the relevant lines here? (And also add more tests if this report highlights uncovered options / blocks / exceptions...) :)
Current code coverage: all missing lines in
Now I have 100% coverage for the code related to this PR.
Rebased on top of master. I will update the what's new entry when it's merged.
Merging!
Thanks, I am working on fixing the Jenkins build.
I think I fixed the Python 3 issue. No idea about the numpy 1.3.1 issue.
The goal of this PR is to add multilabel-indicator support with various types of averaging for roc_auc_score and average_precision_score.

Still to do:
- roc_auc_score
- average_precision_score

A priori, I won't add ranking-based average_precision_score. I don't want to add support for the multilabel-sequence format.
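Assuming the averaging options ended up matching today's scikit-learn API ("micro", "macro", "weighted", "samples"), their use on multilabel-indicator input would look like the sketch below. The dataset is made up so that every row and every column contains both classes, which keeps all averaging modes well defined:

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

# Illustrative multilabel-indicator targets and continuous scores;
# each row and each column contains both a positive and a negative label.
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])
y_score = np.array([[0.8, 0.2, 0.7],
                    [0.3, 0.9, 0.1],
                    [0.6, 0.7, 0.2],
                    [0.1, 0.3, 0.8]])

scores = {}
for average in ("micro", "macro", "weighted", "samples"):
    scores[average] = (
        roc_auc_score(y_true, y_score, average=average),
        average_precision_score(y_true, y_score, average=average),
    )
```

In this toy data every positive score outranks every negative score along each axis, so all averaging modes agree here; less separable scores would make them diverge.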