Confusion matrix derived metrics #15522
Conversation
Co-authored-by: Divya Dhar <https://github.com/ddhar1>
Co-authored-by: samskruthi padigepati <https://github.com/samskruthireddy>
This requires adding to test_common.py, with the specific behaviours tested in test_classification.py.
glemaitre
left a comment
I am unsure about the naming. I think we need to use the full name (e.g. `true_positive_rate`) instead of the acronym (e.g. `tpr`).
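For context, here is a minimal sketch of the four confusion-matrix-derived rates this PR covers, spelled out under the full names suggested above. The data and variable names are made up for illustration, not taken from the PR:

```python
import numpy as np

# Illustrative binary labels (not from the PR).
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
y_pred = np.array([0, 1, 0, 0, 1, 1, 1, 0])

# The four confusion-matrix cells, computed one-vs-rest for class 1.
tp = np.sum((y_true == 1) & (y_pred == 1))
tn = np.sum((y_true == 0) & (y_pred == 0))
fp = np.sum((y_true == 0) & (y_pred == 1))
fn = np.sum((y_true == 1) & (y_pred == 0))

true_positive_rate = tp / (tp + fn)    # a.k.a. tpr, sensitivity, recall
false_positive_rate = fp / (fp + tn)   # a.k.a. fpr, fall-out
true_negative_rate = tn / (tn + fp)    # a.k.a. tnr, specificity
false_negative_rate = fn / (fn + tp)   # a.k.a. fnr, miss rate
```

The same four cells can also be obtained via `sklearn.metrics.confusion_matrix(y_true, y_pred).ravel()` in the binary case.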
    Ground truth (correct) labels for n_samples samples.
    y_pred : array-like of float, shape = (n_samples, n_classes) or (n_samples,)

Suggested change:
-    y_pred : array-like of float, shape = (n_samples, n_classes)
+    y_pred : array-like of float, shape = (n_samples, n_classes) \
    y_pred : array-like of float, shape = (n_samples, n_classes)
        or (n_samples,)

A suggested change adjusts the indentation of the continuation line `or (n_samples,)`.
haochunchang
left a comment
Good work! @samskruthiReddy
Is this PR a work in progress?
I would like to work on this if it is stalled.
    tpr : float (if average is not None) or array of float, shape =\
        [n_unique_labels]

    fpr : float (if average is not None) or array of float, , shape =\
Just a small extra comma :)
Suggested change:
-    fpr : float (if average is not None) or array of float, , shape =\
+    fpr : float (if average is not None) or array of float, shape =\
    Examples
    --------
    >>> import numpy as np
    >>> from sklearn.metrics import precision_recall_fscore_support
Is this import a dependency of this function? The examples that follow do not use it.
Suggested change (remove the line):
-    >>> from sklearn.metrics import precision_recall_fscore_support
    if average == 'weighted':
        weights = pos_sum
        if weights.sum() == 0:
            zero_division_value = 0.0 if zero_division in ["warn", 0] else 1.0
            # precision is zero_division if there are no positive predictions
            # recall is zero_division if there are no positive labels
            # fscore is zero_division if all labels AND predictions are
            # negative
            return (zero_division_value if pred_sum.sum() == 0 else 0,
                    zero_division_value,
                    zero_division_value if pred_sum.sum() == 0 else 0)
This seems to return only 3 values, while the function is meant to return four (fpr, tpr, fnr, tnr).
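As a hedged illustration of what the zero-division guard above is protecting against (the helper name and exact fallback semantics here are assumptions, not the PR's code): when a rate's denominator is zero, e.g. there are no positive labels at all, the metric is undefined and `zero_division` chooses the fallback value.

```python
def safe_rate(numerator, denominator, zero_division=0.0):
    # Return numerator / denominator, falling back to zero_division
    # when the denominator is zero (e.g. no positive samples exist,
    # so tp + fn == 0 and the true positive rate is undefined).
    if denominator == 0:
        return zero_division
    return numerator / denominator

# All-negative ground truth: tp + fn == 0, so TPR is undefined.
tp, fn = 0, 0
print(safe_rate(tp, tp + fn, zero_division=0.0))  # -> 0.0
print(safe_rate(tp, tp + fn, zero_division=1.0))  # -> 1.0
```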
    alters 'macro' to account for label imbalance; it can result in an
    F-score that is not between precision and recall.
I guess this function does not return an F-score.
Suggested change:
-    alters 'macro' to account for label imbalance; it can result in an
-    F-score that is not between precision and recall.
+    alters 'macro' to account for label imbalance.
    If ``pos_label is None`` and in binary classification, this function
    returns the average precision, recall and F-measure if ``average``
    is one of ``'micro'``, ``'macro'``, ``'weighted'`` or ``'samples'``.
Suggested change:
-    If ``pos_label is None`` and in binary classification, this function
-    returns the average precision, recall and F-measure if ``average``
-    is one of ``'micro'``, ``'macro'``, ``'weighted'`` or ``'samples'``.
+    If ``pos_label is None`` and in binary classification, this function
+    returns the true positive rate, false positive rate, true negative rate
+    and false negative rate if ``average`` is one of ``'micro'``, ``'macro'``,
+    ``'weighted'`` or ``'samples'``.
@haochunchang it's been marked as 'stalled', so I think you can take over.
Modify documentation and add deprecation of position arg.
I have opened a new PR #17265 to take over this stalled PR.
@cmarmo, maybe the label "help wanted" should be removed from this PR. Thanks.
Closing in favor of #19556.
Reference Issues/PRs
Adding Fall-out, Miss rate, specificity as metrics #5516
What does this implement/fix? Explain your changes.
Implemented a function which returns fpr, tpr, fnr, tnr.
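As a hedged sketch of what such a function might compute in the multiclass case (the function name, signature, and return layout here are assumptions; the PR's actual implementation may differ), each class is treated one-vs-rest and the four rates are derived from its confusion-matrix cells:

```python
import numpy as np

def fpr_tpr_fnr_tnr(y_true, y_pred, n_classes):
    # Hypothetical helper: per-class one-vs-rest rates, returned as an
    # array of shape (n_classes, 4) in the order fpr, tpr, fnr, tnr.
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    rates = []
    for c in range(n_classes):
        t = y_true == c   # actual membership in class c
        p = y_pred == c   # predicted membership in class c
        tp = np.sum(t & p)
        fn = np.sum(t & ~p)
        fp = np.sum(~t & p)
        tn = np.sum(~t & ~p)
        rates.append((fp / (fp + tn),   # false positive rate (fall-out)
                      tp / (tp + fn),   # true positive rate (recall)
                      fn / (fn + tp),   # false negative rate (miss rate)
                      tn / (tn + fp)))  # true negative rate (specificity)
    return np.array(rates)
```

Averaging (`'macro'`, `'weighted'`, ...) would then reduce over the class axis, which is where the `zero_division` handling discussed in the review comes in.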
Any other comments?
Implementation of separate functions for each metric, which call this function, is still pending.
Co-authored-by: @ddhar1 @samskruthiReddy