
ENH add CAP curve#28972

Open
JosephBARBIERDARNAL wants to merge 124 commits into scikit-learn:main from JosephBARBIERDARNAL:cap-curve

Conversation

@JosephBARBIERDARNAL
Contributor

@JosephBARBIERDARNAL JosephBARBIERDARNAL commented May 7, 2024

Reference Issue

Fixes #10003.
Supersedes #15176. (edit by @lorentzenchr)

What does this implement/fix?

creation of a CumulativeAccuracyDisplay class for plots

"The CAP of a model represents the cumulative number of positive outcomes along the y-axis versus the corresponding cumulative number of a classifying parameter along the x-axis. The output is called a CAP curve.[1] The CAP is distinct from the receiver operating characteristic (ROC) curve, which plots the true-positive rate against the false-positive rate." (wikipedia definition)

It's mainly inspired by the RocCurveDisplay class.

other

It's currently a work in progress.

TODO

Binary classification

  • raise a ValueError in from_estimator if the estimator is not fitted or is a classifier that was fitted with more than 2 classes;
  • fix pos_label handling for the positive class;
  • add/update a test to check that we have the same result for response_method="decision_function" and response_method="predict_proba" for a LogisticRegression classifier fit with string labels and for all 3 possible values of pos_label;
    • note that this is different from what is already tested in test_display_from_estimator_and_from_prediction;
    • CAP curves should be invariant under order-preserving transformations of the predictions. This is why plotting the CAP from the unnormalized logits or from the logistic-sigmoid-scaled probabilistic predictions should result in the same curve: the logistic sigmoid is strictly monotonic, hence order preserving.
      • joseph --> should be good
  • add an option (enabled by default) to draw CAP curve of the "perfect"/"oracle" model;
  • update the tests to check that the model curves always lie between the "chance level" and "perfect"/"oracle" curves.
  • add a test to check that the display array attributes y_true_cumulative and cumulative_total have the same dtype as y_pred in the test about from_predictions. We can test for y_pred passed either as np.float32 or np.float64.
    • joseph: should be good
  • test that CAPCurveDisplay.from_estimator(LinearSVC().fit(X, y), ...) works (even if it does not have a predict_proba method). This should cover one of the lines reported as uncovered by codecov.
  • leverage test_common_curve_display.py to reuse some generic tests on CAPCurveDisplay and maybe remove redundant tests on invalid inputs from test_cap_curve_display.py if any;
    • joseph: should be good
  • add despine argument?
    • olivier: I would rather not do that specifically in this PR but maybe consider a cross-display PR that does that for all *Display classes in scikit-learn. Feel free to open an issue to discuss this with screenshots, e.g. on ROC or PR curves, and your analysis of pros and cons.
    • joseph: sure. I suggested it here because the ROC and PR curves already have it (see ENH despine keyword for ROC and PR curves #26367). I'm not sure it makes much sense for ConfusionMatrixDisplay (?). I'll open an issue (once this PR is merged) for CAPCurveDisplay, PredictionErrorDisplay and DetCurveDisplay because I think they're the only ones that don't have this option.
    • should be good --> I added the argument, as suggested below in the discussion
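[Editor's note] The invariance property mentioned above can be sketched with a small standalone check. The `cap_curve` helper below is purely illustrative (it is not the PR's implementation): it ranks samples by decreasing score and accumulates the share of positives captured. Because the logistic sigmoid is strictly monotonic, it preserves the ranking and must produce the exact same curve as the raw logits.

```python
import numpy as np

def cap_curve(y_true, y_score):
    # Rank samples by decreasing score, then accumulate the fraction of
    # positives captured as we move down the ranking.
    order = np.argsort(-np.asarray(y_score), kind="stable")
    y_sorted = np.asarray(y_true)[order]
    y_cum = np.cumsum(y_sorted) / y_sorted.sum()
    x = np.arange(1, len(y_sorted) + 1) / len(y_sorted)
    return x, y_cum

rng = np.random.default_rng(0)
scores = rng.normal(size=200)
y = (scores + rng.normal(scale=0.5, size=200) > 0).astype(int)

# The sigmoid is order preserving, so both rankings are identical and
# the two CAP curves coincide exactly.
_, from_logits = cap_curve(y, scores)
_, from_proba = cap_curve(y, 1.0 / (1.0 + np.exp(-scores)))
assert np.allclose(from_logits, from_proba)
```

A test in the PR could follow the same pattern with `decision_function` vs `predict_proba` outputs of a fitted `LogisticRegression`.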

Regression

  • update the docstrings to make it explicit that either regressors (with positive outcomes) or binary classifiers are accepted and anything else is rejected.
    • joseph: I made a first pass, but I guess it can be improved
  • raise a ValueError with an informative error message if y_true has negative values;
    • joseph: it is now tested here
  • raise a ValueError if all y_true are zeros (the plot would be degenerate and would raise a low-level divide-by-zero warning with normalize_scale=True);
    • In theory, this should not happen: if all the y_true values are zeros, the input will be considered a classification problem
  • add tests with continuous outputs on positive observation data (e.g. using PoissonRegressor) and check that the regressor curve lies between the "chance level" and "perfect" curves;
    • joseph: it is now tested here
  • update the insurance model examples (examples/linear_model/plot_tweedie_regression_insurance_claims.py and examples/linear_model/plot_poisson_regression_non_normal_loss.py) to use the CAPCurveDisplay class instead of manually plotting the Lorenz curves.
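[Editor's note] The regression check described above can be sketched by hand with numpy; the code below is an illustrative sketch, not the PR's implementation. The "oracle" curve ranks samples by their true outcome and therefore dominates the curve of any other ranking of the same outcomes.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = rng.poisson(np.exp(X @ np.array([0.5, -0.2, 0.1])))

y_pred = PoissonRegressor().fit(X, y).predict(X)

# Cumulative share of the total outcome, with samples ranked by
# decreasing predicted value ("model") or decreasing true value ("oracle").
frac = np.arange(1, len(y) + 1) / len(y)
model_curve = np.cumsum(y[np.argsort(-y_pred)]) / y.sum()
oracle_curve = np.cumsum(np.sort(y)[::-1]) / y.sum()

# The oracle curve dominates any other ranking of the same outcomes,
# and a fitted model should beat the chance diagonal on average.
assert np.all(model_curve <= oracle_curve + 1e-12)
assert model_curve.mean() > frac.mean()
```

The same dominance argument justifies the test that the display's model curve always lies between the "chance level" and "perfect"/"oracle" curves.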

Other

  • document the new feature with a new entry under doc/whats_new/upcoming_changes/
  • update the user guide (doc/visualization) to reference this new tool.

Nice to have

@github-actions

github-actions bot commented May 7, 2024

✔️ Linting Passed

All linting checks passed. Your pull request is in excellent shape! ☀️

Generated for commit: 788dc82. Link to the linter CI: here

@lorentzenchr lorentzenchr changed the title add cap curve main class and tests ENH add CAP curve May 11, 2024
@lorentzenchr
Member

@JosephBARBIERDARNAL Could you please address all the comments from #28752?

@JosephBARBIERDARNAL
Contributor Author

Yes, I'm ready to do it, but I won't be able to get back to it quickly (2 to 3 months) unfortunately. Hope that's okay

@JosephBARBIERDARNAL
Contributor Author

JosephBARBIERDARNAL commented Oct 6, 2024

Working on this PR again. Here are the current updates:

Some (likely non-exhaustive) issues:

  • Should the figure be square when normalize=False? Currently, it does self.ax_.set([...], aspect="equal") in both cases, so the figure dimensions are "random" (i.e., unpredictable)
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

from sklearn.metrics import CapCurveDisplay

X, y = make_classification(n_samples=1000, n_features=20, n_classes=2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
y_scores = clf.decision_function(X_test)

fig, ax = plt.subplots(ncols=2, dpi=300, figsize=(12,12))

display = CapCurveDisplay.from_predictions(
    ax=ax[0],
    y_true=y_test,
    y_pred=y_scores,
    name='normalize_scale=False',
    normalize_scale=False,
    plot_chance_level=True
)

display = CapCurveDisplay.from_predictions(
    ax=ax[1],
    y_true=y_test,
    y_pred=y_scores,
    name='normalize_scale=True',
    normalize_scale=True,
    plot_chance_level=True
)

[Screenshot: the resulting CAP curves side by side, normalize_scale=False (left) and normalize_scale=True (right)]

@glemaitre glemaitre self-requested a review November 3, 2024 10:24
@glemaitre
Member

I'll come back to this PR after the release. This will be one of my priorities to merge for 1.7.

@glemaitre glemaitre added this to the 1.7 milestone Nov 3, 2024
@JosephBARBIERDARNAL
Contributor Author

I'll come back to this PR after the release. This will be one of my priorities to merge for 1.7.

Ok cool. There are still a few things I need to do anyway before a review (coverage tests and adding the #30023 check).


# compute cumulative sums for true positives and all cases
y_true_cumulative = np.cumsum(y_true_sorted * sample_weight_sorted)
cumulative_total = np.cumsum(sample_weight_sorted)
Member

@ogrisel ogrisel Nov 25, 2024


For information, there was a concurrent PR to fix the missing cumsum of the sample_weight used to define the x-axis of the Lorenz curves in one of our regression examples. To check that this was the correct fix, we ran a quick check on synthetic data with integer-valued sample weights to verify that there is an exact equivalence between repeating data points and reweighting them by exposure:

#30198 (comment)

Maybe this PR could be expanded to test that this property also holds for CapCurveDisplay.from_predictions with non-constant, integer-valued sample_weight.

EDIT: I am not entirely sure how to write such a test, but possibly we could use numpy.interp to evaluate the CAP curve computed on the repeated data at the x-axis locations of the points of the weighted CAP curve, and check that the two curves match up to a small eps at those locations.
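[Editor's note] A sketch of that test, assuming a hypothetical `weighted_cap` helper that computes the curve coordinates from cumulative sums of the sorted weights (this is not the PR's implementation). With integer weights, the weighted curve's points are a subset of the repeated-data curve's points, so interpolating the latter at the former's x locations should match almost exactly.

```python
import numpy as np

def weighted_cap(y_true, y_pred, sample_weight):
    # Sort by decreasing prediction; both axes are weighted cumulative sums.
    order = np.argsort(-np.asarray(y_pred), kind="stable")
    y_s = np.asarray(y_true)[order]
    w_s = np.asarray(sample_weight)[order]
    y_cum = np.cumsum(y_s * w_s) / np.sum(y_s * w_s)
    x_cum = np.cumsum(w_s) / np.sum(w_s)
    return x_cum, y_cum

rng = np.random.default_rng(42)
y_pred = rng.normal(size=50)
y_true = (y_pred + rng.normal(size=50) > 0).astype(int)
w = rng.integers(1, 5, size=50)

# Weighted curve vs. curve on explicitly repeated samples.
x_w, y_w = weighted_cap(y_true, y_pred, w)
y_rep = np.repeat(y_true, w)
p_rep = np.repeat(y_pred, w)
x_r, y_r = weighted_cap(y_rep, p_rep, np.ones_like(y_rep))

# Interpolate the repeated-data curve at the weighted curve's x
# locations and compare.
y_interp = np.interp(x_w, x_r, y_r)
assert np.allclose(y_w, y_interp, atol=1e-12)
```

The same structure should carry over to comparing two `CapCurveDisplay.from_predictions` calls, one with `sample_weight=w` and one on the repeated arrays.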

Member

@ogrisel ogrisel left a comment


There are several review comments in #28752 that have not yet been addressed.

I would like to make sure that we do not forget about them when iterating on this PR.

@JosephBARBIERDARNAL to sort things out, please reply in the threads of the review of #28752 and make it explicit which comments have been addressed in #28972 and how, and then mark those threads as resolved.

Also, we need some tests and to update an existing example where we compare ROC, DET and CAP curves on the same classifier. I suppose this example is the best candidate:

https://scikit-learn.org/stable/auto_examples/model_selection/plot_det.html

@JosephBARBIERDARNAL
Contributor Author

JosephBARBIERDARNAL commented Nov 27, 2024

@ogrisel That sounds good to me. I won't be able to work on it immediately, but I’ll definitely be able to get to it within the next few weeks.

I'll ping you and/or @glemaitre for review

@lorentzenchr
Member

2 things are important for me:

  • Even if this PR focuses on binary classification, it should be implemented in a way such that (positive) regression can be added easily, in particular
    • without API hassle (note that it is predict_proba for classification but predict for regression)
    • with the same class/function
  • The plot should contain the lines for "perfect" and "random" as in the screenshot from the paper in FEA add CumulativeAccuracyDisplay #28752 (comment).

@JosephBARBIERDARNAL
Contributor Author

  • The plot should contain the lines for "perfect" and "random" as in the screenshot from the paper in FEA add CumulativeAccuracyDisplay #28752 (comment).
  • Will this actually be visible? I'm not entirely sure how broad the possible shape of such a line could be. In my mind, it would resemble the "perfect" line in ROC curves, which suggests that this line would almost always span the spines of the Matplotlib Axes (except perhaps in cases with very low sample sizes?). However, we could adjust the x/y axis limits to accommodate this scenario.
  • If we decide to implement it, what should we name it, and what should its default style be?

@JosephBARBIERDARNAL
Contributor Author

JosephBARBIERDARNAL commented Jan 11, 2025

@JosephBARBIERDARNAL to sort things out, please reply in the threads of the review of #28752 and make it explicit which comments have been addressed in #28972 and how, and then mark those threads as resolved.

I resolved most of the conversations there. I didn't touch some of them if I wasn't sure. Feel free to ping me if any changes are needed.

I'm just not sure what "make it explicit which comments have been addressed in https://github.com/scikit-learn/scikit-learn/pull/28972 and how" means

@JosephBARBIERDARNAL
Contributor Author

JosephBARBIERDARNAL commented May 7, 2025

A few things that I think need to be addressed:

  • One thing I find strange is that the CAP curve is the only one where all the letters of the acronym are in capitals. There's DetCurveDisplay, RocCurveDisplay and then CAPCurveDisplay, which seems inconsistent.
  • I'm not entirely sure I understand what is intended for chance_level naming in scikit-learn in general. I've seen a few threads mentioning avoiding using it or renaming it. If that's the case, wouldn't now be a good time to not use that name in this display?

@ogrisel
Member

ogrisel commented May 13, 2025

One thing I find strange is that the CAP curve is the only one where all the letters of the acronym are in capitals. There's DetCurveDisplay, RocCurveDisplay and then CAPCurveDisplay, which seems inconsistent.

I think we should rename to DETCurveDisplay and ROCCurveDisplay in a follow-up PR with backward compat aliases:

class ROCCurveDisplay(...):
    ...


# Backward compat alias to keep code implemented for scikit-learn 1.7 and earlier working.
RocCurveDisplay = ROCCurveDisplay

We could introduce a deprecation warning, but I am worried that this will break educational resources (e.g. code in blog posts, tutorials, books) for little benefit, so I would rather go with "soft" deprecation: rename and keep a long term backward compat alias without warnings.

But let's not do that as part of this PR and rather do the renaming/deprecation in a follow-up instead.

@ogrisel
Member

ogrisel commented May 13, 2025

I'm not entirely sure I understand what is intended for chance_level naming in scikit-learn in general. I've seen a few threads mentioning avoiding using it or renaming it. If that's the case, wouldn't now be a good time to not use that name in this display?

It's already part of the API, so we have to stick with it for the API. My earlier comment was about not putting it in section headers or documentation paragraphs without explanation. I prefer introducing the concept with more precise terms such as "the expected curve of a non-informative or constant classifier whose predictions do not depend on the input features".

Note that for ROC and DET, the expected curve of the non-informative classifier in the infinite sample limit (what we name "chance level" in the API) matches exactly the curve of the constant predictor obtained on a finite size test set. For CAP it's more subtle: the "chance level" line only approximately matches the curve of the constant predictor obtained on a finite size test set. The two would only match in the large sample size limit. I think this is an interesting point and I would like to highlight it in this section of the example.
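[Editor's note] This finite-sample subtlety can be sketched numerically (illustrative code only, not the PR's implementation): a constant predictor scores every sample identically, so its CAP ranking is an arbitrary permutation of the test set, and the resulting curve only approaches the "chance level" diagonal as the sample grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_deviation_from_diagonal(y):
    # All scores tie for a constant predictor, so the ranking is an
    # arbitrary permutation of the test set.
    y_perm = rng.permutation(y)
    y_cum = np.cumsum(y_perm) / y_perm.sum()
    x = np.arange(1, len(y) + 1) / len(y)
    return np.max(np.abs(y_cum - x))

y_small = (rng.random(50) < 0.3).astype(int)
y_large = (rng.random(50_000) < 0.3).astype(int)

# The empirical curve of a constant predictor deviates from the
# "chance level" diagonal, and the gap shrinks as the test set grows.
dev_small = max_deviation_from_diagonal(y_small)
dev_large = max_deviation_from_diagonal(y_large)
assert dev_large < dev_small
```

The deviation shrinks roughly like 1/sqrt(n), which is the point worth highlighting in the example.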

@JosephBARBIERDARNAL
Contributor Author

I'm a bit short of time at the moment, but to sum up the TODO:

  • Process this comment. I think this requires one or more decisions, as well as corrections (x/y labels, plot, ...). It's not obvious to me why it would be better for the Lorenz curve to have the same order as the Cap curve by default, but I feel like I'm missing something and I think I need some time to look into it more closely.
  • Make sure the examples are fully updated and clear
  • Add a test to check the sample weight/repetition equivalence property ENH add CAP curve #28972 (comment)

Feel free to add any other missing elements.

@jeremiedbb jeremiedbb modified the milestones: 1.7, 1.8 Jul 1, 2025
Member

@lucyleeow lucyleeow left a comment


This is a very thorough PR, thanks @JosephBARBIERDARNAL

A very high level review, mostly nitpicks.

a :class:`~sklearn.metrics.CAPCurveDisplay`. All parameters are
stored as attributes.

Read more in the :ref:`User Guide <visualizations>`.
Member


We've decided to add a reference to both the visualizations.rst page and the page in the user guide that talks about the curve, see #31238


name : str, default=None
Name of CAP curve for labeling. If `None`, name will be set to
`"Classifier"` or `"Regressor"`.
Member


I know we do this elsewhere, but I don't think these default names add much and I'd rather not have them. WDYT @glemaitre ?

Comment on lines +52 to +56
chance_level_ : matplotlib Artist
Curve of the independent classifier.

perfect_level_ : matplotlib Artist
Curve of the perfect classifier.
Member


Are these lines only relevant for classification, and not for regression..?

@lucyleeow
Member

Updated with main just so the CIs are run again

Member

@lucyleeow lucyleeow left a comment


Thanks for your patience @JosephBARBIERDARNAL ! Please let me know if you need any assistance.

This will get the CI green at least.

The other CI failure is because the parameters in the docstring do not match the order in the method signature. I think this is the case for all 3: plot, from_estimator and from_predictions.

Comment on lines +522 to +523
ls="--",
color="k",
Member


Suggested change
ls="--",
color="k",
curve_kwargs={"ls": "--", "color": "k"},

I think this is why doc build is failing

Comment on lines +682 to +683
ls="--",
color="k",
Member


Suggested change
ls="--",
color="k",
curve_kwargs={"ls": "--", "color": "k"},

@JosephBARBIERDARNAL
Contributor Author

@lucyleeow thanks! I haven't forgotten about that PR, but I've been a little busier than expected. I think there's still a lot of work to be done here, so I hope to get back to it soon!

@lesteve
Member

lesteve commented Nov 12, 2025

I happened to be showing someone how to resolve conflicts in git, and this PR was the first one I found with conflicts. Now that the work is done, I may as well push it to your branch.

I think there are more CI issues but I will leave this exercise for the author 😉.

@adrinjalali adrinjalali removed this from the 1.8 milestone Nov 12, 2025
@lucyleeow lucyleeow moved this from In Progress to PR stalled in Visualization and displays Feb 27, 2026

Projects

Status: PR stalled

Development

Successfully merging this pull request may close these issues.

Add sklearn.metrics.cumulative_gain_curve and sklearn.metrics.lift_curve

10 participants