Conversation
…learn into ranking_metrics
Interesting PR!
You could also add (to better check the truncation):
assert_equal(mean_ndcg_score([2, 3, 5], [0, 1, 0], k=2), 1.0)
So the handling of ties is implementation specific (it depends on the initial ordering of the target scores and on the tie handling of the underlying sort). I would rather implement the pessimistic tie handling as described in #2580 (comment), at least as an option.
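The pessimistic convention can be sketched in a few lines. `pessimistic_order` below is a hypothetical helper (not scikit-learn code): within a group of tied predicted scores it ranks the least relevant items first, so the resulting DCG is a worst-case bound rather than an artifact of the sort's initial ordering:

```python
import numpy as np

def pessimistic_order(y_true, y_score):
    """Indices that sort items by decreasing predicted score; within a
    tie group, the least relevant items come first (worst-case DCG)."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    # lexsort uses the last key as the primary one: sort by score
    # descending, break ties by true relevance ascending.
    return np.lexsort((y_true, -y_score))

# Items 0 and 1 are tied at score 0.5; the pessimistic ordering puts
# the relevance-0 item ahead of the relevance-3 one.
pessimistic_order([3, 0, 1], [0.5, 0.5, 0.1])  # array([1, 0, 2])
```

Scoring this ordering with DCG would give the lowest value any tie-breaking could produce, which makes the metric deterministic regardless of input order.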
Also it would be great to add a …
@ogrisel Handling of ties and edge cases is on my todo list.
To follow the numpy convention we agreed on during the last sprint, write:
array, shape (n_samples,)
I have some scoring functions working (NDCG@k and DCG@k) with …
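For reference, DCG@k and NDCG@k can be sketched as below. This is a simplified stand-in (ties broken arbitrarily by argsort, single query), not the implementation from the PR:

```python
import numpy as np

def dcg_at_k(relevances, k):
    """DCG truncated at rank k, with the usual log2 position discount."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rel.size + 2))
    return float(np.sum(rel / discounts))

def ndcg_at_k(y_true, y_score, k):
    """Rank items by predicted score, truncate at k, and normalize
    by the ideal (best possible) DCG@k."""
    order = np.argsort(y_score)[::-1]  # highest predicted score first
    gains = np.asarray(y_true, dtype=float)[order]
    ideal = np.sort(np.asarray(y_true, dtype=float))[::-1]
    best = dcg_at_k(ideal, k)
    return dcg_at_k(gains, k) / best if best > 0 else 0.0

ndcg_at_k([3, 2, 1], [0.9, 0.5, 0.1], k=2)  # perfect ranking -> 1.0
```

Truncating at k is what the assert suggested earlier exercises: only the top-k ranked items contribute to the score.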
What happened to this? Also, @davidgasquez, I guess go ahead?
Hey @amueller! Not 100% sure if I should make a PR with the current implementation. As you can see, NDCG@K requires …
Using a LabelBinarizer is fine, but I'm not sure if you mean to be handling …
On 11 October 2016 at 22:24, David Gasquez notifications@github.com wrote:
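For context on the LabelBinarizer idea: it turns multiclass relevance labels into a one-hot indicator matrix, which NDCG-style metrics can then consume row-wise. A minimal sketch using scikit-learn's sklearn.preprocessing.LabelBinarizer:

```python
from sklearn.preprocessing import LabelBinarizer

# One column per class; each row is the indicator vector of one sample.
lb = LabelBinarizer()
Y = lb.fit_transform([1, 2, 3, 2])
# lb.classes_ -> array([1, 2, 3]); Y has shape (4, 3)
```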
Thanks! I'll read the contributing guidelines and try to send a PR in the next few days!
Started working on it with #7739. Any feedback is appreciated! 😄
Are we interested in implementing …

Indeed, I would be happy to implement these metrics.
Summary:

from scipy.stats import kendalltau, spearmanr
from sklearn.metrics import make_scorer

kendall_tau_score = make_scorer(kendalltau)
spearman_rho_score = make_scorer(spearmanr)

This leaves us with … I'm content with that and would therefore close. Any objections? Remark: we could add this in an example, for instance.
An early PR to make reading the diff easier. Not ready for detailed comments but high-level comments welcome :)