Description
A good long-term solution is to add support for custom metrics, just like we have for custom learners. This means we won't need to keep adding new metrics, and users can use any metric they want as long as it takes two vectors as input: the labels and the predictions.
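For illustration, a custom metric file might look like the sketch below; the file and function names (`custom_metrics.py`, `within_half_point`) are hypothetical:

```python
# custom_metrics.py (hypothetical example of a user-supplied metric file)
import numpy as np


def within_half_point(y_true, y_pred):
    """Return the fraction of predictions within 0.5 of the true labels.

    Takes two array-likes (labels and predictions) and returns a number,
    which is all a custom metric needs to do.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_true - y_pred) <= 0.5))
```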
The machinery for implementing this would be as follows:
- We create a new (private) mutable set called `_CUSTOM_METRICS` in `skll/metrics.py`.
- We also add a new function in the same file called `register_custom_metric()` that takes in a path to a Python file containing a function that takes two array-likes as arguments and returns a number. Whenever this function is called, it imports the metric function into the namespace, adds a new entry to the `SCORERS` dictionary in `skll/__init__.py`, and adds the metric name to the `_CUSTOM_METRICS` set. (A sketch of this function follows the list.)
- We add a new configuration field called `custom_metric_path` that accepts a path to a Python file, just like `custom_learner_path` (see the example config snippet below).
- In `experiments.py`, we validate this path and call `register_custom_metric()` to "register" any custom metrics specified under `objectives` or `metrics` to `SCORERS`.
- We modify `get_acceptable_classification_metrics()` and `get_acceptable_regression_metrics()` to return not just the built-in acceptable metrics but also any custom metrics (sketched below). It's the user's responsibility to make sure that they are specifying the right custom metric for the right problem type.
- For the API, users can call `register_custom_metric()` directly for each metric and then use the metric names in `learner.train()` etc. directly (see the usage example at the end).