
Add support for custom metrics #606

@desilinguist

Description

A good long-term solution is to add support for custom metrics, just like we have for custom learners. That way, we won't need to keep adding new metrics, and users can use any metric they want as long as it takes two vectors as input: the labels and the predictions.
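Under that contract, a custom metric is just an ordinary function. A minimal sketch of what one could look like (the name `f075` and the choice of weighting are hypothetical, purely for illustration):

```python
from sklearn.metrics import fbeta_score


def f075(y_true, y_pred):
    """Hypothetical custom metric: macro-averaged F-beta with beta = 0.75.

    Takes the gold labels and the predictions as two array-likes and
    returns a single number, which is all the contract requires.
    """
    return fbeta_score(y_true, y_pred, beta=0.75, average="macro")
```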

The machinery for implementing this would be as follows:

  1. We create a new (private) mutable set called _CUSTOM_METRICS in skll/metrics.py.

  2. We also add a new function in the same file called register_custom_metric() that takes a path to a Python file containing a function that takes two array-likes as arguments and returns a number. Whenever this function is called, it imports the metric function into the namespace, adds a new entry to the SCORERS dictionary in skll/__init__.py, and adds the metric name to the _CUSTOM_METRICS set (see the sketch after this list).

  3. We add a new configuration field called custom_metric_path that accepts a path to a Python file, just like custom_learner_path (an example configuration excerpt follows the list).

  4. In experiments.py, we validate this path and call register_custom_metric() to "register" any custom metrics specified under objectives or metrics in SCORERS.

  5. We modify get_acceptable_classification_metrics() and get_acceptable_regression_metrics() to return not just the built-in acceptable metrics but also any custom metrics. It's the user's responsibility to make sure that they specify the right custom metric for the right problem type.

  6. For the API, users can call register_custom_metric() once for each metric and then use the metric names directly in learner.train() etc. (see the usage example after this list).
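A minimal sketch of steps 1 and 2, assuming importlib-based dynamic loading and sklearn's make_scorer for wrapping; everything other than register_custom_metric(), _CUSTOM_METRICS, and SCORERS is illustrative, and the real implementation would need more validation and error handling:

```python
# skll/metrics.py (sketch)
import importlib.util
from pathlib import Path

from sklearn.metrics import make_scorer

# Step 1: private registry of custom metric names.
_CUSTOM_METRICS = set()


def register_custom_metric(custom_metric_path, custom_metric_name):
    """Step 2: import the named metric function and register it as a scorer.

    ``custom_metric_path`` points to a .py file defining a function named
    ``custom_metric_name`` that takes (y_true, y_pred) and returns a number.
    """
    metric_path = Path(custom_metric_path)
    if metric_path.suffix != ".py":
        raise ValueError(f"custom metric file must be a .py file: {metric_path}")

    # Dynamically import the user's module and look up the metric function.
    spec = importlib.util.spec_from_file_location(metric_path.stem, metric_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    metric_func = getattr(module, custom_metric_name)

    # Add an entry to the SCORERS dictionary (which lives in skll/__init__.py)
    # and record the name as custom. The lazy import here is an assumption,
    # to sidestep the circular import between skll/__init__.py and this file.
    from skll import SCORERS
    SCORERS[custom_metric_name] = make_scorer(metric_func)
    _CUSTOM_METRICS.add(custom_metric_name)
```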
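On the configuration side (steps 3 and 4), the new field would sit alongside custom_learner_path in the Input section. A sketch of an excerpt, with hypothetical file names and the hypothetical metric name from above:

```ini
[Input]
custom_learner_path = my_learners.py
custom_metric_path = my_metrics.py

[Tuning]
objectives = ["f075"]
```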
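And for step 6, direct API usage could then look something like this (my_metrics.py, f075, and train_fs are hypothetical, and the Learner import path may differ across SKLL versions):

```python
from skll import Learner
from skll.metrics import register_custom_metric

# Register the hypothetical metric defined in my_metrics.py; after this,
# "f075" can be used anywhere a built-in metric name is accepted.
register_custom_metric("my_metrics.py", "f075")

learner = Learner("LogisticRegression")
learner.train(train_fs, grid_objective="f075")  # train_fs: a skll FeatureSet
```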
