Uniform handling of metric types for all loggers #2021
vfdev-5 merged 9 commits into pytorch:master
Conversation
@Muktan thanks for the quick PR!
I think the point of issue #1965 is to change the previous behaviour and make it uniform for all loggers. Also, how about the tqdm logger? I think we have to rework _OutputHandler as well: see ignite/ignite/contrib/handlers/tqdm_logger.py, lines 275 to 286 in be2bff8.
@vfdev-5 thanks for the reply!
For example, for the WandBLogger it would be like:

```python
rendered_metrics = {}
if self.tag is not None:
    for name, value in metrics.items():
        if isinstance(value, numbers.Number) or isinstance(value, torch.Tensor) and value.ndimension() == 0:
            rendered_metrics[f"{self.tag}/{name}"] = value
        elif isinstance(value, torch.Tensor) and value.ndimension() == 1:
            for i, v in enumerate(value):
                rendered_metrics[f"{self.tag}/{name}/{i}"] = v
        else:
            warnings.warn(f"WandBLogger output_handler can not log metrics value type {type(value)}")
logger.log(rendered_metrics, step=global_step, sync=self.sync)
```

But a problem with that would occur when we run the tests for the file: we have a test case (…)
Yes, for the tqdm logger we have to rework _OutputHandler. I thought we just had to make sure instance checking is done, but it seems we need to follow the same code structure for handling metric types as in the Neptune logger, for all the other loggers, correct?
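The dispatch in the snippet above can be sketched as a self-contained function. Since this sketch avoids importing torch, a minimal hypothetical `FakeTensor` stand-in (with only `ndimension()` and iteration; not part of ignite or torch) plays the role of `torch.Tensor`:

```python
import numbers
import warnings

class FakeTensor:
    """Hypothetical stand-in for torch.Tensor, with just enough surface
    for this sketch: ndimension() and iteration."""
    def __init__(self, data):
        self.data = data  # a single number (0-d) or a list of numbers (1-d)
    def ndimension(self):
        return 0 if isinstance(self.data, numbers.Number) else 1
    def __iter__(self):
        return iter(self.data)

def render_metrics(tag, metrics):
    """Build the '{tag}/{name}' / '{tag}/{name}/{i}' dict as in the snippet."""
    rendered = {}
    for name, value in metrics.items():
        if isinstance(value, numbers.Number) or (
            isinstance(value, FakeTensor) and value.ndimension() == 0
        ):
            rendered[f"{tag}/{name}"] = value
        elif isinstance(value, FakeTensor) and value.ndimension() == 1:
            # flatten a 1-d value into one entry per index
            for i, v in enumerate(value):
                rendered[f"{tag}/{name}/{i}"] = v
        else:
            warnings.warn(f"can not log metrics value type {type(value)}")
    return rendered

out = render_metrics("training", {"loss": 0.5, "per_class": FakeTensor([0.1, 0.9])})
print(sorted(out))  # ['training/loss', 'training/per_class/0', 'training/per_class/1']
```

With the real handler, `torch.Tensor` takes the place of `FakeTensor` and the result is passed to `logger.log(...)`.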
That's a good point. I think logging a metric as a string is not a common usage. Yes, I think updating the test cases is a solution.
We would like to have a more or less similar implementation for all loggers. Yes, the Neptune logger is an example and should behave like TensorBoard, Visdom and some others...
Ok, so I will update the test case.
Okay, I will try to make a similar code structure for all the other loggers. Also, does the above wandb implementation look correct?
It looks correct. I was also exploring the mlflow logger implementation and its explicit call (…). Let me list all the implementations and let's decide:
Actually, I think polyaxon, mlflow and tqdm do the data type dispatch in the correct way. Let's implement this logic in all the OutputHandlers. What do you think?
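One way to apply the same dispatch logic everywhere would be a single shared helper. This is only a hypothetical sketch (`dispatch_metric` is not an actual ignite function), with tensor-like values detected by duck typing on `ndimension()` so it runs without torch:

```python
import numbers
import warnings

def dispatch_metric(name, value, logger_name="OutputHandler"):
    """Yield (key, scalar) pairs for supported metric values.

    Unsupported types trigger a warning and are skipped, matching the
    behaviour discussed in this thread.
    """
    if isinstance(value, numbers.Number):
        yield name, value
    elif hasattr(value, "ndimension") and value.ndimension() == 0:
        yield name, value.item()  # 0-d tensor: extract the scalar
    elif hasattr(value, "ndimension") and value.ndimension() == 1:
        for i, v in enumerate(value):  # 1-d tensor: one entry per index
            yield f"{name}/{i}", v
    else:
        warnings.warn(f"{logger_name} can not log metrics value type {type(value)}")

# A string metric is rejected with a warning instead of raising:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    pairs = list(dispatch_metric("status", "converged"))
print(pairs, len(caught))  # [] 1
```

Each OutputHandler could then build its `rendered_metrics` dict from these pairs and keep only its logger-specific `log(...)` call.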
Yes,
@Muktan could you please fix the mypy issues?
@vfdev-5 shall I add tests for the missing lines as indicated by codecov?
Thanks for the update @Muktan! Looks good, let's see what CI and codecov say.
Fixes #1965
Description:
Checked all the output handler classes, but only wandb_logger needed changes; all the other loggers already check the instance types.
Most loggers added an

```python
elif isinstance(value, torch.Tensor) and value.ndimension() == 1:
```

condition and, inside it, iterated over each item in value and logged it. But I have not added this for wandb_logger because it may change its initial logging behaviour.

Check list:
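For reference, the 1-D flattening that most loggers gained (and that wandb_logger deliberately did not) can be sketched with a plain sequence standing in for a tensor; `flatten_1d` is a hypothetical helper, not ignite's actual API:

```python
def flatten_1d(tag, name, values):
    """Expand a 1-d sequence of scalars into '{tag}/{name}/{i}' entries,
    mirroring the elif branch added to most OutputHandlers."""
    return {f"{tag}/{name}/{i}": v for i, v in enumerate(values)}

print(flatten_1d("training", "precision", [0.7, 0.8, 0.9]))
# {'training/precision/0': 0.7, 'training/precision/1': 0.8, 'training/precision/2': 0.9}
```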