make _setup_fill_arg serializable#6730
Conversation
@pmeier Looks good. Let's do the same for AA.
Not sure why, but it seems AA works as is:

```python
import pickle

import torch
from torchvision.prototype import transforms

image = torch.randint(0, 256, (3, 512, 512), dtype=torch.uint8)
for transform in [
    transforms.AugMix(),
    transforms.AutoAugment(),
    transforms.RandAugment(),
    transforms.TrivialAugmentWide(),
]:
    serialized = pickle.dumps(transform)
    deserialized = pickle.loads(serialized)
    deserialized(image)
```

I suggested we don't make it more complicated until we actually hit a problem. Using the same technique that I used here is not really possible for AA, since the return value depends on the input parameters.
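For context on why the lambda was a problem in the first place, here is a minimal, torchvision-independent sketch (the class and helper names are made up for illustration): pickle serializes functions by importable reference, so an object holding a lambda attribute cannot be pickled, while the same object holding a module-level function can.

```python
import pickle


def _fill_default(value):
    # module-level named function: picklable by reference
    return value


class TransformWithLambda:
    def __init__(self):
        # lambda attribute: has no importable name, breaks pickling
        self.fill = lambda v: v


class TransformWithFunction:
    def __init__(self):
        self.fill = _fill_default  # named function: pickles fine


try:
    pickle.dumps(TransformWithLambda())
    lambda_picklable = True
except Exception:  # PicklingError/AttributeError depending on Python version
    lambda_picklable = False

roundtripped = pickle.loads(pickle.dumps(TransformWithFunction()))
print(lambda_picklable)
print(roundtripped.fill(5))
```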
Isn't the return type of the lambda always `torch.Tensor`?
Yeah, but the return value is different and depends on the inputs. With something like

```python
def _auto_augment_shearx(num_bins: int, height: int, width: int) -> torch.Tensor:
    return torch.linspace(0.0, 0.3, num_bins)
```

if we knew `num_bins` statically, we could use

```python
functools.partial(torch.linspace, 0.0, 0.3, STATIC_NUM_BINS)
```

This is indeed fixed for some of the AA transforms like …, but not for others. Worse, although not used often, … know anything about ….
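To make the point above concrete, a small sketch of why the `functools.partial` form survives a pickle round trip while the lambda does not: pickle stores the partial as a reference to its target (`torch.linspace`) plus the bound arguments. `STATIC_NUM_BINS = 31` is a hypothetical placeholder, not a value from the codebase.

```python
import functools
import pickle

import torch

STATIC_NUM_BINS = 31  # hypothetical fixed bin count, for illustration only

# picklable: target function + bound arguments, no closure involved
magnitudes = functools.partial(torch.linspace, 0.0, 0.3, STATIC_NUM_BINS)
restored = pickle.loads(pickle.dumps(magnitudes))

bins = restored()
print(bins.shape)
```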
Reviewed By: NicolasHug

Differential Revision: D40427456

fbshipit-source-id: 5b0e98c73906a2ed2a66045e6608ce2aef09c003
Addresses #6728. The only other usages of `lambda` are happening in the AA transforms:

vision/torchvision/prototype/transforms/_auto_augment.py, line 150 in 6e203b4
This was a perf optimization we did in v2. In v1 we have:

vision/torchvision/transforms/autoaugment.py, line 226 in 6e203b4
Since we want to use AA transforms for videos as well (we do, don't we?), we probably either need to revert this or use a technique similar to the one I used here. I'll look into this next.
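For reference, the general shape of the technique used in this PR can be sketched as follows. This is illustrative only: the helper names are made up, and I'm assuming for the sketch that the lambda served as a zero-argument `default_factory` for a `defaultdict` (a common pattern for fill defaults); a `lambda: fill` factory breaks pickling, while a `functools.partial` over a module-level function restores it.

```python
import pickle
from collections import defaultdict
from functools import partial


def _constant(value):
    # module-level helper: partial(_constant, fill) is a picklable
    # zero-argument factory returning `fill`
    return value


def make_fill_mapping(fill):
    # equivalent in behavior to defaultdict(lambda: fill), but serializable
    return defaultdict(partial(_constant, fill))


mapping = make_fill_mapping(0)
restored = pickle.loads(pickle.dumps(mapping))
print(restored["anything"])
```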
As explained in #6728, we don't have unified testing for transforms yet. Thus, I cannot guarantee that these two instances cover every non-serializable thing in transforms v2. I will work on testing when video training is up and running.