>>> import torch.nn as nn
>>> l = nn.Linear(1,2)
>>> l.generator
>>> l.generator = None
>>> l.generator
>>> l.generator = nn.Linear(3,4)  # <-- handled at line 254 of Module.py
>>> l.generator
>>> vars(l)
{'_backend': <torch.nn.backends.thnn.THNNFunctionBackend object at 0x7fe07e7edcc0>, '_parameters': OrderedDict([('weight', Parameter containing:
0.8059
-0.6782
[torch.FloatTensor of size 2x1]
), ('bias', Parameter containing:
0.6400
-0.7639
[torch.FloatTensor of size 2]
)]), '_buffers': OrderedDict(), '_backward_hooks': OrderedDict(), '_forward_hooks': OrderedDict(), '_modules': OrderedDict([('generator', Linear (3 -> 4))]), 'training': True, 'in_features': 1, 'out_features': 2, 'generator': None}
Right now, this fails silently: as the `vars(l)` dump shows, the `Linear (3 -> 4)` lands in `_modules`, but the stale `'generator': None` entry in the instance `__dict__` shadows it, so `l.generator` keeps returning `None`. The `__setattr__` should at least raise a `KeyError` the way `add_module` does, though I would prefer being able to overwrite `None` with modules later on.
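The shadowing can be reproduced without torch. Below is a minimal, hypothetical sketch of the relevant `Module` attribute machinery (`MiniModule` is an illustration, not the actual PyTorch source): `__setattr__` registers submodules in `_modules` without clearing a stale `__dict__` entry, and since Python only calls `__getattr__` when normal lookup fails, the leftover `None` wins.

```python
from collections import OrderedDict

class MiniModule:
    """Sketch of the (old) nn.Module attribute machinery that
    reproduces the silent failure; not the actual PyTorch source."""
    def __init__(self):
        object.__setattr__(self, '_modules', OrderedDict())

    def __setattr__(self, name, value):
        if isinstance(value, MiniModule):
            # Registers the submodule, but never removes a stale
            # entry for `name` from the instance __dict__.
            self._modules[name] = value
        else:
            object.__setattr__(self, name, value)

    def __getattr__(self, name):
        # Only invoked when normal lookup (__dict__ first) fails.
        if name in self._modules:
            return self._modules[name]
        raise AttributeError(name)

m = MiniModule()
m.generator = None          # lands in m.__dict__
m.generator = MiniModule()  # lands in m._modules; __dict__ still holds None
print(m.generator)          # None -- the stale __dict__ entry shadows _modules

# Workaround: drop the stale entry so lookup falls through to __getattr__.
del m.__dict__['generator']
print(m.generator)          # now the registered MiniModule
```

Clearing the conflicting `__dict__` key inside `__setattr__` before registering the module would make the overwrite behave as expected.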