🐛 Bug
Currently, init._calculate_fan_in_and_fan_out computes the receptive field by directly indexing the weight tensor instead of just using the shape:
```python
receptive_field_size = tensor[0][0].numel()
```
(`torch/nn/init.py`, line 277 at `1588df6`)
This means that, when extending PyTorch's `Tensor` class, e.g. for lazy access, explicitly indexing the tensor forces a call to `tensor()`, reconstructing the full tensor and explicitly accessing its elements.
Since the `init` sub-module doesn't check for `__torch_function__`, it is not possible to override the init functions. Simply using the shape avoids that.
cc @albanD @mruberry @jbschlosser