Commit a9d6b23

Update on "[6/N][BE] Remove Sharded Linear Op for ShardedTensor"
[ghstack-poisoned]
1 parent: 6d3805f

1 file changed: 0 additions & 1 deletion


torch/distributed/_shard/sharded_tensor/_ops/__init__.py

@@ -9,7 +9,6 @@
 from .init import kaiming_uniform_, normal_, uniform_, constant_
 
 # Import all ChunkShardingSpec ops
-from torch.distributed._shard.sharding_spec.chunk_sharding_spec_ops.linear import sharded_linear
 from torch.distributed._shard.sharding_spec.chunk_sharding_spec_ops.embedding import sharded_embedding
 from torch.distributed._shard.sharding_spec.chunk_sharding_spec_ops.embedding_bag import sharded_embedding_bag
 from torch.distributed._shard.sharding_spec.chunk_sharding_spec_ops.softmax import sharded_softmax
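For context on why deleting a single import line is enough to remove the op: the modules imported in this __init__.py register their sharded implementations as an import side effect, so an op is wired into (or out of) sharded dispatch purely by whether its module gets imported. The sketch below illustrates that pattern; the registry name (_SHARDED_OPS) and decorator (register_sharded_op) are hypothetical stand-ins for the actual torch.distributed._shard internals, not the real API.

import torch
from typing import Callable, Dict

# Hypothetical registry mapping a torch op to its sharded implementation.
# The real torch.distributed._shard machinery differs; this only shows the
# import-side-effect registration pattern used by the modules imported above.
_SHARDED_OPS: Dict[Callable, Callable] = {}

def register_sharded_op(torch_op: Callable):
    # Decorator that records `impl` as the sharded handler for `torch_op`.
    def decorator(impl: Callable) -> Callable:
        _SHARDED_OPS[torch_op] = impl
        return impl
    return decorator

@register_sharded_op(torch.nn.functional.linear)
def sharded_linear(types, args=(), kwargs=None, process_group=None):
    # A real implementation would perform the sharded matmul here.
    raise NotImplementedError("illustrative stub")

# Registration runs when this module is imported, so dropping the
# `from ...linear import sharded_linear` line, as this commit does,
# removes linear from the sharded-dispatch table.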
