Write full tensors out at once in HF consolidation script (#159394)
ankitageorge wants to merge 6 commits into gh/ankitageorge/16/base
Conversation
Not all storage systems support writing at random offsets. This PR changes the consolidation script to assemble each tensor in a buffer and then write the buffer out, proceeding sequentially through every tensor in the output file. This also helps when the sharded files were not sharded only along the row-wise dimension: small writes are expensive, and previously we issued one write per chunk, where a chunk was the largest run of contiguous bytes in the final tensor, which can be very few bytes under column-wise sharding. Now the full tensor is buffered before the write, greatly reducing the number of small writes. Differential Revision: [D78684452](https://our.internmc.facebook.com/intern/diff/D78684452/)
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/159394
Note: Links to docs will display an error until the docs builds have been completed. ⏳ No failures, 1 pending as of commit 48753cb with merge base eed9dbf. This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D78684452
- Optimized chunks for other patterns

Args:
    full_tensor_mv: Buffer to write the full tensor to
Can the buffered approach increase the chances of OOMs compared to the earlier file stream write?
Yes, it probably can OOM if many parallel threads run at the same time. I don't think a single tensor would exceed roughly 10 GB (as a guess), so in that case more than 8 threads could cause OOMs. Managing this is left to users. There isn't really an alternative for the remote storage case, since many remote storage systems don't support random-offset writes.
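One way a user could manage the memory risk described above is to gate how many full-tensor buffers are alive at once, independently of the thread count. Everything here is a hypothetical sketch (the knob names and `consolidate_all` are not part of the PR):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def consolidate_all(tensors, write_one_tensor, max_buffers=4, max_workers=8):
    """Run per-tensor buffered writes in parallel while capping how many
    full-tensor buffers can be materialized at once.

    With ~10 GB worst-case tensors, max_buffers=4 bounds buffer memory at
    roughly 40 GB regardless of how many worker threads exist.
    """
    gate = threading.Semaphore(max_buffers)

    def guarded(tensor):
        with gate:  # wait for a free buffer slot before materializing
            write_one_tensor(tensor)

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # list(...) forces iteration so exceptions from workers propagate.
        list(pool.map(guarded, tensors))
```

The semaphore decouples the memory bound from the thread pool size, so extra threads can still overlap network I/O without multiplying peak RAM.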
@@ -329,13 +341,12 @@ def _write_data(

def _write_sub_tensor_to_file_optimized(
It may be good to add a test for this method.
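A test for that method could pin down the placement math directly. Since the real signature of `_write_sub_tensor_to_file_optimized` isn't shown in this excerpt, the helper below is a simplified row-major stand-in used only to illustrate the shape of such a test:

```python
import numpy as np

def write_sub_tensor_to_buffer(full_mv, full_shape, sub_tensor, sub_offsets, itemsize):
    """Copy a 2-D sub-tensor into its position within a row-major full-tensor
    buffer. Hypothetical stand-in for the PR's helper, for test illustration."""
    rows, cols = sub_tensor.shape
    full_cols = full_shape[1]
    row0, col0 = sub_offsets
    sub_bytes = sub_tensor.tobytes()          # row-major bytes of the shard
    row_nbytes = cols * itemsize
    for r in range(rows):
        # Destination of this shard row inside the flattened full tensor.
        dst = ((row0 + r) * full_cols + col0) * itemsize
        src = r * row_nbytes
        full_mv[dst:dst + row_nbytes] = sub_bytes[src:src + row_nbytes]
```

A column-wise shard exercises the interesting case: each copied run is only one shard-row wide, so an off-by-one in the offset arithmetic shows up immediately when the buffer is reinterpreted as the full tensor.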
@pytorchmergebot merge

Merge started: your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Pull Request resolved: #159394
Approved by: https://github.com/saumishr
ghstack dependencies: #159392, #159393
Stack from ghstack (oldest at bottom):
Differential Revision: D78684452
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta