Merged
…mic dimensions. (#5239)
* Skip calling as_strided in empty_strided_symint.
* Only return empty_symint conditionally.
* Add a comment.
* Add XRT nightly builds
* Remove space
* Add ToString method for both PjrtData and PjrtShardedData
* On CPU the same config will become replicated, so don't check the actual op sharding type
* Disable bazel remote cache if the gcloud key is empty
* Remove remote cache from setup.py
* Experiment with debug msg
* Fix flag
* Add more logs
* Skip remote cache if the credential file is empty
* Add comment
* Add logs
* Add check in test and coverage script
* Fix condition in coverage test
* Advance branch PR
* Allow remote cache if the gcloud file isn't specified explicitly
* Remove dummy comment
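The gating described in these commits — use the Bazel remote cache only when a non-empty gcloud credential file is present — can be sketched as a small helper. This is a hypothetical illustration, not the actual `setup.py` logic; the function name `remote_cache_flags` and the exact flag strings are assumptions.

```python
import os
import tempfile

def remote_cache_flags(credential_file: str) -> list[str]:
    """Return Bazel remote-cache flags only when the gcloud credential
    file exists and is non-empty; otherwise build without the cache.

    Hypothetical sketch of the behavior in the commits above.
    """
    if (credential_file
            and os.path.isfile(credential_file)
            and os.path.getsize(credential_file) > 0):
        return ["--config=remote_cache",
                f"--google_credentials={credential_file}"]
    return []  # empty or missing credentials: local build only

# Demo: an empty credential file disables the cache, a non-empty one enables it.
with tempfile.NamedTemporaryFile(suffix=".json") as f:
    print(remote_cache_flags(f.name))  # [] (file is empty)
    f.write(b'{"type": "service_account"}')
    f.flush()
    print(remote_cache_flags(f.name))  # cache flags returned
```

The point of the check is that passing an empty credentials file to Bazel fails the build outright, whereas silently falling back to a local build keeps CI green.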
* Clean bazel stuff on distutils clean
* Fix python formatting
However, the generated StableHLO graph still hardcodes the non-tensor value. This is not correct; will fix later.
…etnly broken by pytorch (#5282)
Bazel should figure out whether _XLAC.so is current, and trigger a rebuild if any cpp files changed.
* Remove or improve several hardcoded TPU test conditions
* Fix test condition
* Err if calling sizes() on a dynamic tensor
* Try to set has_symbolic_sizes_strides_
* Resolve merge conflict
* Enable CONTINUE_ON_ERROR
* Fix the python test test_SizeEq_should_not_compile_for_identical_symints
* Fix test_index_types
* Set CONTINUE_ON_ERROR to true
* Remove some unwanted code
* Add a print
* Directly set has_symbolic_sizes_strides_ = true
* Make some fixes
* Fix empty_strided_symint
* Run linter
* Change error type in the test
* Fix comments
* Run linter
…5281)
* Fix the error where mark_step does not materialize tensors on SPMD:0
* Typo
* Fix test_non_tensor_scalar
* Set torch._dynamo.config.automatic_dynamic_shapes to False
* Enable DynamoInferenceBasicTest.test_simple_model_with_different_input_shape
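The flag this commit flips is a real dynamo config knob; a minimal sketch of where it would be set, assuming a PyTorch 2.x environment (the surrounding comments describe the intent, not verified internals):

```python
import torch._dynamo

# With automatic_dynamic_shapes left on, dynamo marks an input dynamic
# after it sees the shape change, which can conflict with backends that
# specialize on concrete shapes. Turning it off makes each new input
# shape trigger a fresh trace/compile instead.
torch._dynamo.config.automatic_dynamic_shapes = False
```

This is a config fragment meant to run before any `torch.compile` call in the test, per the enabled `test_simple_model_with_different_input_shape` test above.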
Summary: This pull request does the following:
1. It hides the token for all_gather.
2. It folds the out-of-place all_gather into the regular all_gather.
3. It fixes an issue with the last all_reduce_in_place PR, where it forgot to set the token.
Test Plan: PJRT_DEVICE=TPU python test/test_mp_all_gather.py
Are you sure we should cherry-pick
lol good catch.. let me revert that
This reverts commit 3967d7b.
LGTM changes in /infra/...
The error is, I think, that they are being deleted in one of the commits; I can add them back.
This reverts commit e91ad3a.
@will-cromar test is green, can you take another look at this PR? After it is approved, I think we should turn on the
will-cromar approved these changes on Jul 11, 2023, commenting:
LGTM. It's up to you how you want to merge it. I'm fine with squashing.
I enabled the
Rebase and merge is still grey, I am just gonna squash.
The two commits I skipped are PJRT/OpenXLA related.
The last PR I cherry-picked is "avoid copy proto in PrepareOutputShardingPropagation" (at 07/07).