[test] Add error_inputs for nn.Embedding module #174180
subinz1 wants to merge 1 commit into pytorch:main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/174180
Note: Links to docs will display an error until the doc builds have completed.
⏳ No Failures, 1 Pending as of commit bb6aa8e with merge base 07c056e. This comment was automatically generated by Dr. CI and updates every 15 minutes.
This PR needs a
@pytorchbot label "topic: not user facing"
Review comment on the added code:

    samples = []

    # Out of range indices: index exceeds num_embeddings
    # Only test on CPU - CUDA triggers kernel assertion instead of Python exception
cc @eqy I assume there is nothing we can do to make that behavior better on device?
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 1 job has failed, first few of them are: Claude Code. Details for Dev Infra team: raised by workflow job.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 2 jobs have failed, first few of them are: Claude Code, trunk / linux-jammy-cuda13.0-py3.10-gcc11 / test (default, 4, 5, linux.g6.4xlarge.experimental.nvidia.gpu). Details for Dev Infra team: raised by workflow job.
@pytorchbot merge -r
@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.
Add `module_error_inputs_torch_nn_Embedding` function to test error messages for invalid inputs to the Embedding module:

1. Out of range indices: tests IndexError when an index exceeds num_embeddings (CPU only - CUDA triggers a kernel assertion instead of a Python exception)
2. Float indices: tests RuntimeError when a float tensor is used instead of Long/Int
3. Negative num_embeddings: tests RuntimeError when the constructor receives a negative dimension

This enables regression testing for error messages and follows the existing pattern used for other modules (BatchNorm, GroupNorm, RNN cells, etc.).

Fixes pytorch#174179
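The first two error cases above can be sketched in plain Python with no PyTorch dependency. This is a minimal stand-in for illustration only (`embedding_lookup` is a made-up helper, not the actual `nn.Embedding` implementation); it mirrors the error types the new inputs assert on:

```python
def embedding_lookup(weight, indices, num_embeddings):
    """Look up rows of `weight` by index, mimicking the checks that the
    new error inputs exercise on CPU.

    Illustrative sketch only: real nn.Embedding does this in C++ and
    validates tensor dtypes, not Python types.
    """
    out = []
    for idx in indices:
        if not isinstance(idx, int):
            # nn.Embedding raises RuntimeError for non-integer index dtypes
            raise RuntimeError(
                "Expected integer indices, got %s" % type(idx).__name__)
        if idx < 0 or idx >= num_embeddings:
            # same error type and spirit as "IndexError: index out of range in self"
            raise IndexError("index out of range in self")
        out.append(weight[idx])
    return out
```

For example, `embedding_lookup(weight, [0, 5, 15], 10)` raises IndexError because 15 exceeds the 10-row table, and `embedding_lookup(weight, [1.5, 2.5], 10)` raises RuntimeError for non-integer indices.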
Successfully rebased; force-pushed 31ef347 to bb6aa8e.
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: Could not find commit that was pushed before comment 4144406938. Details for Dev Infra team: raised by workflow job.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
## Summary

Add `module_error_inputs_torch_nn_Embedding` function to test error messages for invalid inputs to the `nn.Embedding` module.

## Motivation

Currently, `torch.nn.Embedding` does not have `module_error_inputs_func` defined in `common_modules.py`. This PR adds error input tests to enable regression testing for error messages and follows the pattern already established for other modules (BatchNorm, GroupNorm, RNN cells, etc.).

## Test Cases Added

1. **Out of range indices**: Tests IndexError when an index exceeds `num_embeddings`
   - Input: `Embedding(10, 3)` with indices `[0, 5, 15]` (15 is out of range)
   - Expected: `IndexError: index out of range in self`
   - **Note**: Only tested on CPU - CUDA triggers a kernel assertion instead of a Python exception
2. **Float indices**: Tests RuntimeError when a float tensor is used instead of Long/Int
   - Input: `Embedding(10, 3)` with float tensor `[1.5, 2.5]`
   - Expected: `RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int`
3. **Negative num_embeddings**: Tests RuntimeError when the constructor receives a negative dimension
   - Input: `Embedding(-1, 3)`
   - Expected: `RuntimeError: Trying to create tensor with negative dimension`

## Test Environment

- Tested on H200 GPU with CUDA 12.8
- Verified error messages match on CPU
- CUDA tests pass for dtype and construction errors

Fixes pytorch#174179

Pull Request resolved: pytorch#174180
Approved by: https://github.com/albanD
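Since the point of error inputs is regression testing of both the exception type and its message, the checking contract can be illustrated with a small standalone helper. This is a hedged sketch, not PyTorch's actual test machinery: `check_error` is a hypothetical name, playing the role that `unittest`'s `assertRaisesRegex` plays in the real suite:

```python
import re


def check_error(fn, error_type, error_regex):
    """Run fn and verify it raises error_type with a message matching
    error_regex: the type-plus-message contract that module error-input
    tests enforce.  Illustrative stand-in for assertRaisesRegex."""
    try:
        fn()
    except error_type as e:
        if not re.search(error_regex, str(e)):
            raise AssertionError(
                f"message {str(e)!r} does not match {error_regex!r}")
        return
    raise AssertionError(f"{error_type.__name__} was not raised")
```

Usage against the cases in this PR would look like `check_error(lambda: emb(bad_indices), IndexError, "index out of range in self")`; pinning the message (not just the type) is what turns these into regressions tests for the error text.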
# Motivation

#174180 introduces error inputs for the `nn.Embedding` module, which cause CI failures on XPU.

# Additional Context

fix #178854
fix #178762

Pull Request resolved: #178987
Approved by: https://github.com/mikaylagawarecki
cc @mruberry @albanD @jbschlosser