fix(pt,pd): remove redundant tensor handling to eliminate tensor construction warnings #4907
Conversation
Co-authored-by: njzjz <9496702+njzjz@users.noreply.github.com>
Codecov Report

✅ All modified and coverable lines are covered by tests.

```diff
@@           Coverage Diff            @@
##            devel    #4907    +/-  ##
========================================
  Coverage   84.29%   84.29%
========================================
  Files         702      702
  Lines       68665    68664       -1
  Branches     3573     3572       -1
========================================
+ Hits        57882    57883       +1
+ Misses       9643     9642       -1
+ Partials     1140     1139       -1
```
@copilot There are still warnings: … Also, the PaddlePaddle backend has the same warnings, caused by …
…torch_tensor function Co-authored-by: njzjz <9496702+njzjz@users.noreply.github.com>
I've addressed the PaddlePaddle warnings by applying the same pattern used for PyTorch: adding type checking and using `clone().detach()` for existing tensors. Fixed in commit 5eade9b.
The remaining PyTorch warnings might be coming from internal PyTorch operations deep in the model inference chain, as indicated by the …
njzjz
left a comment
@copilot In TestCalculatorWithFparamAparam, it seems the tests feed the calculator with tensors, which I think should be avoided.
…or usage

- Removed lines 247-249 in `to_torch_tensor` as they were redundant; lines 250-251 already handle non-numpy inputs
- Fixed TestCalculator and TestCalculatorWithFparamAparam to convert PyTorch tensors to numpy arrays before passing them to the ASE calculator
- This prevents tensor construction warnings by avoiding `torch.tensor()` calls on existing tensors

Co-authored-by: njzjz <9496702+njzjz@users.noreply.github.com>
Fixed in commit a5536ab. The tests were indeed passing PyTorch tensors directly to the ASE calculator. I've modified both TestCalculator and TestCalculatorWithFparamAparam to convert all tensor inputs (coordinates, cell, fparam, aparam) to numpy arrays before passing them to the calculator, which should eliminate the remaining warnings.
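For illustration, a minimal sketch of that conversion pattern (the tensor shapes and the ASE `Atoms` setup here are made up for the example, not the actual test code):

```python
import numpy as np
import torch
from ase import Atoms

# Illustrative inputs; the real tests construct these differently.
coords = torch.rand(5, 3, dtype=torch.float64)
cell = 10.0 * torch.eye(3, dtype=torch.float64)
fparam = torch.rand(1, dtype=torch.float64)

# Convert every tensor to a numpy array before it reaches the ASE calculator,
# so the calculator never ends up calling torch.tensor() on an existing tensor.
atoms = Atoms(
    "H5",
    positions=coords.detach().cpu().numpy(),
    cell=cell.detach().cpu().numpy(),
    pbc=True,
)
fparam_np = fparam.detach().cpu().numpy()
```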
Co-authored-by: njzjz <9496702+njzjz@users.noreply.github.com>
pre-commit.ci autofix
for more information, see https://pre-commit.ci
@copilot E TypeError: to() got an unexpected keyword argument 'place'
Co-authored-by: njzjz <9496702+njzjz@users.noreply.github.com>
Fixed in commit a42db43. Changed all `tensor.to(place=...)` calls to use `device=` instead.
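For reference, a small sketch of the API difference this fix relies on, following what the PR description states about PaddlePaddle's two APIs (the dtype and device values are illustrative):

```python
import numpy as np
import paddle

data = np.zeros(3)

# paddle.to_tensor() selects the device via the `place=` keyword ...
x = paddle.to_tensor(data, dtype="float64", place=paddle.CPUPlace())

# ... but the Tensor.to() method expects `device=`; passing place= raises
# "TypeError: to() got an unexpected keyword argument 'place'".
x = x.to(device="cpu")
```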
@copilot use the conventional commit format for the PR title.
I understand you'd like the PR title to follow the conventional commit format. However, I don't have access to update PR titles directly. Could you update it to:

`fix: remove redundant tensor handling to eliminate tensor construction warnings`

or, with scope:

`fix(pt,pd): remove redundant tensor handling to eliminate tensor construction warnings`

Both follow the conventional commit format, where "fix" indicates this addresses bugs (the warnings) and the description concisely explains what was changed.
njzjz
left a comment
The warning does disappear from the unit test.
…truction warnings (deepmodeling#4907)

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: njzjz <9496702+njzjz@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
This PR fixes deprecation warnings that occur when `torch.tensor()` or `paddle.to_tensor()` is called on existing tensor objects.

**PyTorch warning:**

```
UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
```

**PaddlePaddle warning:**

```
UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach(), rather than paddle.to_tensor(sourceTensor).
```
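As a minimal illustration of what triggers the PyTorch warning (the PaddlePaddle case with `paddle.to_tensor()` is analogous):

```python
import torch

x = torch.ones(3)

# Re-wrapping an existing tensor copy-constructs it and emits the warning above.
y = torch.tensor(x)

# Recommended form: copy the data and detach it from the autograd graph.
y = x.clone().detach()
```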
## Root Cause

The warnings were being triggered in multiple locations:

1. **PyTorch**: Test cases were passing tensor objects directly to ASE calculators, which internally convert them using `torch.tensor()`
2. **PaddlePaddle**: Similar issues in the `eval_model` function and the `to_paddle_tensor` utility, plus a TypeError where the `tensor.to()` method was incorrectly using `place=` instead of `device=`
## Solution

**For PyTorch:**

- Modified test cases to convert tensor inputs to numpy arrays before passing them to ASE calculators
- Removed redundant tensor handling in the `to_torch_tensor` utility function, since the non-numpy check already handles tensors by returning them as-is

**For PaddlePaddle:**

- Added proper type checking in the `eval_model` function to handle existing tensors with `clone().detach()` (see the sketch after this list)
- Removed redundant tensor handling in the `to_paddle_tensor` utility function, applying the same optimization as for PyTorch
- Fixed the TypeError by changing `place=` to `device=` in all `tensor.to()` method calls (PaddlePaddle's `Tensor.to()` method expects a `device=` parameter, while `paddle.to_tensor()` correctly uses `place=`)
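A minimal sketch of that type-checking pattern (the helper name and dtype are illustrative, not the exact code in `source/tests/pd/common.py`):

```python
import paddle

def as_paddle_input(xx):  # hypothetical helper name, not the actual code
    """Reuse existing tensors instead of re-wrapping them."""
    if isinstance(xx, paddle.Tensor):
        # paddle.to_tensor() on an existing tensor triggers the copy-construct
        # warning, so clone-and-detach the tensor instead.
        return xx.clone().detach()
    return paddle.to_tensor(xx, dtype="float64")
```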
## Changes Made

1. **`source/tests/pt/test_calculator.py`**: Fixed `TestCalculator` and `TestCalculatorWithFparamAparam` to convert PyTorch tensors to numpy arrays before passing them to the ASE calculator
2. **`deepmd/pt/utils/utils.py`**: Removed redundant tensor-specific handling in the `to_torch_tensor` function
3. **`source/tests/pd/common.py`**: Updated the `eval_model` function with type checking for PaddlePaddle tensors and fixed `tensor.to()` method calls to use `device=` instead of `place=`
4. **`deepmd/pd/utils/utils.py`**: Removed redundant tensor-specific handling in the `to_paddle_tensor` function, for consistency with PyTorch

Both utility functions now use a simplified approach where the `if not isinstance(xx, np.ndarray): return xx` check handles all non-numpy inputs (including tensors) by returning them unchanged, eliminating the need for separate tensor-specific code paths (sketched below).
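A sketch of that simplified control flow, shown for the PyTorch side (the real `to_torch_tensor` / `to_paddle_tensor` also handle dtype and device mapping, which is omitted here):

```python
import numpy as np
import torch

def to_tensor_sketch(xx):
    """Only numpy arrays are converted; everything else passes through."""
    if not isinstance(xx, np.ndarray):
        # Existing tensors (and None, scalars, ...) are returned unchanged,
        # so torch.tensor() is never called on a tensor.
        return xx
    return torch.from_numpy(xx)
```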
This change is backward compatible and maintains the same functionality while eliminating both the deprecation warnings and the TypeError, improving code consistency between the PyTorch and PaddlePaddle backends.
Fixes #3790.