[fix bug] load atomic_*.npy for tf tensor model #4538
Conversation
📝 Walkthrough
The pull request involves modifications to two files in the deepmd library, focusing on terminology changes related to atomic properties and placeholders.
Sequence Diagram
sequenceDiagram
participant Loss as TensorLoss
participant Trainer as DPTrainer
Loss ->> Loss: Update label naming
Note over Loss: Change "atom_" to "atomic_"
Trainer ->> Trainer: Transform placeholders
Note over Trainer: Convert "atomic_" back to "atom_"
The sequence diagram illustrates the key transformations in the label and placeholder naming conventions, showing how the changes are applied in the loss and training components.
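For context on why the prefix matters (the PR title is about loading atomic_*.npy for the TF tensor model), below is a minimal sketch of how a per-atom label key with the "atomic_" prefix lines up with a label file on disk. The directory layout, array shape, and variable names are illustrative assumptions, not taken from the deepmd code or docs.

```python
import numpy as np

# Hypothetical training-set layout for a dipole tensor model (paths assumed):
#   my_system/set.000/coord.npy
#   my_system/set.000/atomic_dipole.npy   <- per-atom label, "atomic_" prefix
label_name = "dipole"
label_key = "atomic_" + label_name      # "atomic_dipole"
expected_file = label_key + ".npy"      # "atomic_dipole.npy"

# Dummy per-atom label data just to show the bookkeeping; the
# (nframes, natoms * 3) layout is an assumption for illustration.
nframes, natoms = 2, 4
atomic_dipole = np.zeros((nframes, natoms * 3))

# If the loss asked for "atom_" + label_name instead, the key would not match
# the atomic_*.npy naming convention and the per-atom label would be skipped.
print(expected_file, atomic_dipole.shape)
```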
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- deepmd/tf/loss/tensor.py (2 hunks)
- deepmd/tf/train/trainer.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: Build C library (2.14, >=2.5.0rc0,<2.15, libdeepmd_c_cu11.tar.gz)
- GitHub Check: Test C++ (false)
- GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
- GitHub Check: Test C++ (true)
🔇 Additional comments (2)
deepmd/tf/loss/tensor.py (1)
158-158: Confirm consistency with renamed placeholders.
You've updated `"atom_" + self.label_name` to `"atomic_" + self.label_name`. However, in trainer.py, placeholders are renamed from `"atomic_"` back to `"atom_"`. Ensure that the rest of the code consistently references the updated label to avoid key mismatches.
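To illustrate the concern, here is a small, self-contained sketch of how mixing the two prefixes can produce a silent miss or a KeyError; the dictionary and variable names are hypothetical, not the actual deepmd code.

```python
# Hypothetical label dictionary populated by the loss using the new prefix.
label_name = "dipole"
label_dict = {"atomic_" + label_name: [[0.0, 0.0, 0.0]]}

# A lookup that still uses the old prefix misses the entry.
old_key = "atom_" + label_name       # "atom_dipole"
new_key = "atomic_" + label_name     # "atomic_dipole"

print(label_dict.get(old_key))       # None -> silent mismatch
print(label_dict[new_key])           # [[0.0, 0.0, 0.0]]
```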
deepmd/tf/train/trainer.py (1)
284-290: Verify potential placeholder mismatch.
This loop renames all placeholders from `"atomic_"` to `"atom_"`, which seemingly contradicts the new naming convention in tensor.py. Please verify that any references to `"atomic_*"` placeholders still function correctly after this transformation, and confirm that this rename is truly intended.
Codecov Report
All modified and coverable lines are covered by tests ✅
Additional details and impacted files
@@ Coverage Diff @@
## devel #4538 +/- ##
==========================================
- Coverage 84.57% 84.57% -0.01%
==========================================
Files 675 675
Lines 63695 63698 +3
Branches 3488 3486 -2
==========================================
+ Hits 53872 53874 +2
Misses 8698 8698
- Partials 1125 1126 +1
☔ View full report in Codecov by Sentry.
Fix bug mentioned in deepmodeling#4536

**Bug Fixes**
- Updated atomic property and weight label naming conventions across the machine learning training and loss components to ensure consistent terminology.
- Corrected placeholder key references in the training process to match updated label names.

(cherry picked from commit 380efb9)
Fix bug mentioned in #4536
Summary by CodeRabbit