Fix simstate concatenation [2/2]#232

Merged
curtischong merged 7 commits into main from fix-simstate-concatenation
Aug 8, 2025
Conversation

@curtischong
Collaborator

@curtischong curtischong commented Aug 2, 2025

see #219

Summary

This is actually a fairly serious issue, so I'll outline it here in a clear manner.

MD SimStates often track velocity. But on the first iteration, the states do NOT have velocity - so they are currently initialized as None.

But once the optimizer gets going, these states end up having a velocity attribute.

The problem is how we concatenate SimStates. Inside the autobatcher, when some SimStates finish before others, we swap those finished states with fresh states. This means inside the entire SimState, we have some systems with velocity set to none (since they were just swapped in and are fresh) and other systems with a set velocity.

When we concatenate these "mixed" SimStates (during the optimization process), we call torch.concatenate([torch.Tensor, None, None]), where the first system's velocity exists (it's a torch.Tensor) but the last 2 systems do NOT have a velocity - since they were just swapped in by the autobatcher.

PyTorch cannot concatenate this because we're passing in None as an input, which is invalid.
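A minimal reproduction of the failure described above:

```python
import torch

# After the autobatcher swaps in two fresh systems, only the first
# system's velocity is a real tensor; the fresh ones are still None.
velocities = [torch.randn(4, 3), None, None]

try:
    torch.cat(velocities)  # mixed Tensor/None input is invalid
except TypeError as exc:
    print(f"concatenation failed: {exc}")
```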

@t-reents's solution works pretty well and is valid (which is why I'm touching it up in this PR). His solution is: "rather than initializing vector attributes as None, we initialize them as NaN so we can do torch.concatenate between states that are old and states that have just been swapped in."

What's in this PR?

This PR is an addition to @t-reents's contribution. I added validation logic to ensure that no subclass of SimState can declare a torch.Tensor attribute as `torch.Tensor | None`, because that breaks concatenation between SimStates.

I think @t-reents's solution works fine for the FIRE optimizers since, by setting velocities to 0, they do not contribute to the power calculation of the next iteration.
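A minimal sketch of this NaN-initialization approach (init_velocities is a hypothetical helper for illustration, not the actual torch-sim API):

```python
import torch

def init_velocities(n_atoms: int, dtype: torch.dtype = torch.float32) -> torch.Tensor:
    # Hypothetical helper mirroring the fix: freshly swapped-in states get
    # NaN-filled velocities instead of None, so every state carries a tensor.
    return torch.full((n_atoms, 3), float("nan"), dtype=dtype)

old = torch.zeros(4, 3)      # a state that already has velocities
fresh = init_velocities(2)   # a state just swapped in by the autobatcher
merged = torch.cat([old, fresh])  # works: both operands are real tensors
print(merged.shape)  # torch.Size([6, 3])
```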

Checklist

Before a pull request can be merged, the following items must be checked:

  • Doc strings have been added in the Google docstring format.
  • Run ruff on your code.
  • Tests have been added for any new functionality or bug fixes.

We highly recommend installing the pre-commit hooks that run in CI locally to speed up the development process. Simply run pip install pre-commit && pre-commit install to install the hooks, which will check your code before each commit.

Summary by CodeRabbit

  • Bug Fixes

    • Resolved issues where certain attributes could be set to None, causing errors during tensor operations. All relevant tensor attributes are now required to always be present.
  • Tests

    • Enhanced test coverage to verify that subclasses cannot declare tensor attributes as optional.
    • Improved parameterization in tests to check behavior under different batch step scenarios.

@cla-bot cla-bot bot added the cla-signed label Aug 2, 2025
@coderabbitai

coderabbitai bot commented Aug 2, 2025

Walkthrough

A new __init_subclass__ method was added to SimState to enforce that subclasses cannot have tensor attributes typed as torch.Tensor | None. Several dataclasses in optimizers.py and a dataclass in runners.py were updated to remove optional tensor types. Tests were added and updated to verify and parameterize these behaviors.
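The enforcement described above can be sketched roughly like this (a simplified stand-in for the actual torch_sim.state.SimState, assuming Python 3.10+ union syntax):

```python
import typing
from dataclasses import dataclass

import torch


@dataclass
class SimState:
    positions: torch.Tensor

    def __init_subclass__(cls, **kwargs) -> None:
        super().__init_subclass__(**kwargs)
        for name, hint in typing.get_type_hints(cls).items():
            args = typing.get_args(hint)  # () unless `hint` is a union
            # Catches both typing.Optional[torch.Tensor] and `torch.Tensor | None`
            if torch.Tensor in args and type(None) in args:
                raise TypeError(
                    f"{cls.__name__}.{name} may not be typed "
                    "'torch.Tensor | None': None entries break "
                    "concatenation between SimStates"
                )


try:
    @dataclass
    class BadState(SimState):
        velocities: torch.Tensor | None = None
except TypeError as exc:
    print(exc)
```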

Changes

Cohort / File(s) Change Summary
SimState Subclass Enforcement
torch_sim/state.py
Added __init_subclass__ to SimState to prohibit tensor attributes being optionally None in subclasses; raises TypeError if violated.
Optimizer State Annotations
torch_sim/optimizers.py
Changed type annotations for velocities and cell_velocities in FireState, UnitCellFireState, and FrechetCellFIREState from `torch.Tensor | None` to `torch.Tensor`.
Runner State Annotations
torch_sim/runners.py
Changed forces and stress in StaticState dataclass from `torch.Tensor | None` to `torch.Tensor`.
Autobatching Test Parametrization
tests/test_autobatching.py
Parameterized test_in_flight_with_fire to test with different batch step counts; replaced fixed loop count with parameter.
SimState Subclass Enforcement Test
tests/test_state.py
Added test to ensure subclassing SimState with a `torch.Tensor | None` attribute raises a TypeError.

Sequence Diagram(s)

sequenceDiagram
    participant Dev as Developer
    participant SimState as SimState (base class)
    participant Subclass as SimState Subclass

    Dev->>Subclass: Define subclass with tensor attribute
    Subclass->>SimState: Triggers __init_subclass__
    SimState->>SimState: Inspect type annotations
    alt Attribute is torch.Tensor | None
        SimState-->>Dev: Raise TypeError
    else Attribute is valid
        SimState->>Subclass: Complete initialization
    end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~15 minutes

Possibly related PRs

  • Radical-AI/torch-sim#231: Also modifies SimState to make an attribute non-optional, directly related to type enforcement in subclasses.
  • Radical-AI/torch-sim#219: Fixes runtime issues with optional tensor attributes, related in intent to the type enforcement introduced here.

Poem

A rabbit hops through fields of type,
Ensuring tensors never hide in None’s disguise.
With subclass rules and tests so bright,
Each state is strong—no need for compromise!
🐇✨

No more None in tensor’s den,
The code is clean—let’s hop again!

📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 84d6750 and 3d23f7d.

📒 Files selected for processing (5)
  • tests/test_autobatching.py (2 hunks)
  • tests/test_state.py (1 hunks)
  • torch_sim/optimizers.py (2 hunks)
  • torch_sim/runners.py (1 hunks)
  • torch_sim/state.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (5)
  • tests/test_state.py
  • torch_sim/runners.py
  • torch_sim/optimizers.py
  • tests/test_autobatching.py
  • torch_sim/state.py

@curtischong curtischong changed the base branch from main to make-system-idx-non-optional August 2, 2025 23:21
@curtischong curtischong force-pushed the fix-simstate-concatenation branch from 7d1f3f2 to 570a2b2 Compare August 2, 2025 23:22

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
torch_sim/state.py (2)

140-145: Address the TODO about system index validation reliability.

The comment indicates uncertainty about the reliability of the consecutive integer validation logic. Consider implementing a more robust check or documenting the specific edge cases this might miss.

Would you like me to suggest a more reliable validation approach for ensuring system indices are unique consecutive integers starting from 0?


425-429: Complete the TODO about InitVar guidance.

The comment suggests providing guidance about using InitVar for attributes with default values, but the implementation appears incomplete. Consider either implementing this feature or removing the TODO if it's no longer relevant.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 16bf8f8 and 7d1f3f2.

📒 Files selected for processing (2)
  • torch_sim/integrators/nvt.py (1 hunks)
  • torch_sim/state.py (6 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
torch_sim/integrators/nvt.py (1)
torch_sim/integrators/npt.py (3)
  • npt_nose_hoover_init (1348-1494)
  • npt_nose_hoover (899-1560)
  • NPTNoseHooverState (807-896)
🪛 Ruff (0.12.2)
torch_sim/state.py

405-405: import should be at the top-level of a file

(PLC0415)

🔇 Additional comments (4)
torch_sim/state.py (3)

25-25: LGTM! Good refactoring approach.

Moving to init=False with a custom constructor provides better control over initialization logic and validation.


83-83: Good design for handling optional system indices.

The pattern of declaring system_idx as a required tensor field while accepting None in the constructor and converting it to a zero tensor is excellent. This ensures the field is always initialized and avoids concatenation issues with None values.

Also applies to: 92-92, 134-139


405-405: Local import is appropriate here.

The local import of typing inside __init_subclass__ is intentional and appropriate for this use case, likely to avoid circular imports or reduce module load time. The static analysis warning can be safely ignored.

torch_sim/integrators/nvt.py (1)

392-392: Essential fix for system index propagation.

Adding system_idx=state.system_idx ensures proper propagation of system indexing information to the Nose-Hoover state, which is critical for batched simulations. This change correctly aligns with the NPT implementation and the updated SimState initialization logic.

@curtischong curtischong marked this pull request as draft August 2, 2025 23:23
@curtischong curtischong changed the title from Fix simstate concatenation to Fix simstate concatenation [2/3] Aug 2, 2025
@curtischong curtischong changed the title from Fix simstate concatenation [2/3] to Fix simstate concatenation [1/2] Aug 2, 2025
@curtischong curtischong changed the title from Fix simstate concatenation [1/2] to Fix simstate concatenation [2/2] Aug 2, 2025
@curtischong curtischong force-pushed the fix-simstate-concatenation branch from 2701b5c to 4c49f21 Compare August 3, 2025 21:17
Base automatically changed from make-system-idx-non-optional to main August 7, 2025 01:43
@curtischong curtischong marked this pull request as ready for review August 8, 2025 01:42

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🔭 Outside diff range comments (2)
torch_sim/runners.py (1)

534-565: Critical inconsistency: dataclass requires tensors but None is still passed.

The StaticState dataclass now requires forces and stress to be torch.Tensor, but the instantiation code at lines 563-564 still conditionally passes None when the model doesn't compute these properties. This will cause runtime errors.

Apply this diff to fix the issue by providing appropriate tensor defaults:

        sub_state = StaticState(
            **vars(sub_state),
            energy=model_outputs["energy"],
-            forces=model_outputs["forces"] if model.compute_forces else None,
-            stress=model_outputs["stress"] if model.compute_stress else None,
+            forces=model_outputs.get("forces", torch.full_like(sub_state.positions, float('nan'))),
+            stress=model_outputs.get("stress", torch.full((sub_state.n_systems, 3, 3), float('nan'), device=sub_state.device, dtype=sub_state.dtype)),
        )

Alternatively, the dataclass could be reverted to allow optional tensors if this behavior is intentional, but that would conflict with the PR's objectives.

tests/test_autobatching.py (1)

495-497: Fix undefined variable ‘state’ before first next_batch call.

state is referenced before assignment on the initial call to next_batch. Initialize it to None (to fetch the first batch from the internal queue) before the loop.

-    all_completed_states, convergence_tensor = [], None
-    while True:
-        state, completed_states = batcher.next_batch(state, convergence_tensor)
+    all_completed_states, convergence_tensor = [], None
+    state = None  # initialize: fetch first batch from internal queue
+    while True:
+        state, completed_states = batcher.next_batch(state, convergence_tensor)
♻️ Duplicate comments (1)
tests/test_autobatching.py (1)

451-459: Nice parametrize; this addresses the prior DRY feedback.

This change eliminates duplicated tests while preserving both scenarios.

🧹 Nitpick comments (1)
tests/test_autobatching.py (1)

451-459: Optional: add ids to parameterization for clearer test output.

This improves readability in pytest reports.

-@pytest.mark.parametrize(
-    "num_steps_per_batch",
-    [
-        5,  # At 5 steps, not every state will converge before the next batch.
-        #       This tests the merging of partially converged states with new states
-        #       which has been a bug in the past. See https://github.com/Radical-AI/torch-sim/pull/219
-        10,  # At 10 steps, all states will converge before the next batch
-    ],
-)
+@pytest.mark.parametrize(
+    "num_steps_per_batch",
+    [5, 10],
+    ids=["partial_convergence", "full_convergence"],
+)
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7d1f3f2 and 4c49f21.

📒 Files selected for processing (6)
  • tests/test_autobatching.py (2 hunks)
  • tests/test_state.py (1 hunks)
  • torch_sim/models/interface.py (1 hunks)
  • torch_sim/optimizers.py (7 hunks)
  • torch_sim/runners.py (1 hunks)
  • torch_sim/state.py (6 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (3)
torch_sim/models/interface.py (1)
torch_sim/models/mattersim.py (1)
  • MatterSimModel (24-33)
tests/test_autobatching.py (1)
tests/test_optimizers.py (3)
  • test_fire_fixed_cell_unit_cell_consistency (785-879)
  • test_unit_cell_fire_multi_batch (709-782)
  • test_fire_optimization (113-177)
torch_sim/state.py (2)
torch_sim/integrators/md.py (1)
  • MDState (13-48)
torch_sim/monte_carlo.py (1)
  • SwapMCState (22-36)
🔇 Additional comments (8)
torch_sim/state.py (5)

9-9: LGTM!

The typing import is correctly added to support the new __init_subclass__ method's type inspection functionality.


26-26: LGTM!

Setting init=False is correct when providing a custom __init__ method, preventing conflicts with the auto-generated constructor.


84-84: LGTM!

Changing system_idx to strictly torch.Tensor aligns with the PR objective of preventing concatenation issues with mixed tensor/None attributes. The optional behavior is preserved through the constructor parameter.


86-158: LGTM!

The custom __init__ method correctly consolidates initialization and validation logic. Key improvements:

  • Proper handling of optional system_idx parameter with default to zeros tensor
  • Preserved device compatibility validation
  • Maintained shape compatibility checks
  • Correct cell dimension handling

The implementation maintains existing functionality while supporting the new tensor attribute restrictions.


401-426: LGTM!

Excellent implementation of the subclass validation mechanism. The method correctly:

  • Uses typing.get_type_hints() for proper type inspection
  • Handles both typing.Union and Python 3.10+ | union syntax
  • Provides clear error messaging explaining the concatenation issue
  • Follows proper __init_subclass__ patterns with super() call

This effectively prevents the tensor concatenation issues described in the PR objectives.

tests/test_state.py (1)

648-658: LGTM!

Excellent test coverage for the new __init_subclass__ validation mechanism. The test correctly:

  • Uses pytest.raises to capture the expected TypeError
  • Defines a subclass with the prohibited torch.Tensor | None type annotation
  • Validates the error message mentions the concatenation issue
  • Ensures the restriction is properly enforced
tests/test_autobatching.py (1)

485-493: Align dtype in scatter_reduce buffer with forces dtype.

On some PyTorch versions, scatter_reduce requires matching dtypes between input and src. Use state.forces.dtype (or state.energy.dtype) instead of hardcoded float64 to avoid type promotion or runtime errors.

-        system_wise_max_force = torch.zeros(
-            state.n_systems, device=state.device, dtype=torch.float64
-        )
+        system_wise_max_force = torch.zeros(
+            state.n_systems, device=state.device, dtype=state.forces.dtype
+        )
torch_sim/optimizers.py (1)

592-595: Initializing velocities/cell_velocities with NaN is the right call.

This removes None from tensor attributes, unblocks concatenation, and the step functions zero them on first use. LGTM.

Also applies to: 867-869, 875-879, 1170-1173, 1178-1181

@curtischong curtischong force-pushed the fix-simstate-concatenation branch from 4c49f21 to 84d6750 Compare August 8, 2025 03:14
@curtischong curtischong marked this pull request as draft August 8, 2025 03:19
@curtischong
Collaborator Author

I think this PR will not totally fix the issues. This is because inside _split_state we have this function:

def split_attr(
    attr_value: torch.Tensor | None, split_sizes: list[int]
) -> list[torch.Tensor | None]:
    return (
        [None] * n_systems
        if attr_value is None
        else torch.split(attr_value, split_sizes)
    )
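For comparison, once attributes are NaN-initialized rather than None, the None branch becomes unnecessary and the split is uniform. A sketch (this split_attr is a simplified stand-in, not the repo's implementation):

```python
import torch

# With NaN-initialized attributes, every per-system attribute is a real
# tensor, so splitting a concatenated state back up needs no None branch.
def split_attr(attr_value: torch.Tensor, split_sizes: list[int]) -> tuple[torch.Tensor, ...]:
    return torch.split(attr_value, split_sizes)

velocities = torch.cat([torch.zeros(4, 3), torch.full((2, 3), float("nan"))])
per_system = split_attr(velocities, [4, 2])
print([v.shape for v in per_system])  # [torch.Size([4, 3]), torch.Size([2, 3])]
```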

Member

@CompRhys CompRhys left a comment


LGTM. When merging I will insert @t-reents email so he gets shared credit on the PR

@CompRhys CompRhys marked this pull request as ready for review August 8, 2025 13:28
@curtischong
Collaborator Author

curtischong commented Aug 8, 2025

I would like to add more tests for the split and concatenate states.

@curtischong
Collaborator Author

LGTM. When merging I will insert @t-reents email so he gets shared credit on the PR

We should just merge his PR first. Then this can be a follow-up PR which adds the extra checks.

@curtischong curtischong force-pushed the fix-simstate-concatenation branch from 6755e8b to 3d23f7d Compare August 8, 2025 14:03
Collaborator

@orionarcher orionarcher left a comment


LGTM

@t-reents
Copy link
Contributor

t-reents commented Aug 8, 2025

Thanks guys for acknowledging my PR and thanks to @curtischong for your additional work on top of it!

@curtischong
Collaborator Author

curtischong commented Aug 8, 2025

Since a few people have already seen this PR, it's probably best to add the extra tests in another PR. I'll merge this in.

@curtischong curtischong merged commit 71e1d41 into main Aug 8, 2025
93 checks passed
@curtischong curtischong deleted the fix-simstate-concatenation branch August 8, 2025 14:25
@coderabbitai coderabbitai bot mentioned this pull request Aug 10, 2025
2 tasks
@coderabbitai coderabbitai bot mentioned this pull request Aug 30, 2025
3 tasks