Nc/single out message #5
Merged
Nuno Campos (nfcampos) merged 2 commits into main on Oct 5, 2023
- The first message published to OUT ends the computation.
- .invoke() returns a single value or None.
- .stream() returns an iterator of log entries, including the final OUT entry, if any.
- Remove the .peek() method.
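The contract above can be sketched with a toy log-entry generator; `run`, the channel names, and the tuple shape are illustrative stand-ins, not the library's API:

```python
# Hypothetical computation emitting (channel, value) log entries.
def run():
    yield ("state", 1)
    yield ("state", 2)
    yield ("OUT", 42)   # the first message published to OUT ends the computation
    yield ("state", 3)  # never reached by the consumers below

def stream(entries):
    # Iterator of log entries, including the final OUT entry, if any.
    for chan, val in entries:
        yield (chan, val)
        if chan == "OUT":
            return

def invoke(entries):
    # A single value (the first OUT message) or None.
    for chan, val in stream(entries):
        if chan == "OUT":
            return val
    return None

assert list(stream(run())) == [("state", 1), ("state", 2), ("OUT", 42)]
assert invoke(run()) == 42
```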
xingshuozhu1998 pushed a commit to xingshuozhu1998/langgraph that referenced this pull request on May 1, 2026
…sage Nc/single out message
Alaina Hardie (trianglegrrl) added a commit to trianglegrrl/langgraph that referenced this pull request on May 6, 2026
…angchain-ai#7

Comprehensive handoff for a fresh-context agent picking up the combined Step 1.2b + 1.3 milestone partway through. Covers:

- Where we are (4 foundation commits landed: 720f5b0, 5325311, 3339ea4, 39206f3).
- What each foundation commit delivered (architectural surface, parity gates).
- Verification block (181 cargo, clippy clean, 73/73 + 49 + reject, 58/58 conformance, 69 parity-gate tests).
- Sub-task plan for the remaining four (#4c channel translation, langchain-ai#5 StateGraph compiler, langchain-ai#6 langgraph_rs.backend monkeypatch, langchain-ai#7 87-test parity gate green).
- Lessons learned this session: Python::with_gil → Python::attach in PyO3 0.28; PyAnyMethods::downcast deprecation; uv pip install needs VIRTUAL_ENV explicit; background maturin build can race edits; pytest-asyncio not in bridge venv (use anyio); adding Op variants requires updating all hand-coded matches; clippy 1.95 is_multiple_of + collapsed-if-let-chain lints; maturin python-source switch needed for langchain-ai#6; PyErr stash side-channel pattern for cross-Rust exception class preservation; json round-trip is fast enough for value translation.
- Open follow-ups snapshot.
- Sub-task tracking table.

Companion to STEP-1.2A-HANDOFF.md and SESSION-RESUME.md — read all three on resume.
Alaina Hardie (trianglegrrl) added a commit to trianglegrrl/langgraph that referenced this pull request on May 6, 2026
…ler (V0.1 scope)

Minimum-viable Rust `StateGraph` builder that compiles to a runnable `PregelLoop`. Sub-task langchain-ai#5 of the combined Step 1.2b + 1.3 milestone (`rust/docs/STEP-1.2B-PARTIAL-HANDOFF.md`); satisfies the original Step 1.3 sub-gate (5 fixture graphs trace-equal vs Python `StateGraph`).

The full Python `StateGraph` is 1833 lines + branch helpers; the V0.1 port is sharply scoped to what the 5-fixture sub-gate needs and what the 87-test gate (langchain-ai#7) ultimately requires from the compiler. Everything beyond that is documented as deferred to follow-ups so langchain-ai#5 doesn't drag in features the runner monkeypatch (langchain-ai#6) doesn't need.

What landed
-----------
- New `crates/langgraph-core/src/state_graph/mod.rs`:
  - `StateGraph::new(channels)` (explicit channel map; no `Annotated[T, reducer]` schema inference).
  - `add_node`, `add_edge`, `add_conditional_edges`, `set_entry_point`, `set_finish_point`, `compile`.
  - `compile()` lowers to a `PregelLoop` by generating synthetic `branch:to:NODE` `LastValue<Value>` trigger channels for every incoming-edge target. User node callables are wrapped to emit sentinel writes for direct outgoing edges + conditional-branch resolutions after the user's state-channel writes.
  - `START` / `END` constants. `BRANCH_PREFIX` reserved namespace (compile rejects collisions). `START -> node` edges return the corresponding synthetic input channel via `CompiledGraph.input_channels` so the caller knows what to put_input.
  - 9 cargo unit tests covering compile validation + linear chain + conditional fork + fan-out + branch error path.
- New `rust/ffi/langgraph-py/src/state_graph_fixtures.rs` — bridge module that builds the 5 fixture graphs (linear_chain, conditional_fork, fan_out, conditional_join, recursion) via the new `StateGraph` builder. New PyO3 entry point `run_state_graph_fixture(name, init_json) -> trace_json`.
- New `parity/scripts/test_state_graph_via_bridge.py` — 25 tests driving each fixture against the upstream Python `StateGraph` and comparing user-visible state + node execution sequence.

Out of scope (V0.1, deferred to follow-ups)
-------------------------------------------
- Schema inference from `Annotated[T, reducer]`. Caller passes a `dyn ChannelKind` map directly. Rationale: Rust has no runtime reflection over `Annotated`-style metadata; bringing that surface in is a Step 4.5-style concern (Phase 0 follow-up langchain-ai#2).
- Subgraphs. `add_node` does not accept a nested `CompiledGraph`.
- `defer=True` deferred nodes.
- Async-only nodes / `astream`. Sub-step langchain-ai#6 owns the async monkeypatch path.
- Runtime context object.
- `add_sequence` (chains of nodes).
- Node return-value coercion. Rust nodes return explicit `Vec<Write>`; Python's "return dict → infer state writes" is handled at the runner boundary in langchain-ai#6.

Parity gate
-----------
For each of the 5 fixtures: build the same logical graph with Rust `StateGraph` AND Python `StateGraph`, drive with the same input, and compare:

* user-visible state-channel final values (must match);
* node execution order (must match for deterministic graphs);
* for parallel branches (fan_out, conditional_join), the *set* of nodes that fired per superstep (parallel ordering canonicalised by Pregel).

What the gate caught
--------------------
- `recursion` final counter matches Python; total `step` fire count is documented-divergent: Python's `add_conditional_edges` evaluates the branch on POST-write state while the V0.1 Rust builder evaluates on PRE-write state. Same divergence as the Step 1.2a hand-rolled recursion fixture; final-state parity is the actual claim.
- Branch path-map keys must match the resolved-key lookup. An unknown key surfaces as `PregelError::NodeFailed { node, message }` (the same path Python exception classes use in #4b).
- `thiserror` magically treats fields named `source` as `#[source]` — caught at compile time, renamed to `node`.
- `PregelLoop` / `CompiledGraph` need explicit `Debug` (the bridge fields are PyO3-flavoured and don't auto-derive). A manual impl on `CompiledGraph` keeps the public surface usable from `unwrap_err()` in tests.

Test counts
-----------
- Cargo workspace: 220 passed (was 211; +9 state_graph unit tests). Clippy clean.
- Phase 0: 73/73 + 49 + strict reject; 58 conformance.
- Phase 1 + 1.2b foundation + #4c + langchain-ai#5: 129 passed (was 104; +25 StateGraph parity tests).
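The synthetic-trigger-channel lowering can be sketched in Python (the real builder is Rust; the names, shapes, and serial per-superstep order here are illustrative stand-ins, not the crate's API):

```python
# Toy model of compile(): each edge target gets a synthetic last-value
# "branch:to:NODE" trigger channel, and a node fires in a superstep when
# its trigger was written in the previous superstep.
BRANCH_PREFIX = "branch:to:"  # reserved namespace, as in the commit above

def run(nodes, edges, entry, state, max_steps=25):
    triggers = {entry}             # START -> entry seeds the input channel
    order = []
    for _ in range(max_steps):
        if not triggers:
            break                  # no writes last superstep: loop halts
        fired = sorted(triggers)   # deterministic order for this sketch
        triggers = set()
        for name in fired:
            order.append(name)
            state = nodes[name](state)            # user state-channel writes
            for target in edges.get(name, ()):    # sentinel writes for edges
                triggers.add(target)              # i.e. BRANCH_PREFIX + target
    return state, order

# Linear chain a -> b over a single integer "state" channel:
nodes = {"a": lambda s: s + 1, "b": lambda s: s * 2}
edges = {"a": ["b"]}
final, order = run(nodes, edges, "a", 1)
assert (final, order) == (4, ["a", "b"])
```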
Alaina Hardie (trianglegrrl) added a commit to trianglegrrl/langgraph that referenced this pull request on May 6, 2026

Alaina Hardie (trianglegrrl) added a commit to trianglegrrl/langgraph that referenced this pull request on May 6, 2026
…doff for langchain-ai#6/langchain-ai#7

Milestone update for the combined Step 1.2b + 1.3 final stretch.

Locked architectural decision (2026-05-06)
------------------------------------------
The original plan §6 and `phase-1-followups.md` entry langchain-ai#3 §5 left the door open to "a tighter cut decided in implementation" for the `langgraph_rs.backend` monkeypatch — i.e., replacing only `tick()` (approach B) or only `_algo.apply_writes` + `prepare_next_tasks` (approach C) instead of the full `SyncPregelLoop` (approach A). The user has explicitly chosen approach A: full `SyncPregelLoop` replacement. Reasoning captured in the new handoff doc:

* A is the only approach where `LANGGRAPH_BACKEND=rust` actually means "Rust drives the loop" — B and C still leave Python orchestrating most per-tick work.
* B's per-tick re-sync of channel state is wasteful and adds an extra parity surface that's a correctness risk we don't need.
* C is essentially a third copy of `test_pregel_differential.py`'s coverage — it buys us nothing new.
* "Done right the first time" — the full replacement is bigger but architecturally honest; a tighter cut is technical debt that would need to be redone before Step 1.4 streaming or Phase 2.

The "re-build from checkpoint each tick" guidance from the original phase-1-followups langchain-ai#3 §6 is also superseded: under approach A, Rust state is constructed once at `__enter__` (Python → Rust via `_channel_translate.extract_state`) and applied once at `__exit__` (Rust → Python via `apply_state`). No per-tick re-sync.

What this commit changes
------------------------
- New `rust/docs/STEP-1.2B-FINAL-HANDOFF.md`: the comprehensive handoff brief for the next session picking up langchain-ai#6 and langchain-ai#7. Covers:
  * Where we are (status table through `c03c7ac6`).
  * Locked architectural decision (approach A).
  * langchain-ai#6 sub-step breakdown (#6a Maturin layout switch → #6b backend.py monkeypatch → #6c Pregel runtime bridge entry point).
  * langchain-ai#7 iteration loop (87-test gate).
  * `__init__.py` re-export shim contents (drop-in for the layout switch).
  * Replaced-symbols list pattern for `backend.py`.
  * `RustBackendUnsupported` rejection sites for the deliberately out-of-scope feature families (custom channels, subgraphs, Send, interrupts, stream modes outside values/updates).
  * Verification block, hard rules, bridge install gotcha, lessons-learned forwarding from prior handoffs.
- `rust/docs/STEP-1.2B-PARTIAL-HANDOFF.md`: prepended a SUPERSEDED notice pointing at the new final handoff for langchain-ai#6/langchain-ai#7. The partial-handoff content is preserved as historical context for what shipped in #4c and langchain-ai#5.
- `rust/docs/phase-1-followups.md` entry langchain-ai#3 §5 + §6: amended to record the approach A decision and the supersession of the per-tick-resync line.
- `.omc/plans/langgraph-rust-port-2026-04-30.md` §6 Step 1.2b+1.3 Locked decisions §4 + §5: same amendments, with a pointer to the final handoff doc.

What's not changing
-------------------
The 5 hard architectural decisions in §6 ("combined milestone", "async runtime: pyo3-async-runtimes", "GIL discipline", "errors via PregelExecutionError::NodeFailed", "channel translation by class name") remain locked. Approach A is the runtime-shape decision that sits *above* those.

Test counts
-----------
Unchanged — pure docs commit. Latest baseline (HEAD = `c03c7ac6`):

* Cargo: 220 passed, clippy clean.
* Phase 0: 73/73 + 49 + strict reject; 58 conformance.
* Phase 1 + 1.2b foundation + #4c + langchain-ai#5: 129 passed.
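The once-at-`__enter__` / once-at-`__exit__` sync shape of approach A can be sketched as a context manager; the class, the dict-based "Rust state", and the `step` callable are hypothetical stand-ins for the real bridge calls (`extract_state` / `apply_state`):

```python
# Sketch of the approach-A runtime shape: translate Python channel state
# into the Rust-side representation once on entry, run the whole loop
# there, and apply the result back once on exit. No per-tick re-sync.
class RustLoopShape:
    def __init__(self, channels):
        self.channels = channels          # Python-side channel values

    def __enter__(self):
        # stand-in for extract_state: Python -> Rust, done exactly once
        self._rust_state = dict(self.channels)
        return self

    def run(self, step):
        # the whole loop runs against the Rust-side state
        self._rust_state = step(self._rust_state)

    def __exit__(self, *exc):
        # stand-in for apply_state: Rust -> Python, done exactly once
        self.channels.update(self._rust_state)
        return False

chans = {"counter": 0}
with RustLoopShape(chans) as loop:
    loop.run(lambda s: {"counter": s["counter"] + 3})
assert chans["counter"] == 3   # visible in Python only after __exit__
```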
Alaina Hardie (trianglegrrl) added a commit to trianglegrrl/langgraph that referenced this pull request on May 6, 2026
Mechanical, low-risk preparatory commit for sub-step #6b (the `langgraph_rs.backend` Python module + monkeypatch). Sub-task #6a of the combined Step 1.2b + 1.3 milestone (`rust/docs/STEP-1.2B-FINAL-HANDOFF.md`); test counts unchanged.

Maturin's default layout puts the compiled extension at the top level (`langgraph_rs.so`), which makes adding a Python sibling file impossible without renaming. #6b needs to add `backend.py` next to the compiled module, so we switch to the `python-source = "python"` layout, where the cdylib lives at `langgraph_rs._lib` and a hand-written `__init__.py` re-exports its public surface.

What landed
-----------
- `rust/ffi/langgraph-py/pyproject.toml`: added `python-source = "python"`, changed `module-name = "langgraph_rs"` to `"langgraph_rs._lib"`.
- `rust/ffi/langgraph-py/src/lib.rs`: renamed `#[pymodule] fn langgraph_rs(...)` to `#[pymodule] fn _lib(...)` to match the new module name.
- New `rust/ffi/langgraph-py/python/langgraph_rs/__init__.py`: re-exports the 12 public symbols + `__version__` so existing `import langgraph_rs` and `from langgraph_rs import roundtrip` call sites in every parity script keep working unchanged.
- `.gitignore`: ignore the in-tree `python/langgraph_rs/_lib*.so` build artifact that `maturin build` drops into the package source tree under the new layout. The wheel under `target/wheels/` is the source of truth.

Out of scope
------------
- No backend monkeypatch yet — that's #6b.
- No new bridge symbols — `_lib` exposes the exact same surface as the old top-level `langgraph_rs` did.

Parity gate
-----------
The handoff calls #6a's gate "test counts stay constant across this commit". After rebuilding the wheel and reinstalling into the bridge venv via the canonical procedure, every existing gate is green and equal in count to the pre-commit baseline.

What the gate caught
--------------------
- Maturin's `python-source` mode copies the compiled `.so` into the source tree as a side effect of `maturin build` (it's the development-import target). The wheel still contains the authoritative copy; the in-tree one is a build artifact and must be gitignored. Caught by the first `git status` after the wheel build.

Test counts (unchanged)
-----------------------
- Cargo workspace: 220 passed; clippy clean.
- Phase 0: 73/73 corpus + 49 allowlist + strict reject; 58 conformance pass / 0 fail.
- Phase 1 + 1.2b foundation + #4c + langchain-ai#5: 129 passed.
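The re-export shim's job can be illustrated with stdlib module objects — a sketch, not the actual `__init__.py` (of the 12 re-exported symbols, only `roundtrip` and `__version__` are named in this commit, so only those appear here):

```python
import sys
import types

# Hypothetical stand-ins for the new layout: `langgraph_rs._lib` plays the
# compiled cdylib, and `pkg` plays the hand-written __init__.py that lifts
# the _lib surface to the package top level.
_lib = types.ModuleType("langgraph_rs._lib")
_lib.roundtrip = lambda b: b          # stand-in for a real bridge function
_lib.__version__ = "0.0.0"

pkg = types.ModuleType("langgraph_rs")
pkg.roundtrip = _lib.roundtrip        # what the re-export shim does
pkg.__version__ = _lib.__version__

sys.modules["langgraph_rs._lib"] = _lib
sys.modules["langgraph_rs"] = pkg

# Existing call sites keep working unchanged under the new layout:
from langgraph_rs import roundtrip
assert roundtrip(b"abc") == b"abc"
```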
Alaina Hardie (trianglegrrl) added a commit to trianglegrrl/langgraph that referenced this pull request on May 6, 2026
…hon scaffolding)

Land the Python-side activation surface for the Rust backend. Sub-task #6b of the combined Step 1.2b + 1.3 milestone (`rust/docs/STEP-1.2B-FINAL-HANDOFF.md`); the Rust runtime bridge entry point follows in #6c, and the full 87-test gate is langchain-ai#7.

Approach choice (locked in this commit body)
--------------------------------------------
The handoff's "approach A — full SyncPregelLoop replacement" left an implementation question: stand up a parallel duck-typed shadow class, or subclass `SyncPregelLoop` and override only the algorithmic core? We chose **subclass + override** after surfacing the trade-off:

* The subclass inherits `BackgroundExecutor` / `ExitStack` / lifecycle event queue / status machinery / checkpoint persistence / `accept_push` / `output_writes` / `match_cached_writes` from upstream. None of those is a parity surface we want to grow in Python, and reimplementing 30+ methods just to hit `Pregel.invoke`'s read sites is the wrong cost shape for V0.1.
* Approach A's distinguishing property over B — "Rust state lives across the whole graph execution; no per-tick *bidirectional* re-sync" — is preserved by syncing once at `__enter__` (Python channels → Rust) and once at `__exit__` (Rust → Python channels). A subclass that overrides the algorithmic core (`__enter__` validation + Rust seed, `tick`/`after_tick` per-superstep BSP work, `__exit__` flush) gets approach A's behaviour with significantly less Python-side parity surface than a duck-typed shadow.

What landed
-----------
- `rust/ffi/langgraph-py/python/langgraph_rs/backend.py` — the activation module:
  * `REPLACED_SYMBOLS` tuple (top-of-file, auditable diff against upstream): both `langgraph.pregel._loop.{Sync,Async}PregelLoop` AND `langgraph.pregel.main.{Sync,Async}PregelLoop`. The second pair is load-bearing — `pregel/main.py` imports the loop classes directly (`from langgraph.pregel._loop import SyncPregelLoop, AsyncPregelLoop`), so the runtime call sites at `main.py:2847` (sync) and `main.py:3299` (async) read the local module attribute. A single-namespace patch leaves `Pregel.stream` / `astream` instantiating the upstream class. Caught by smoke-test langchain-ai#3 below.
  * `_RustSyncPregelLoop(SyncPregelLoop)` — subclass with overrides:
    - `__enter__` calls `super().__enter__()`, then iterates `self.channels` validating each via `_channel_translate.class_name` (raises `RustBackendUnsupported` for custom or subclassed channels); then rejects unsupported `interrupt_before` / `interrupt_after` / non-`{values,updates}` stream modes.
    - `tick()` raises `NotImplementedError` pointing at sub-step #6c until the Rust runtime bridge entry point lands. Auditably non-functional rather than a silent fallback.
  * `_RustAsyncPregelLoop(AsyncPregelLoop)` — subclass that allows construction (so `Pregel.astream`'s `async with` setup doesn't crash before the rejection) but raises `RustBackendUnsupported` from `__aenter__`. Async parity is a deferred follow-up after the 87-test sync gate is green.
  * `_install_monkeypatches()` is idempotent and gated by `LANGGRAPH_BACKEND=rust` at import time. `is_active()` exposes the install state to tests.
- `rust/ffi/langgraph-py/python/langgraph_rs/_channel_translate.py` — moved (via `git mv`) from `parity/scripts/_channel_translate.py`. Production backend code shouldn't depend on parity-test infrastructure; the helper now lives in the package and the parity test imports it from there. (Functional contents unchanged.)
- `parity/scripts/test_channel_translate.py` — import switched from the `sys.path.insert(...)` + `from _channel_translate import ...` shim to `from langgraph_rs._channel_translate import ...`. 35 tests pass unchanged.
- `parity/scripts/test_backend_activation.py` — new; 10 smoke tests pinning #6b's surface (replacement list, both-namespace patch, subclass MRO, idempotency, every rejection site, #6c stub pointer, async-`__aenter__` rejection).
- `conftest.py` (project root) — top-level pytest hook that imports `langgraph_rs.backend` when `LANGGRAPH_BACKEND=rust` is set. Lives at the repo root so it covers both `parity/scripts/` and `libs/langgraph/tests/` (the 87-test gate's home for langchain-ai#7); does nothing without the env var.

Out of scope / explicitly rejected (`RustBackendUnsupported`)
-------------------------------------------------------------
- Custom user-defined channel classes (anything not in the 10-class stdlib set surfaced by `langgraph_rs._channel_translate`).
- `interrupt_before` / `interrupt_after` (V0.1 deliberately excludes interrupts; the 87-test filter excludes them via `-k "not interrupt"`, but a filter slip surfaces here).
- Stream modes outside `values` and `updates`.
- Async (`AsyncPregelLoop`) — symbol replaced symmetrically but rejects at `__aenter__`.

Out of scope / deferred to #6c
------------------------------
- Subgraphs / `Send` / nested-`Pregel` rejection. Those need per-node introspection at `__enter__` time; deferred to #6c alongside the actual Rust call so we don't grow validation that isn't yet exercised.
- The actual Rust call. `tick()` raises `NotImplementedError` pointing at #6c. Activating `LANGGRAPH_BACKEND=rust` and invoking any graph that passes the rejection sites will fail predictably.

Parity gate
-----------
- Without `LANGGRAPH_BACKEND=rust`: every existing parity gate unchanged in count and result. Importing the module is a no-op; the upstream `SyncPregelLoop`/`AsyncPregelLoop` symbols are untouched.
- With `LANGGRAPH_BACKEND=rust`: 10 new smoke tests in `test_backend_activation.py` pin both the replacement surface and the rejection paths. No graph actually runs end-to-end — that's #6c.

What the gate caught
--------------------
1. A single-namespace monkeypatch is insufficient. The first wiring patched only `langgraph.pregel._loop.{Sync,Async}PregelLoop`; activating the env var and calling `graph.invoke(...)` did NOT raise the #6c `NotImplementedError`, because `pregel/main.py` had imported the class directly into its module namespace at langgraph load time, and the `with SyncPregelLoop(...)` call site read the local reference. Fixed by extending `_install_monkeypatches` to patch `langgraph.pregel.main` as well, and pinning that in `REPLACED_SYMBOLS`.
2. `_channel_translate.py` was reachable from the parity tests via a `sys.path.insert(...)` shim, but the backend module needs it as a real package import. Moved into `langgraph_rs/` so production code doesn't depend on the `parity/scripts/` layout.
3. The async test originally tried to introspect docstrings on the override; that was over-engineered and brittle. Replaced with a direct `asyncio.run(instance.__aenter__())` pytest.raises check.

Test counts
-----------
- Cargo workspace: 220 passed; clippy clean. No Rust changes.
- Phase 0: 73/73 corpus + 49 allowlist + strict reject; 58 conformance pass / 0 fail.
- Phase 1 + 1.2b foundation + #4c + langchain-ai#5 + #6a: 129 passed (unchanged; no LANGGRAPH_BACKEND set).
- Phase 1 + 1.2b foundation + #4c + langchain-ai#5 + #6a + #6b: 139 passed (+10 backend activation tests).
Alaina Hardie (trianglegrrl) added a commit to trianglegrrl/langgraph that referenced this pull request on May 6, 2026
…oint

Land the Rust runtime bridge entry point that drives the Pregel loop end-to-end for an arbitrary Python ``Pregel`` topology, plus the ``_RustSyncPregelLoop.tick`` wiring that calls into it. Sub-task #6c of the combined Step 1.2b + 1.3 milestone (final non-langchain-ai#7 sub-task per ``rust/docs/STEP-1.2B-FINAL-HANDOFF.md``); the 87-test gate (langchain-ai#7) follows.

What landed
-----------
- ``rust/ffi/langgraph-py/src/pregel_topology.rs`` (new) — bridge fn ``run_pregel_loop_topology`` that takes a Python topology (per-channel ``{class, init, state}``, per-node ``{triggers, reads}``, plus a ``dict[node_name, Python callable]``) and runs the loop end-to-end. Returns msgpack ``{final_state, trace, step, status}``. Run-to-completion: the Python subclass calls this once at first ``tick``; the upstream ``while loop.tick():`` body runs zero iterations (Rust already invoked every node via the python_node callback wrapper). ``NodeFailed`` errors fish the original ``PyErr`` out of the ``PyErrStash`` and re-raise it so ``pytest.raises(ValueError)`` semantics carry across the boundary.
- ``rust/ffi/langgraph-py/src/channel_translate.rs`` — refactored to expose ``build_channel_kind(class, init, state) -> Box<dyn ChannelKind>`` and ``extract_state_from_channel(class, &chan) -> Value``. The runtime path (sub-step #6c) needs to *keep* the constructed channel and run the loop against it; the legacy round-trip path keeps its bytes-in/bytes-out shape. ``BinaryOperatorAggregate`` runtime channels use the panic-stub binop — the actual fold runs in Python via the node callback wrapper. Native binops are deferred to a langchain-ai#7 follow-up once a failing test demands it.
- ``rust/crates/langgraph-core/src/pregel/loop_.rs`` — added ``pub fn step()``, ``pub fn stop()``, and ``pub fn iter_channels()`` so the bridge crate can populate the run-result envelope (out-of-steps detection, final-state extraction). All existing ``pub(crate)`` field invariants kept; the accessors return references / copies of primitive fields.
- ``rust/ffi/langgraph-py/src/lib.rs`` — wired ``run_pregel_loop_topology`` into the PyO3 module surface.
- ``rust/ffi/langgraph-py/python/langgraph_rs/backend.py`` — ``_RustSyncPregelLoop.tick`` now calls into Rust on first invocation:
  * Builds per-node Python wrappers that bridge between Rust's ``NodeInput`` (single value or dict) and the upstream PregelNode pipeline (mapper → bound → writers). Captures channel writes via ``CONFIG_KEY_SEND``; provides ``CONFIG_KEY_READ`` over a local-state shadow seeded from the read input and updated as writes flow, so conditional edges that read channels the node already wrote (the common ``add_conditional_edges`` pattern in state graphs) see fresh values. Passes ``CONFIG_KEY_TASK_ID`` so writers don't blow up on missing-key lookups.
  * Packs channel specs (class + init args + encoded state) and node specs (triggers + reads) for the bridge.
  * Decodes the result envelope and applies ``final_state`` back to ``self.channels`` via ``_channel_translate.apply_state`` so upstream's ``__exit__`` ``read_channels(self.channels, self.output_keys)`` sees the post-loop values.
  * Sets ``self.tasks = {}`` so the upstream ``runner.tick(loop.tasks.values())`` loop runs zero iterations.
  * Sets ``self._put_checkpoint_fut`` to a completed Future so the ``durability == "sync"`` path's ``.result()`` doesn't hang.
  * Emits a final ``values`` stream chunk so ``Pregel.invoke``'s stream consumer sees the final state.
- ``parity/scripts/test_backend_activation.py`` — replaced the ``#6c stub`` test with three end-to-end smoke tests: ``test_trivial_graph_round_trips_through_rust`` (single-node incr), ``test_two_node_chain_round_trips_through_rust`` (incr → doub), ``test_conditional_fork_round_trips_through_rust`` (router → conditional → evens|odds). Each is a minimal parity claim: the same input through the Rust backend produces the same final state as upstream Python.

Out of scope / known-incomplete (handed to langchain-ai#7)
----------------------------------------------------------
- ``BinaryOperatorAggregate`` native binops. Rust uses the panic-stub; folds run via the Python node callback. The actual ``operator.add`` / ``messages.add_messages`` reducers fire from Python, not Rust. When a langchain-ai#7 test surfaces a case the panic-stub hits, ``build_channel_kind`` will gain a named-reducer dispatch or a Python-callback binop wrapper.
- Branches that read channels the node didn't write/read (the ``CONFIG_KEY_READ`` shadow only carries per-node-local state). Most ``add_conditional_edges`` patterns are local-state only; cross-channel reads are a langchain-ai#7 widening target.
- ``BackgroundExecutor`` parallelism within a superstep. The Rust loop currently invokes nodes serially. Upstream's ``runner.tick`` parallelism doesn't apply because we never enter that code path under the Rust backend; if a langchain-ai#7 test exercises parallel-write semantics that the serial Rust order doesn't match, that's a langchain-ai#7 widening target too.
- Topic / NamedBarrier / DeltaChannel runtime paths. The ``build_channel_kind`` dispatch covers all 10 stdlib classes, but only ``LastValue``-class channels are exercised by the smoke tests; the others land in langchain-ai#7's full gate.

Parity gate
-----------
- ``parity/scripts/test_backend_activation.py`` — 12 tests (10 #6b activation + replacement + 2 #6c smoke). All pass with ``LANGGRAPH_BACKEND=rust`` and ``_install_monkeypatches()`` active.
- All pre-existing parity gates remain green (no env var, no monkeypatch, upstream Python loop unchanged).

What the gate caught
--------------------
1. ``langgraph.pregel._loop.SyncPregelLoop`` import-site coverage: already nailed in #6b but worth re-pinning — without patching ``langgraph.pregel.main`` too, ``Pregel.stream``'s ``with SyncPregelLoop(...)`` constructor reads the local module reference, not the canonical one.
2. ``py_result_to_writes`` in ``python_node.rs`` extracts ``(String, Bound<PyAny>)`` tuples — Python lists don't match, so the wrapper had to return tuples, not lists. ``TypeError: 'list' object is not an instance of 'tuple'`` was the catch.
3. The conditional-edge ``Branch._route`` calls ``reader(config)``, which dereferences ``config[CONF][CONFIG_KEY_READ]``. Without that key the call dies with ``RuntimeError: Not configured with a read function`` from upstream's ``do_read``. Wired a local-state-shadow reader into the wrapper config.
4. ``PregelLoop`` fields are ``pub(crate)`` — the bridge crate needs read accessors. Added ``step()`` / ``stop()`` / ``iter_channels()`` on the loop with explicit doc-comments linking to #6c.
5. ``checkpoint_value`` is the trait method, not ``checkpoint_json`` (the latter doesn't exist). Caught at compile time.

Test counts
-----------
- Cargo workspace: 220 passed; clippy clean.
- Phase 0: 73/73 corpus + 49 allowlist + strict reject; 58 conformance pass / 0 fail.
- Phase 1 + 1.2b foundation + #4c + langchain-ai#5 + #6a + #6b + #6c: 141 passed (+2 vs #6b: ``test_two_node_chain_round_trips_through_rust`` and ``test_conditional_fork_round_trips_through_rust``).
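The run-to-completion ``tick`` shape can be sketched with a trivial stand-in loop (illustrative only — not the real ``_RustSyncPregelLoop``; the "Rust did the work" step here is a one-line state update):

```python
from concurrent.futures import Future

# Sketch: the first tick() drives the whole loop, empties self.tasks so the
# upstream runner iterates zero times, and stashes a completed Future so a
# durability == "sync" style .result() returns immediately.
class RunToCompletionLoop:
    def __init__(self, state):
        self.state = state
        self.tasks = {"pending": object()}
        self._done = False

    def tick(self):
        if self._done:
            return False              # upstream `while loop.tick():` exits
        # Stand-in for the single bridge call that runs every superstep:
        self.state = {"counter": self.state["counter"] + 1}
        self.tasks = {}               # runner.tick(loop.tasks.values()) no-ops
        fut = Future()
        fut.set_result(None)          # completed future: .result() won't hang
        self._put_checkpoint_fut = fut
        self._done = True
        return True

loop = RunToCompletionLoop({"counter": 0})
while loop.tick():
    pass                              # body schedules zero upstream tasks
assert loop.state == {"counter": 1}
assert loop._put_checkpoint_fut.result() is None
```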
Alaina Hardie (trianglegrrl) added a commit to trianglegrrl/langgraph that referenced this pull request on May 6, 2026
…nly gate green

Closes the combined Step 1.2b + 1.3 milestone. The ``LANGGRAPH_BACKEND=rust`` filter on ``libs/langgraph/tests/test_pregel.py`` matches **81 tests** (the handoff's "87" estimate was written before the test set drifted; the ``-k "memory and not streaming and not interrupt and not subgraph and not send"`` filter is verbatim). All 81 pass on the first run after sub-step #6c landed — no triage iteration was needed.

What landed
-----------
- ``parity/scripts/run_87_test_gate.sh`` — runnable wrapper that sets ``NO_DOCKER=true`` (skips the redis/postgres fixtures the bridge venv doesn't carry) and ``LANGGRAPH_BACKEND=rust``, points pytest at the filter, and forwards extra args. A single command for re-running the gate locally.
- ``rust/ffi/langgraph-py/pyproject.toml`` — added a ``[gate-87]`` dependency group capturing the four collection-time deps the upstream conftest pulls in (``redis``, ``pytest-mock``, ``syrupy``, ``pycryptodome``). The bridge venv was missing these because that set is what ``libs/langgraph/.venv`` carries for its own test suite, not what the bridge needs for codec parity. Documenting it in the dependency group keeps the install command self-describing (``uv pip install --group gate-87``).
- ``rust/docs/phase-1-followups.md`` — entry langchain-ai#3 (async PyO3 bridge + ``LANGGRAPH_BACKEND=rust`` wiring) marked **closed**. Added an amendment note that the implementation chose subclass + override for ``_RustSyncPregelLoop`` (rather than the literal stand-alone duck-typed shadow class the prose example sketched), with the rationale matching the design discussion at the start of #6b. The async surface stays deferred — ``_RustAsyncPregelLoop`` raises at ``__aenter__`` until a phase that needs streaming / ``astream`` parity owns it.

Bridge-venv setup deltas (one-time, since this commit)
------------------------------------------------------
- ``redis``, ``pytest-mock``, ``syrupy``, ``pycryptodome`` installed via the new ``gate-87`` dependency group.
- ``libs/checkpoint-sqlite`` and ``libs/checkpoint-postgres`` installed in editable mode so the conftest can import ``langgraph.cache.sqlite``. (The other libs were already editable-installed by Phase 0.)

Parity gate (the milestone gate)
--------------------------------

::

    NO_DOCKER=true LANGGRAPH_BACKEND=rust \
    rust/ffi/langgraph-py/.venv/bin/python -m pytest \
        libs/langgraph/tests/test_pregel.py \
        -k "memory and not streaming and not interrupt and not subgraph and not send"

Result: ``81 passed, 376 deselected in 9.98s``.

A sanity check confirmed the Rust runtime is genuinely driving the loop (not a silent fallback to upstream Python): instrumenting ``run_pregel_loop_topology`` with a call counter shows it's invoked on every ``graph.invoke`` under ``LANGGRAPH_BACKEND=rust``, both the ``langgraph.pregel._loop`` and ``langgraph.pregel.main`` namespaces resolve ``SyncPregelLoop`` to ``_RustSyncPregelLoop``, and ``backend.is_active()`` returns ``True``.

What the gate caught
--------------------
Nothing. The 81 tests passed on first run after the bridge wheel was rebuilt with #6c and the bridge venv had its collection-time deps installed. The handoff explicitly warned to "expect failures to send you back to #4c / langchain-ai#5 / langchain-ai#6 for incremental fixes"; that budget went unused. Plausible reasons:

1. The four channel-translation rejection sites (``CONFIG_KEY_READ`` shadow, panic-stub binop, custom-channel gate, async ``__aenter__``) cleanly cover the corners that would have been the most likely failure surfaces. The 87-test ``-k`` filter excludes the patterns those rejections would trip on (``streaming``, ``interrupt``, ``subgraph``, ``send``).
2. The local-state shadow ``CONFIG_KEY_READ`` reader the wrapper provides is enough for every conditional-edge test in the filter — none of them read channels the routing node didn't write.
3. The translation surface from sub-step #4c (1,200+ hypothesis iterations across the 10 stdlib channel classes) was already verified, so the per-class state encoding round-trips cleanly under load.

Test counts
-----------
- Cargo workspace: 220 passed; clippy clean.
- Phase 0: 73/73 corpus + 49 allowlist + strict reject; 58 conformance pass / 0 fail.
- Phase 1 + 1.2b foundation + #4c + langchain-ai#5 + #6a + #6b + #6c (LANGGRAPH_BACKEND unset): 141 passed.
- **Combined Step 1.2b + 1.3 milestone gate (LANGGRAPH_BACKEND=rust): 81 passed / 0 failed.**
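The call-counter sanity check can be sketched with a stdlib wrapper; `bridge_entry` and `invoke` here are stand-ins for `run_pregel_loop_topology` and `graph.invoke`, not the real bridge:

```python
import functools

# Wrap the bridge entry point with a call counter to prove invoke() really
# routes through it, rather than silently falling back to the upstream loop.
def counted(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        wrapper.calls += 1
        return fn(*args, **kwargs)
    wrapper.calls = 0
    return wrapper

def bridge_entry(topology):
    # stand-in for run_pregel_loop_topology
    return {"status": "done"}

bridge_entry = counted(bridge_entry)

def invoke(graph):
    # the path under test: every invoke must hit the bridge entry point
    return bridge_entry(graph)

invoke({"nodes": {}})
invoke({"nodes": {}})
assert bridge_entry.calls == 2
```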