
feat(memory): harden grounded REM extraction #63297

Merged
mbelinky merged 4 commits into main from mariano/memory-grounded-extractor
Apr 8, 2026

Conversation

@mbelinky
Contributor

@mbelinky mbelinky commented Apr 8, 2026

Summary

  • Problem: the grounded backfill lane still over-read noisy technical days and kept some multi-fact person lines as bundled memory candidates.
  • Why it matters: without stricter extraction, the diary/backfill output stays too loggy to trust as future warm-memory evidence.
  • What changed: tightened grounded What Happened selection, suppressed monitoring sludge, split durable subfacts out of multi-fact candidate lines, and made reflections more relationship-aware when that is the real signal.
  • What did NOT change (scope boundary): this PR does not change the live promotion lane yet and does not add UI.

Change Type (select all)

  • Feature
  • Bug fix
  • Refactor required for the fix
  • Docs
  • Security hardening
  • Chore/infra

Scope (select all touched areas)

  • Memory / storage
  • Gateway / orchestration
  • Skills / tool execution
  • Auth / tokens
  • Integrations
  • API / contracts
  • UI / DX
  • CI/CD / infra

Linked Issue/PR

Root Cause / Regression History (if applicable)

  • Root cause: the base backfill lane extracted grounded summaries, but it still treated some monitoring/logistics residue and bundled person lines as if they were equally memory-worthy.
  • Missing detection / guardrail: the initial lane had no targeted tests for noisy operational days or multi-fact relationship lines.
  • Prior context (git blame, prior PR, issue, or refactor if known): follow-up hardening on top of feat(memory): add grounded REM backfill lane #63273.
  • Why this regressed now: N/A
  • If unknown, what was ruled out: N/A

Regression Test Plan (if applicable)

  • Coverage level that should have caught this:
    • Unit test
    • Seam / integration test
    • End-to-end test
    • Existing coverage already sufficient
  • Target test or file: extensions/memory-core/src/cli.test.ts
  • Scenario the test should lock in: persistence-shaped fact selection, monitoring-day suppression, and atomic durable-candidate extraction from mixed person lines.
  • Why this is the smallest reliable guardrail: the grounded extractor behavior is exercised directly through the CLI preview path.
  • Existing test that already covers this (if any): base backfill tests live in feat(memory): add grounded REM backfill lane #63273.
  • If no new test is added, why not: N/A
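As a rough illustration of the ordering-insensitive assertion style this plan implies (the helper and sample claims are hypothetical; the real coverage lives in extensions/memory-core/src/cli.test.ts):

```typescript
// Hypothetical helper: compare durable-candidate lists as sets, so the
// guardrail does not break when equal-score claims come back in a
// different order. Not an existing export of the memory-core package.
function sameMembers(actual: string[], expected: string[]): boolean {
  if (actual.length !== expected.length) return false;
  const a = [...actual].sort();
  const b = [...expected].sort();
  return a.every((value, i) => value === b[i]);
}

// Example: atomized person-line claims match regardless of ordering.
const matches = sameMembers(
  ["Bex — girlfriend", "Bunji — partner"],
  ["Bunji — partner", "Bex — girlfriend"],
);
```

An assertion shaped like `expect(sameMembers(candidates, expected)).toBe(true)` stays stable under future scoring tweaks that reorder candidates.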

User-visible / Behavior Changes

  • What Happened prefers more persistence-shaped grounded facts.
  • Monitoring-heavy days stop surfacing alert sludge as memory facts.
  • Candidates and Possible Lasting Updates can split durable subfacts out of mixed person lines.
  • Reflections become less generic on relationship-heavy days.

Diagram (if applicable)

Before:
[daily note] -> [grounded backfill] -> [bundled/noisy facts]

After:
[daily note] -> [grounded backfill] -> [durable facts + cleaner candidates + sharper reflections]

Security Impact (required)

  • New permissions/capabilities? (Yes/No) No
  • Secrets/tokens handling changed? (Yes/No) No
  • New/changed network calls? (Yes/No) No
  • Command/tool execution surface changed? (Yes/No) No
  • Data access scope changed? (Yes/No) No
  • If any Yes, explain risk + mitigation:

Repro + Verification

Environment

  • OS: macOS authoring, mb-server for gates
  • Runtime/container: Node 22 / Vitest 4
  • Model/provider: N/A
  • Integration/channel (if any): N/A
  • Relevant config (redacted): default memory workspace layout

Steps

  1. Run `openclaw memory rem-harness --json --grounded --path <history-dir>` on noisy and relationship-heavy day files.
  2. Inspect What Happened, Reflections, and Candidates.

Expected

  • Noisy operational days stay sparse.
  • Durable relationship facts survive without dragging transient venue/travel details into memory candidates.

Actual

  • Matches expected on the focused gate scenarios.

Evidence

  • Failing test/log before + passing after
  • Trace/log snippets
  • Screenshot/recording
  • Perf numbers (if relevant)

Human Verification (required)

  • Verified scenarios: noisy monitoring day suppression, persistence-weighted fact selection, atomic durable-candidate extraction.
  • Edge cases checked: multi-fact relationship lines, ordering-insensitive durable candidate assertions.
  • What you did not verify: UI rendering; that remains in a separate stacked PR.

Review Conversations

  • I replied to or resolved every bot review conversation I addressed in this PR.
  • I left unresolved only the conversations that still need reviewer or maintainer judgment.

Compatibility / Migration

  • Backward compatible? (Yes/No) Yes
  • Config/env changes? (Yes/No) No
  • Migration needed? (Yes/No) No
  • If yes, exact upgrade steps:

Risks and Mitigations

  • Risk: overly strict extraction could under-surface one-off but real facts.
    • Mitigation: this ships as a focused stacked hardening PR on top of the reversible backfill foundation, so we can tune it further before feeding live promotion.

@aisle-research-bot
Copy link
Copy Markdown

aisle-research-bot Bot commented Apr 8, 2026

🔒 Aisle Security Analysis

We found 4 potential security issues in this PR:

  1. 🟠 High: Potential secret/credential persistence in REM grounded evidence extraction
  2. 🟡 Medium: Algorithmic complexity DoS in atomizeClaimText clause splitting
  3. 🟡 Medium: Unbounded markdown traversal and heavy regex scoring can cause CPU/memory DoS
  4. 🟡 Medium: PII retention amplification by atomizing person/relationship lines into multiple durable memory candidates
1. 🟠 Potential secret/credential persistence in REM grounded evidence extraction
  • Severity: High
  • CWE: CWE-200
  • Location: extensions/memory-core/src/rem-evidence.ts:491-555

Description

The grounded REM extraction logic can promote and persist lines containing secrets/credentials into durable memory surfaces (e.g., REM backfill diary entries), due to the interaction between the new monitoring-suppression filter and the durable-signal bypass.

Key points:

  • REM_MONITORING_SIGNAL_RE matches security-sensitive terms like passkey, credential, and password in bws.
  • chooseFactSnippets / chooseCandidateSnippets suppress monitoring-heavy snippets unless isDurableSignalSnippet() returns true.
  • isDurableSignalSnippet() returns true for preference/persistence/person signals (e.g., prefers, remember, partner/girlfriend patterns) and does not check for secrets/credentials.
  • As a result, a single line mixing a durable signal with a secret (e.g., "prefers using password in bws: ") will pass the monitoring filter and can be included in:
    • renderedMarkdown output, and
    • memory rem-backfill, which converts renderedMarkdown into diary lines and writes them to DREAMS.md (durable storage).
  • There is no redaction/sanitization of secrets before rendering or writing (groundedMarkdownToDiaryLines is a simple transform).

Vulnerable code paths:

  • Monitoring bypass filter:
.filter(
  (entry) =>
    !REM_MONITORING_SIGNAL_RE.test(`${section.title} ${entry.snippet.text}`) ||
    isDurableSignalSnippet(entry.snippet.text, section.title),
)
  • Durable-signal check does not exclude secrets:
return (
  REM_MEMORY_SIGNAL_RE.test(text) ||
  REM_PERSISTENCE_SIGNAL_RE.test(text) ||
  REM_EXPLICIT_PREFERENCE_SIGNAL_RE.test(text) ||
  REM_STABLE_PERSON_SIGNAL_RE.test(`${title} ${text}`) ||
  REM_PERSON_PATTERN_SIGNAL_RE.test(text)
);
  • Persistence sink (writes grounded output into backfill diary entries): groundedMarkdownToDiaryLines(file.renderedMarkdown) followed by writeBackfillDiaryEntries(...).

Impact:

  • Secrets/credentials embedded in operational logs or preference notes may be unintentionally surfaced in CLI output and persisted into long-term memory artifacts (e.g., DREAMS.md), increasing the blast radius of accidental secret inclusion in markdown notes.

Recommendation

Add explicit secret/credential detection and ensure it is never eligible for promotion or persistence, regardless of "durable" signals.

Recommended changes:

  1. Introduce a REM_SECRET_SIGNAL_RE (or integrate with existing secret-scanning utilities) that detects common secret patterns (password assignments, API keys, bearer tokens, private keys, etc.).
  2. Apply it as a hard filter before scoring/promotion, and/or redact matched substrings before rendering/writing.
  3. Ensure both sinks are protected:
    • renderedMarkdown generation in previewGroundedRemForFile
    • backfill persistence path in runMemoryRemBackfill

Example (hard block):

const REM_SECRET_SIGNAL_RE = /\b(password|passkey|credential|api[_-]?key|bearer\s+[a-z0-9._-]+|secret)\b/i;

function containsSecret(text: string, title: string): boolean {
  return REM_SECRET_SIGNAL_RE.test(`${title} ${text}`);
}

// In chooseFactSnippets / chooseCandidateSnippets
.filter((entry) => !containsSecret(entry.snippet.text, section.title))

Example (redaction, if you must keep context):

function redactSecrets(value: string): string {
  return value
    .replace(/(bearer\s+)[a-z0-9._-]+/gi, "$1[REDACTED]")
    .replace(/(password\s*[:=]\s*)\S+/gi, "$1[REDACTED]");
}

Also add tests that include lines combining durable signals (e.g., "prefers") with credentials/tokens and assert they are excluded or redacted.
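A runnable sketch of that test shape, with the proposed regex inlined (REM_SECRET_SIGNAL_RE and filterSecretLines are suggested names, not existing exports of rem-evidence.ts; the sample lines are illustrative):

```typescript
// Proposed secret detector from the recommendation above (assumed name).
const REM_SECRET_SIGNAL_RE =
  /\b(password|passkey|credential|api[_-]?key|bearer\s+[a-z0-9._-]+|secret)\b/i;

// Hard filter applied before any durable-signal bypass: a line mixing a
// durable signal ("prefers") with a credential must never survive.
function filterSecretLines(snippets: string[]): string[] {
  return snippets.filter((text) => !REM_SECRET_SIGNAL_RE.test(text));
}

const kept = filterSecretLines([
  "prefers using password in bws for quick logins", // durable + secret: dropped
  "Bex — girlfriend, met through the climbing gym", // durable, no secret: kept
]);
```

The key property under test is that the durable signal does not rescue the secret-bearing line, which is exactly the bypass described above.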

2. 🟡 Algorithmic complexity DoS in atomizeClaimText clause splitting
  • Severity: Medium
  • CWE: CWE-400
  • Location: extensions/memory-core/src/rem-evidence.ts:592-605

Description

The claim atomization logic can exhibit quadratic-time behavior on long, delimiter-heavy input lines.

  • splitTopLevelClauses() repeatedly calls findTopLevelDelimiter() on the remaining suffix (rest).
  • findTopLevelDelimiter() scans the input string linearly.
  • For input containing many top-level delimiters (e.g., thousands of ; characters), the loop becomes O(n^2) (full scan of progressively shorter suffixes), and also performs repeated slice() allocations.
  • atomizeClaimText() calls splitTopLevelClauses() on untrusted markdown content (files collected from inputPaths and read fully with fs.readFile()), with no file size / line length / delimiter count limits, enabling a local or synced-note attacker to cause significant CPU and memory consumption when the CLI/agent scans notes.

Vulnerable code:

function splitTopLevelClauses(text: string, delimiter: string): string[] {
  const parts: string[] = [];
  let rest = text;
  while (rest.length > 0) {
    const splitAt = findTopLevelDelimiter(rest, delimiter);
    if (splitAt < 0) {
      parts.push(rest);
      break;
    }
    parts.push(rest.slice(0, splitAt));
    rest = rest.slice(splitAt + 1);
  }
  return parts.map((part) => normalizeWhitespace(part)).filter(Boolean);
}

Recommendation

Avoid repeated rescans of the remaining string.

Option A (single pass): scan once and record delimiter positions at top level, then slice once.

function splitTopLevelClauses(text: string, delimiter: string): string[] {
  const parts: string[] = [];
  let roundDepth = 0;
  let squareDepth = 0;
  let last = 0;

  for (let i = 0; i < text.length; i += 1) {
    const c = text[i];
    if (c === "(") roundDepth += 1;
    else if (c === ")") roundDepth = Math.max(0, roundDepth - 1);
    else if (c === "[") squareDepth += 1;
    else if (c === "]") squareDepth = Math.max(0, squareDepth - 1);
    else if (c === delimiter && roundDepth === 0 && squareDepth === 0) {
      parts.push(text.slice(last, i));
      last = i + 1;
      if (parts.length >= 3) break; // if you only ever use the first few atoms
    }
  }

  // push the trailing clause unless the cap above was hit
  if (parts.length < 3) {
    parts.push(text.slice(last));
  }

  return parts.map((p) => normalizeWhitespace(p)).filter(Boolean);
}

Option B (limits): enforce maximum file size / maximum line length / maximum delimiter splits processed (e.g., stop after N splits or after M characters) before calling atomization.

This prevents a crafted markdown file from causing excessive CPU/memory usage during scanning.
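A minimal sketch of the Option B guard, assuming placeholder limits (nothing here exists in rem-evidence.ts; the defaults are illustrative):

```typescript
// Illustrative Option B guard: refuse to atomize pathological lines before
// any clause splitting runs. The limits are placeholder defaults, not
// values taken from this PR.
const MAX_LINE_LENGTH = 2_000;
const MAX_DELIMITERS = 32;

function isSafeToAtomize(text: string, delimiter: string): boolean {
  if (text.length > MAX_LINE_LENGTH) return false;
  let count = 0;
  for (const ch of text) {
    if (ch === delimiter) {
      count += 1;
      if (count > MAX_DELIMITERS) return false; // delimiter-heavy: bail out
    }
  }
  return true;
}
```

Calling a guard like this before atomizeClaimText keeps the worst case bounded by the configured caps rather than by attacker-controlled input length.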

3. 🟡 Unbounded markdown traversal and heavy regex scoring can cause CPU/memory DoS
  • Severity: Medium
  • CWE: CWE-400
  • Location: extensions/memory-core/src/rem-evidence.ts:1018-1055

Description

The previewGroundedRemMarkdown pipeline performs unbounded recursive directory traversal and then loads and analyzes the entire contents of every discovered .md file, applying many regex tests and extra per-snippet work (atomizeClaimText + repeated scoring).

This creates a denial-of-service risk when inputPaths can point at very large directory trees or extremely large markdown files:

  • collectMarkdownFiles() recursively walks all directories under each inputPath with no depth, count, or size limits.
  • previewGroundedRemMarkdown() reads each file fully into memory via fs.readFile(..., "utf-8").
  • previewGroundedRemForFile() then repeatedly runs multiple regexes (including the new long-alternation REM_MONITORING_SIGNAL_RE) across all snippets/sections and expands work by atomizing claims.

Even without catastrophic backtracking, this is enough to cause CLI/UI lockups or high memory usage when processing untrusted or accidentally huge workspaces.

Recommendation

Add explicit resource limits and early exits so large/untrusted inputs cannot exhaust CPU/RAM.

Suggested mitigations (choose appropriate defaults for your environment):

  1. Limit traversal
    • Maximum directory depth
    • Maximum number of files
    • Skip common large/irrelevant directories (e.g. node_modules, .git)
  2. Limit file size before reading
  3. Cap parsing work
    • Maximum sections/snippets processed per file
    • Short-circuit regex scoring once thresholds are met

Example (size + file-count limits):

const MAX_FILES = 500;
const MAX_BYTES = 1_000_000; // 1MB

if (found.size >= MAX_FILES) return;
...
if (stat.isFile() && resolved.toLowerCase().endsWith('.md')) {
  if (stat.size <= MAX_BYTES) found.add(resolved);
}

If this code can be triggered by remote/untrusted content (e.g., in an extension processing arbitrary workspaces), treat this as a security boundary and enforce conservative limits.

4. 🟡 PII retention amplification by atomizing person/relationship lines into multiple durable memory candidates
  • Severity: Medium
  • CWE: CWE-200
  • Location: extensions/memory-core/src/rem-evidence.ts:720-741

Description

The grounded REM extraction now splits a single line into multiple atomic claims (e.g., "Bex — girlfriend, ..." becomes "Bex — girlfriend" plus another claim) and emits each claim as its own grounded candidate.

This creates a privacy/PII risk because:

  • Inputs are free-form daily notes (YYYY-MM-DD.md) which can contain personal data (names, relationship status, venues, dates).
  • The new atomizeClaimText()/splitSubjectLeadClaim() logic increases the number of extracted memory items derived from a single source line.
  • Those extracted items are then rendered into grounded markdown and can be persisted into the main workspace via rem-backfill (written into DREAMS/diary entries) and surfaced in CLI output.
  • There is no consent boundary, minimization, or redaction for person-identifying information before persistence; instead, the change explicitly prioritizes relationship facts (see tests asserting "Bunji — partner", "Bex — girlfriend" appear).

Vulnerable flow (privacy amplification):

  • Input: daily note bullet lines (may contain PII)
  • Transformation: atomizeClaimText(snippet.text) produces up to 3 claims
  • Sink/surface: claims become grounded candidates and are rendered/persisted via rem-backfill into diary/DREAMS content

Vulnerable code:

return chooseCandidateSnippets(section, snippets).flatMap((snippet) =>
  atomizeClaimText(snippet.text)
    .map((claim) => {
      const score = scoreCandidateSnippet(claim, section.title);
      const text = buildCandidateSnippetText(section.title, claim);
      return {
        text,
        refs: [makeRef(params.relPath, snippet.line)],
        lean: classifyCandidateLeanFromText(claim, section.title),
        score,
      };
    })
    .filter((candidate) => candidate.text.length >= 12 && candidate.score >= 1.8),
);

While this may be intended behavior, it materially increases the likelihood that sensitive relationship facts become durable, independently retrievable memory entries.

Recommendation

Add privacy minimization controls before persisting/surfacing atomized claims, especially for person-identifying content.

Options (can be combined):

  1. Gate atomization for person/relationship lines unless the user opted in:

const isPersonRelationship = REM_STABLE_PERSON_SIGNAL_RE.test(`${section.title} ${snippet.text}`);
const claims = isPersonRelationship && !opts.allowPersonAtomization
  ? [snippet.text]
  : atomizeClaimText(snippet.text);

  2. Redact or downscope relationship labels (partner/girlfriend/etc.) when writing backfill output unless explicitly requested.

  3. Add a dedicated PII classifier (names/emails/phones/addresses) and suppress or require explicit allowlisting before writing to durable stores (e.g., DREAMS diary entries).

  4. Ensure downstream sinks (CLI JSON, logs, exports) respect a --redact/--no-people-memory flag.
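A self-contained sketch of the opt-in gate, with a stand-in person regex and a hypothetical flag name (the real REM_STABLE_PERSON_SIGNAL_RE is broader than this):

```typescript
// Stand-in for REM_STABLE_PERSON_SIGNAL_RE; the real pattern is broader.
const PERSON_SIGNAL_RE = /\b(partner|girlfriend|boyfriend|wife|husband)\b/i;

// Hypothetical gate: person/relationship lines stay whole unless the user
// opted in to atomization (the flag name is illustrative).
function claimsFor(
  text: string,
  atomize: (t: string) => string[],
  allowPersonAtomization: boolean,
): string[] {
  if (PERSON_SIGNAL_RE.test(text) && !allowPersonAtomization) {
    return [text]; // no PII amplification without opt-in
  }
  return atomize(text);
}

// Naive comma splitter standing in for atomizeClaimText.
const naiveSplit = (t: string) => t.split(",").map((s) => s.trim());
const gated = claimsFor("Bex — girlfriend, met in Leeds", naiveSplit, false);
const opened = claimsFor("Bex — girlfriend, met in Leeds", naiveSplit, true);
```

With the flag off, the relationship line stays a single claim; non-person lines still atomize normally, so the mitigation only narrows the PII path.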


Analyzed PR: #63297 at commit e188b7e

Last updated on: 2026-04-08T18:27:48Z

@openclaw-barnacle Bot added the extensions: memory-core (Extension: memory-core), size: M, and maintainer (Maintainer-authored PR) labels Apr 8, 2026
@mbelinky mbelinky merged commit 078e7a6 into main Apr 8, 2026
27 of 35 checks passed
@mbelinky mbelinky deleted the mariano/memory-grounded-extractor branch April 8, 2026 18:28
@mbelinky
Contributor Author

mbelinky commented Apr 8, 2026

Merged via squash.

Thanks @mbelinky!


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: e188b7e26d


Comment on lines +608 to +609
const match = /^(?<subject>.+?(?:—|–|-))\s*(?<rest>.+)$/u.exec(text);
if (!match?.groups) {

P2: Limit subject splitting to actual dash separators

Update splitSubjectLeadClaim so it only treats a dash as a subject separator when it appears as a real delimiter (for example, a spaced `-`), not any hyphen character. The current pattern matches hyphenated words, so inputs like `Use long-term plan, avoid reactive tasks` are split into malformed atomic claims (e.g., `Use long- term plan`), which can then be surfaced in ## Candidates/## Possible Lasting Updates as noisy or corrupted memory candidates.


@greptile-apps
Contributor

greptile-apps Bot commented Apr 8, 2026

Greptile Summary

This PR hardens the grounded REM backfill lane by tightening What Happened fact selection to prefer persistence-shaped signals over operational noise, suppressing monitoring-heavy days from surfacing alert sludge, splitting multi-fact person lines into atomic durable candidates via atomizeClaimText/splitSubjectLeadClaim, and making reflections relationship-aware when multiple person threads appear in the same day. Three new targeted unit tests cover the main scenarios (persistence preference, monitoring suppression, and person-line atomisation).

Confidence Score: 5/5

Safe to merge; all findings are P2 style suggestions that do not affect correctness.

The implementation is well-layered: monitoring suppression, durable-candidate scoring, and atomisation all have explicit score-threshold guards (1.4 / 1.8) that make borderline splits benign in practice. New tests directly exercise the three targeted scenarios. The only concern is the ASCII-hyphen in splitSubjectLeadClaim's regex, which can produce odd split strings, but they're reliably filtered by the downstream score check.

No files require special attention beyond the minor hyphen-regex note in extensions/memory-core/src/rem-evidence.ts.

Vulnerabilities

No security concerns identified. The PR only modifies in-process text scoring/filtering heuristics and adds unit tests; there are no new network calls, no auth changes, and no new surfaces for injection.

Last reviewed commit: "changelog: document grounded diary extra..."

Comment on lines +607 to +627
function splitSubjectLeadClaim(text: string): string[] {
const match = /^(?<subject>.+?(?:—|–|-))\s*(?<rest>.+)$/u.exec(text);
if (!match?.groups) {
return [text];
}
const subject = normalizeWhitespace(match.groups.subject);
const rest = normalizeWhitespace(match.groups.rest);
if (!subject || !rest) {
return [text];
}
const commaIndex = findTopLevelDelimiter(rest, ",");
if (commaIndex < 0) {
return [text];
}
const first = normalizeWhitespace(rest.slice(0, commaIndex));
const remainder = normalizeWhitespace(rest.slice(commaIndex + 1));
if (first.length < 3 || remainder.length < 6) {
return [text];
}
return [`${subject} ${first}`, `${subject} ${remainder}`];
}

P2: Hyphen in dash character class may split compound words

(?:—|–|-) includes a plain ASCII hyphen -, so inputs like "non-trivial claim, worth noting" or "self-aware, sometimes unreliable" will match at the first hyphen: subject = "non-", rest = "trivial claim, worth noting". The first.length < 3 guard ("trivial claim" is well over 3 chars) and remainder.length < 6 don't block this, so it would return ["non- trivial claim", "non- worth noting"] — syntactically odd but harmless only because the score filter tends to reject them.

If you want to restrict splitting to semantic dashes only, consider excluding the plain hyphen:

Suggested change
function splitSubjectLeadClaim(text: string): string[] {
const match = /^(?<subject>.+?(?:—|–|-))\s*(?<rest>.+)$/u.exec(text);
if (!match?.groups) {
return [text];
}
const subject = normalizeWhitespace(match.groups.subject);
const rest = normalizeWhitespace(match.groups.rest);
if (!subject || !rest) {
return [text];
}
const commaIndex = findTopLevelDelimiter(rest, ",");
if (commaIndex < 0) {
return [text];
}
const first = normalizeWhitespace(rest.slice(0, commaIndex));
const remainder = normalizeWhitespace(rest.slice(commaIndex + 1));
if (first.length < 3 || remainder.length < 6) {
return [text];
}
return [`${subject} ${first}`, `${subject} ${remainder}`];
}
function splitSubjectLeadClaim(text: string): string[] {
const match = /^(?<subject>.+?(?:—|–))\s*(?<rest>.+)$/u.exec(text);

Alternatively, require at least one whitespace around the hyphen when it's an ASCII dash: (?:—|–|\s+-\s+). Either way, the current behaviour is low-risk because of the downstream score threshold, but it's easy to tighten.
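A quick way to sanity-check the spaced-hyphen alternative (this regex is the reviewer's suggestion, not the code as merged):

```typescript
// Reviewer-suggested variant: only an em dash, an en dash, or a
// whitespace-delimited ASCII hyphen counts as a subject separator.
const SUBJECT_DASH_RE = /^(?<subject>.+?(?:—|–|\s+-\s+))\s*(?<rest>.+)$/u;

function hasSubjectSplit(text: string): boolean {
  return SUBJECT_DASH_RE.test(text);
}
```

Under this variant, compound words like "non-trivial" no longer produce a subject split, while "Bex — girlfriend" and spaced-hyphen lines still do.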


eleqtrizit pushed a commit that referenced this pull request Apr 8, 2026
Merged via squash.

Prepared head SHA: e188b7e
Co-authored-by: mbelinky <132747814+mbelinky@users.noreply.github.com>
Reviewed-by: @mbelinky
greidron added a commit to greidron/openclaw that referenced this pull request Apr 10, 2026
* release: mirror bundled channel deps at root (openclaw#63065)

Merged via squash.

Prepared head SHA: ac26799
Co-authored-by: scoootscooob <167050519+scoootscooob@users.noreply.github.com>
Reviewed-by: @scoootscooob

* fix(test): keep warn log capture under openclaw temp dir

* revert: undo background alive review findings fix

* feat: add qa character vibes eval

* test: stabilize plugin boundary invariants

* test: isolate agent gateway cli command mocks

* test: skip duplicate package boundary wrapper in ci

* test: fix postpublish verifier sidecar handling

* test: keep status tests off live usage probes

* auto-reply: type status auth overrides

* plugins: read contract inventory from manifests

* test: inline cli metadata channel fixture

* ci: skip duplicate full extension shard

* test: isolate discord directory live token env

* test: keep followup runner memory mock complete

* ci: split parallel full suite into leaf shards

* test: guard loader fixtures against broad sdk imports

* test: keep bundled channel entry smokes descriptor-only

* ci: reduce full suite test parallelism

* test: avoid bundled test api smokes in matrix and telegram

* test: keep discord and irc entry smokes descriptor-only

* test: keep web provider artifact coverage manifest-only

* test: keep provider policy artifact coverage narrow

* test: keep web provider artifact test in boundary

* test: keep status message tests off auth auto-detection

* status: avoid plugin lookup for direct channel model overrides

* channels: fast-path direct model override matches

* test: restore manifest-only web provider coverage

* fix: allow blank TLS manual port default (openclaw#63134) (thanks @Tyler-RNG)

* make port optional for TLS manual connections

* fix: restrict manual blank-port fallback to tls

* fix: allow blank TLS manual port default (openclaw#63134) (thanks @Tyler-RNG)

---------

Co-authored-by: Ayaan Zaidi <hi@obviy.us>

* test: fix full suite CI test isolation

* fix: align LLM idle timeout policy

* test: exercise models json file mode without provider discovery

* test: keep shared dm policy contract off channel facades

* test: keep web provider artifact test in boundary

* test: keep kilocode provider tests on plugin-owned helpers

* ci: restore sequential full suite tests

* test: keep public artifact coverage on cheap boundaries

* test: keep openclaw tools registration tests on a fast shell

* test: keep bundled metadata sidecar scan inventory-only

* docs(inferrs): fix Gemma model id from gg-hf-gg to google (openclaw#62586)

* fix: harden bundled plugin dependency release checks

* ci: isolate full suite leaf shards

* test: keep openclaw tools registration policy pure

* fix: support Codex CLI QA auth

* feat: add QA character eval reports

* docs: document QA character eval workflow

* refactor: dedupe media generation tool helpers

* refactor: dedupe internal helper glue

* refactor: dedupe shared helper branches

* refactor: dedupe browser navigation guard tests

* refactor: dedupe config and subagent tests

* refactor: dedupe test helpers and script warning filter

* refactor: dedupe plugin test harnesses

* refactor: dedupe media runtime test mocks

* refactor: dedupe plugin metadata test helpers

* refactor: dedupe firecrawl and directive helpers

* refactor: dedupe exec defaults tests

* refactor: dedupe approval runtime tests

* refactor: dedupe matrix exec approval tests

* refactor: dedupe telegram exec approval tests

* refactor: dedupe doctor codex oauth tests

* refactor: dedupe agent command test fixtures

* refactor: dedupe embedding provider test fixtures

* refactor: share html entity tool call decoding

* fix: keep minimax provider mocks package-local

* test: keep pdf and update-plan registration tests pure

* test: keep model reasoning override coverage on merge helpers

* fix: default OpenAI reasoning effort to high

* test: keep kimi implicit provider tests on provider catalog

* fix(build): prune stale bundled plugin node_modules

* fix(build): address bundled plugin prune review

* fix(build): honor postinstall disable flag

* test: keep chutes implicit provider tests on provider catalog

* fix(plugin-sdk): export channel plugin base

* docs: reorder changelog entries

* test: keep bundled web-search owner checks on public artifacts

* fix(build): keep tsdown prune best-effort

* test: trust gateway exec fixture node path

* fix: keep runtime task test harness behind task seams

* test: explain gateway exec fixture trust

* Reply: surface OAuth reauth failures (openclaw#63217)

Merged via squash.

Prepared head SHA: 68b7ffd
Co-authored-by: mbelinky <132747814+mbelinky@users.noreply.github.com>
Reviewed-by: @mbelinky

* test: make character eval scenario natural

* feat: add character eval model options

* test: keep pi fs workspace tests on fs tool factories

* test: keep media runtime tests on same-directory provider mocks

* fix(android): auto-resume pairing approval

* fix(android): prefer bootstrap auth on qr pairing

* fix(android): reset auth on new setup codes

* fix(android): tighten pairing retry behavior

* fix(android): prefer stored device auth after pairing

* fix: restore android qr pairing flow (openclaw#63199)

* fix(auto-reply): strip leading NO_REPLY tokens to prevent silent-reply leak (openclaw#63068)

* fix(auto-reply): strip leading NO_REPLY tokens to prevent silent-reply leak

* fix(auto-reply): preserve substantive NO_REPLY leading text

* fix(agents): preserve ACP silent-prefix cumulative deltas

* fix(auto-reply): harden silent-token streaming paths

* fix(auto-reply): normalize glued silent tokens consistently

---------

Co-authored-by: termtek <termtek@ubuntu.tail2b72cd.ts.net>

* fix(gateway): clear auto-fallback model override on session reset (openclaw#63155)

* fix(gateway): clear auto-fallback model override on session reset

When `persistFallbackCandidateSelection()` writes a fallback provider
override with `authProfileOverrideSource: "auto"`, the override was
incorrectly preserved across `/reset` and `/new` commands. This caused
sessions to keep using the fallback provider even after the user changed
the agent config primary provider, because the session store override
takes precedence over the config default.

Now the override fields (`providerOverride`, `modelOverride`,
`authProfileOverride`, `authProfileOverrideSource`,
`authProfileOverrideCompactionCount`) are only carried forward when
`authProfileOverrideSource === "user"` (i.e. explicit `/model` command).
System-driven overrides are dropped on reset so the session picks up the
current config default.

Introduced in cb0a752 ("fix: preserve reset session behavior config")
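
The carry-forward rule described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the `SessionOverrides` shape mirrors the field names quoted in the commit message, but the function name and surrounding store types are assumptions.

```typescript
// Hypothetical sketch of the reset carry-forward rule: keep session
// overrides only when the user set them explicitly via /model.
type OverrideSource = "user" | "auto";

interface SessionOverrides {
  providerOverride?: string;
  modelOverride?: string;
  authProfileOverride?: string;
  authProfileOverrideSource?: OverrideSource;
  authProfileOverrideCompactionCount?: number;
}

// On /reset or /new: system-driven ("auto") fallback overrides are dropped
// so the session picks up the current config default; explicit user
// overrides are carried forward unchanged.
function carryForwardOnReset(prev: SessionOverrides): SessionOverrides {
  if (prev.authProfileOverrideSource === "user") {
    return {
      providerOverride: prev.providerOverride,
      modelOverride: prev.modelOverride,
      authProfileOverride: prev.authProfileOverride,
      authProfileOverrideSource: prev.authProfileOverrideSource,
      authProfileOverrideCompactionCount: prev.authProfileOverrideCompactionCount,
    };
  }
  return {}; // auto-fallback override dropped on reset
}
```

Under this rule, a fallback provider written by `persistFallbackCandidateSelection()` (source `"auto"`) no longer survives `/reset`, while a `/model` selection (source `"user"`) still does.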

* fix(gateway): preserve explicit reset model selection

* fix(gateway): track reset model override source

* fix(gateway): preserve legacy reset model overrides

* docs(changelog): add session reset merge note

---------

Co-authored-by: termtek <termtek@ubuntu.tail2b72cd.ts.net>

* test: stabilize ci test isolation

* test: isolate volcengine byteplus auth resolver imports

* fix: patch hono security advisories

* fix: pass system prompt to codex cli

* fix(plugins): prevent untrusted workspace plugins from hijacking bundled provider auth choices [AI] (openclaw#62368)

* fix: address issue

* fix: address review feedback

* docs(changelog): add onboarding auth-choice guard entry

* fix: address PR review feedback

* fix: address PR review feedback

* fix: address PR review feedback

* fix: address PR review feedback

* fix: address PR review feedback

* fix: address PR review feedback

* fix: address PR review feedback

* fix: address PR review feedback

---------

Co-authored-by: Devin Robison <drobison@nvidia.com>

* test: isolate provider runtime test mocks

* feat(plugins): support provider auth aliases

* feat(memory): add grounded REM backfill lane (openclaw#63273)

Merged via squash.

Prepared head SHA: 4450f25
Co-authored-by: mbelinky <132747814+mbelinky@users.noreply.github.com>
Co-authored-by: mbelinky <132747814+mbelinky@users.noreply.github.com>
Reviewed-by: @mbelinky

* feat(memory): harden grounded REM extraction (openclaw#63297)

Merged via squash.

Prepared head SHA: e188b7e
Co-authored-by: mbelinky <132747814+mbelinky@users.noreply.github.com>
Co-authored-by: mbelinky <132747814+mbelinky@users.noreply.github.com>
Reviewed-by: @mbelinky

* feat(ui): add dreaming diary controls and navigation (openclaw#63298)

Merged via squash.

Prepared head SHA: 0a2ae66
Co-authored-by: mbelinky <132747814+mbelinky@users.noreply.github.com>
Co-authored-by: mbelinky <132747814+mbelinky@users.noreply.github.com>
Reviewed-by: @mbelinky

* chore(ui): refresh zh-TW control ui locale

* chore(ui): refresh zh-CN control ui locale

* chore(ui): refresh pt-BR control ui locale

* chore(ui): refresh de control ui locale

* chore(ui): refresh es control ui locale

* chore(ui): refresh ko control ui locale

* chore(ui): refresh ja-JP control ui locale

* chore(ui): refresh fr control ui locale

* docs(matrix): tighten setup and config guidance

* chore(ui): refresh tr control ui locale

* chore(ui): refresh uk control ui locale

* chore(ui): refresh pl control ui locale

* chore(ui): refresh id control ui locale

* test: stabilize full-suite execution

* fix(matrix): contain sync outage failures (openclaw#62779)

Merged via squash.

Prepared head SHA: 901bb76
Co-authored-by: gumadeiras <5599352+gumadeiras@users.noreply.github.com>
Co-authored-by: gumadeiras <5599352+gumadeiras@users.noreply.github.com>
Reviewed-by: @gumadeiras

* Align remote node exec event system messages with untrusted handling (openclaw#62659)

* fix(nodes): downgrade remote exec system events

* docs(changelog): add remote node exec event entry

---------

Co-authored-by: Devin Robison <drobison@nvidia.com>

* test: reuse image generate tool imports

* test: reuse followup runner imports

* docs(config): tighten wording in reference

* test: harden provider mock isolation

* fix(memory): accept embedded dreaming heartbeat tokens

* test: harden Parallels macOS smoke fallback

* build: narrow plugin SDK declaration build

* fix(dotenv): block workspace runtime env vars (openclaw#62660)

* fix(dotenv): block workspace runtime env vars

Co-authored-by: zsx <git@zsxsoft.com>

* docs(changelog): add workspace dotenv runtime-control entry

* fix(dotenv): block workspace gateway port override

---------

Co-authored-by: zsx <git@zsxsoft.com>
Co-authored-by: Devin Robison <drobison@nvidia.com>

* build: stage nostr runtime dependencies

* fix: load QA live provider overrides

* feat: parallelize character eval runs

* auth: avoid external cli sync on profile upsert

* test(doctor): mock memory-core runtime seam

* auth: persist explicit profile upserts directly

* Matrix: report startup failures as errors

* fix(browser): harden browser control override loading (openclaw#62663)

* fix(browser): harden browser control overrides

* fix(lint): prepare boundary artifacts for extension oxlint

* docs(changelog): add browser override hardening entry

* fix(lint): avoid duplicate boundary prep

---------

Co-authored-by: Devin Robison <drobison@nvidia.com>
Co-authored-by: Devin Robison <drobison00@users.noreply.github.com>

* test: reuse exec directive reply imports

* test: reuse verbose directive reply imports

* fix(browser): re-check interaction-driven navigations (openclaw#63226)

* fix(browser): guard interaction-driven navigations

* fix(browser): avoid rechecking unchanged interaction urls

* fix(browser): guard delayed interaction navigations

* fix(browser): guard interaction-driven navigations for full action duration

* fix(browser): avoid waiting on interaction grace timer

* fix(browser): ignore same-document hash-only URL changes in navigation guard

* fix(browser): dedupe interaction nav guards

* fix(browser): guard same-URL reloads in interaction navigation listeners

* docs(changelog): add interaction navigation guard entry

* fix(browser): drop duplicate ssrfPolicy props

* fix(browser): tighten interaction navigation guards

---------

Co-authored-by: Devin Robison <drobison@nvidia.com>

* test: move directive state coverage to pure tests

* fix: enable thinking support for the ollama api (openclaw#62712)

Merged via squash.

Prepared head SHA: c0b9950
Co-authored-by: hoyyeva <63033505+hoyyeva@users.noreply.github.com>
Co-authored-by: BruceMacD <5853428+BruceMacD@users.noreply.github.com>
Reviewed-by: @BruceMacD

* Slack: treat ACP block text as visible output (openclaw#62858)

Merged via squash.

Prepared head SHA: 14f202e
Co-authored-by: gumadeiras <5599352+gumadeiras@users.noreply.github.com>
Co-authored-by: gumadeiras <5599352+gumadeiras@users.noreply.github.com>
Reviewed-by: @gumadeiras

* fix: fail fast on qa live auth errors

* fix: fail fast across qa scenario wait paths

* test: cover qa scenario wait failure replies

* fix: sanitize qa missing-key replies

* test: cover sanitized qa missing-key replies

* fix: align qa wait cursor semantics

* test: cover mixed-traffic qa wait cursors

* fix: classify curated qa missing-key replies

* test: cover curated qa missing-key reply classification

* fix: harden qa missing-key provider messages

* test: cover unsafe qa missing-key providers

* docs(changelog): add qa auth fail-fast entry (openclaw#63333) (thanks @shakkernerd)

* fix(matrix/doctor): migrate legacy channels.matrix.dm.policy 'trusted' (fixes openclaw#62931) (openclaw#62942)

Merged via squash.

Prepared head SHA: d9f553b
Co-authored-by: lukeboyett <46942646+lukeboyett@users.noreply.github.com>
Co-authored-by: gumadeiras <5599352+gumadeiras@users.noreply.github.com>
Reviewed-by: @gumadeiras

* Memory/dreaming: feed grounded backfill into short-term promotion (openclaw#63370)

Merged via squash.

Prepared head SHA: 5dfe246
Co-authored-by: mbelinky <132747814+mbelinky@users.noreply.github.com>
Co-authored-by: mbelinky <132747814+mbelinky@users.noreply.github.com>
Reviewed-by: @mbelinky

* docs: update unreleased changelog

* fix(gateway): classify dream diary actions

* fix(memory): align dreaming status payloads

* Memory/dreaming: harden grounded backfill follow-ups

* test: reuse inline directive reply imports

* Docs/memory: explain grounded backfill flows

* fix(deps): patch basic-ftp advisory

* test: move inline directive collisions to pure tests

* Slack: dedupe partial streaming replies (openclaw#62859)

Merged via squash.

Prepared head SHA: cbecb50
Co-authored-by: gumadeiras <5599352+gumadeiras@users.noreply.github.com>
Co-authored-by: gumadeiras <5599352+gumadeiras@users.noreply.github.com>
Reviewed-by: @gumadeiras

* test: replace exec directive e2e with pure coverage

* fix(plugins): keep test helpers out of contract barrels (openclaw#63311)

Merged via squash.

Prepared head SHA: 769e90c
Co-authored-by: altaywtf <9790196+altaywtf@users.noreply.github.com>
Co-authored-by: altaywtf <9790196+altaywtf@users.noreply.github.com>
Reviewed-by: @altaywtf

* test: move cron heartbeat delivery coverage below full turns

* fix: inter-session messages must not overwrite established external lastRoute (openclaw#58013)

Merged via squash.

Prepared head SHA: 820ea20
Co-authored-by: duqaXxX <12242811+duqaXxX@users.noreply.github.com>
Co-authored-by: jalehman <550978+jalehman@users.noreply.github.com>
Reviewed-by: @jalehman

* fix(gateway): suppress announce/reply skip chat leakage (openclaw#51739)

Merged via squash.

Prepared head SHA: 2f53f3b
Co-authored-by: Pinghuachiu <9033138+Pinghuachiu@users.noreply.github.com>
Co-authored-by: jalehman <550978+jalehman@users.noreply.github.com>
Reviewed-by: @jalehman

* Slack: key turn-local dedupe by dispatch kind

Scope Slack turn-local delivery dedupe by reply dispatch kind so identical tool and final payloads on the same thread do not collapse into one send.

Expose the existing dispatcher kind on the public reply-runtime seam and cover the Slack tracker and preview-fallback paths with regression tests.
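
The dedupe-key change can be sketched like this. The class and key shape are illustrative assumptions; the commit only states that identical tool and final payloads on the same thread must no longer collapse into one send.

```typescript
// Hypothetical sketch of turn-local delivery dedupe keyed by dispatch kind.
type DispatchKind = "tool" | "final";

class TurnLocalDedupe {
  private seen = new Set<string>();

  // Returns true if this payload should be sent. Because the key includes
  // the dispatch kind, identical text dedupes only within the same kind:
  // a tool payload and a final payload with the same text both go out.
  shouldSend(threadTs: string, kind: DispatchKind, payload: string): boolean {
    const key = `${threadTs}\u0000${kind}\u0000${payload}`;
    if (this.seen.has(key)) return false;
    this.seen.add(key);
    return true;
  }
}
```

Without `kind` in the key, the second (final) send of identical text on the same thread would be suppressed, which is the bug this commit fixes.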

* Dreaming: surface grounded scene lane (openclaw#63395)

Merged via squash.

Prepared head SHA: 0c7f586
Co-authored-by: mbelinky <132747814+mbelinky@users.noreply.github.com>
Co-authored-by: mbelinky <132747814+mbelinky@users.noreply.github.com>
Reviewed-by: @mbelinky

* test: avoid runtime auth overlays in failure-state coverage

* fix(ci): align ollama thinking expectations

* chore(ui): refresh zh-CN control ui locale

* chore(ui): refresh pt-BR control ui locale

* chore(ui): refresh zh-TW control ui locale

* chore(ui): refresh de control ui locale

* test(docker): reduce e2e log noise

* chore(ui): refresh es control ui locale

* chore(ui): refresh fr control ui locale

* chore(ui): refresh ja-JP control ui locale

* chore(ui): refresh ko control ui locale

* chore(ui): refresh uk control ui locale

* chore(ui): refresh id control ui locale

* chore(ui): refresh pl control ui locale

* chore(ui): refresh tr control ui locale

* fix: restore main ci

* fix(ci): drop silent history before truncation

* docs: reorder unreleased changelog

* test(docker): quiet success-path e2e logs

* style: sort session import

* build: mirror bundled plugin runtime deps

* plugins: load lightweight provider discovery entries

* ci: narrow Windows node test lane

* fix: filter provider auth aliases by plugin trust

* fix: surface delayed browser navigation blocks

* style: format memory and gateway touchups

* Delete docs/plans directory

Unused artifact

* test: avoid remote ollama timeout in api-key preservation coverage

* test: keep auth-choice default-model coverage on lightweight provider

* test: keep undefined-token auth-choice coverage generic

* fix: stabilize character eval and Qwen model routing

* test: keep agent command tests off external auth overlays

* fix openrouter model picker refs (openclaw#63416)

* fix openrouter model picker refs

Signed-off-by: sallyom <somalley@redhat.com>

* test(ui): cover openrouter slash-id /model resolution

---------

Signed-off-by: sallyom <somalley@redhat.com>
Co-authored-by: Vignesh Natarajan <vignesh.natarajan92@gmail.com>

* ci: stabilize macOS and transcript policy tests

* test: keep cli-provider agent command tests off external auth overlays

* chore(lint): clear extension lint regressions and add openclaw#63416 changelog

* test: update modelstudio catalog contract sentinel

* test: update character eval public panel

* fix: repair Windows dev-channel updater

* test: move copilot models-json injection coverage to plan tests

* plugin-sdk: split command status surface

* plugin-sdk: keep command status compatibility path light

* plugin-sdk: drop investigative weixin repro harness

* tests: document config mock choice for eager warmup

* fix: update command-status SDK baseline (openclaw#63174) (thanks @hxy91819)

* test: cap broad live model sweeps

* fix: drop raw gateway chat control replies

* test: make shared-token reload deterministic

* test: isolate agentic suite smoke tests

* test: replace models-config matrix with narrow coverage

* test: isolate onboard skills status mock

* plugins: add lightweight anthropic vertex discovery

* test: isolate model auth module state

* test: isolate subagent registry resume imports

* plugins: keep google provider policy lightweight

* test: keep ollama unreachable discovery on localhost

* test: mock auth profile external overlay in oauth tests

* auth: avoid plugin setup scans during common auth resolution

* fix(logging): break console/logger type cycle

* fix(config): stop owner-display barrel cycles

* fix(commands): split auth choice apply types

* fix(infra): extract exec approvals allowlist types

* fix(commands): split doctor prompt option types

* chore: prepare 2026.4.9-beta.1 release

* chore: refresh config schema version for 2026.4.9-beta.1

* chore: refresh plugin SDK API baseline

* test: run local full suite project shards in parallel

* wizard: add explicit skip option to plugin setup (openclaw#63436)

* Wizard: allow skipping plugin setup

* Agents: reset nodes tool test modules

* tests: reset discord native-command seams in model picker (openclaw#63267)

* ci: tolerate noisy npm pack json output

* test: isolate slack thread-ts recovery

* fix(msteams): isolate channel thread sessions by replyToId (openclaw#58615) (openclaw#62713)

* fix(msteams): isolate thread sessions by replyToId (openclaw#58615)

* fix(msteams): align thread ID extraction + fix test types

* fix(msteams): route thread replies to correct thread via replyToId (openclaw#58030) (openclaw#62715)

* fix(msteams): pin reply target at inbound time to prevent DM/channel leak (openclaw#54520) (openclaw#62716)

* test: keep local full suite serial by default

* chore: prepare 2026.4.9 stable release

* Agents: guard legacy pi transport override

* Agents: restore upstream pi runner sources

---------

Signed-off-by: sallyom <somalley@redhat.com>
Co-authored-by: scoootscooob <zhentongfan@gmail.com>
Co-authored-by: scoootscooob <167050519+scoootscooob@users.noreply.github.com>
Co-authored-by: Peter Steinberger <steipete@gmail.com>
Co-authored-by: Nimrod Gutman <nimrod.gutman@gmail.com>
Co-authored-by: Tyler Warburton <Ethan.gold-Steinberg@protonmail.com>
Co-authored-by: Ayaan Zaidi <hi@obviy.us>
Co-authored-by: Eric Curtin <eric.curtin@docker.com>
Co-authored-by: Mariano <mbelinky@gmail.com>
Co-authored-by: mbelinky <132747814+mbelinky@users.noreply.github.com>
Co-authored-by: Frank Yang <frank.ekn@gmail.com>
Co-authored-by: termtek <termtek@ubuntu.tail2b72cd.ts.net>
Co-authored-by: Pavan Kumar Gondhi <pgondhi@nvidia.com>
Co-authored-by: Devin Robison <drobison@nvidia.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Gustavo Madeira Santana <gumadeiras@gmail.com>
Co-authored-by: gumadeiras <5599352+gumadeiras@users.noreply.github.com>
Co-authored-by: Agustin Rivera <31522568+eleqtrizit@users.noreply.github.com>
Co-authored-by: zsx <git@zsxsoft.com>
Co-authored-by: Devin Robison <drobison00@users.noreply.github.com>
Co-authored-by: Eva H <63033505+hoyyeva@users.noreply.github.com>
Co-authored-by: BruceMacD <5853428+BruceMacD@users.noreply.github.com>
Co-authored-by: Shakker <shakkerdroid@gmail.com>
Co-authored-by: lukeboyett <46942646+lukeboyett@users.noreply.github.com>
Co-authored-by: Altay <altay@uinaf.dev>
Co-authored-by: altaywtf <9790196+altaywtf@users.noreply.github.com>
Co-authored-by: Accunza <12242811+duqaXxX@users.noreply.github.com>
Co-authored-by: jalehman <550978+jalehman@users.noreply.github.com>
Co-authored-by: Pinghuachiu <9033138+Pinghuachiu@users.noreply.github.com>
Co-authored-by: Radek Sienkiewicz <mail@velvetshark.com>
Co-authored-by: Sally O'Malley <somalley@redhat.com>
Co-authored-by: Vignesh Natarajan <vignesh.natarajan92@gmail.com>
Co-authored-by: Mason Huang <masonxhuang@tencent.com>
Co-authored-by: Vincent Koc <vincentkoc@ieee.org>
Co-authored-by: pashpashpash <nik@vault77.ai>
Co-authored-by: sudie-codes <suvenkat95@gmail.com>
Labels

extensions: memory-core · maintainer · size: M
