
Commit f256eeb

fix(active-memory): use bundled recall tool
Fixes #73502. Active Memory now allows its hidden recall sub-agent to use both bundled memory tool contracts: memory_recall for memory-lancedb and memory_search/memory_get for memory-core. The prompt prefers memory_recall when available and falls back to the legacy tool pair when that is the active backend surface. Also updates Active Memory docs, QA mock fixtures, and debug parsing compatibility for the two recall paths.
1 parent dd643c8 commit f256eeb

6 files changed

Lines changed: 63 additions & 52 deletions
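The prefer-then-fall-back behavior the commit message describes can be sketched as a small standalone helper. This is illustrative only: `planRecall` and `RecallPlan` are hypothetical names, not part of the diff; the real plugin expresses the same preference through its prompt and `toolsAllow` list.

```typescript
// Hypothetical sketch of the recall-tool selection described above: prefer
// memory_recall (the memory-lancedb contract) and fall back to the legacy
// memory_search/memory_get pair (the memory-core contract).
type RecallPlan =
  | { kind: "recall"; tools: ["memory_recall"] }
  | { kind: "legacy"; tools: ["memory_search", "memory_get"] };

function planRecall(availableTools: string[]): RecallPlan | undefined {
  if (availableTools.includes("memory_recall")) {
    // Bundled memory-lancedb surface: a single combined recall tool.
    return { kind: "recall", tools: ["memory_recall"] };
  }
  if (availableTools.includes("memory_search") && availableTools.includes("memory_get")) {
    // Default memory-core surface: search for candidates, then fetch lines.
    return { kind: "legacy", tools: ["memory_search", "memory_get"] };
  }
  return undefined;
}
```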

CHANGELOG.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -13,6 +13,7 @@ Docs: https://docs.openclaw.ai
 ### Fixes
 
 - Active Memory: allow `allowedChatTypes` to include explicit portal/webchat sessions and classify `agent:...:explicit:...` session keys before opaque session ids can shadow the chat type. Fixes #65775. (#66285) Thanks @Lidang-Jiang.
+- Active Memory: allow the hidden recall sub-agent to use both `memory_recall` and the legacy `memory_search`/`memory_get` memory tool contract, so bundled `memory-lancedb` recall works without breaking the default `memory-core` path. Fixes #73502. (#73584) Thanks @Takhoffman.
 - fix(device-pairing): validate callerScopes against resolved token scopes on repair [AI]. (#72925) Thanks @pgondhi987.
 - Active Memory docs: document the `cacheTtlMs` 1000-120000 ms range and 15000 ms default so setup snippets do not lead users past the schema limit. Fixes #65708. (#65737) Thanks @WuKongAI-CMU.
 - fix(agents): canonicalize provider aliases in byProvider tool policy lookup [AI]. (#72917) Thanks @pgondhi987.
```

docs/concepts/active-memory.md

Lines changed: 7 additions & 5 deletions
````diff
@@ -80,7 +80,7 @@ because it follows your existing provider, auth, and model preferences.
 If you want Active Memory to feel faster, use a dedicated inference model
 instead of borrowing the main chat model. Recall quality matters, but latency
 matters more than for the main answer path, and Active Memory's tool surface
-is narrow (it only calls `memory_search` and `memory_get`).
+is narrow (it only calls available memory recall tools).
 
 Good fast-model options:
 
@@ -332,8 +332,9 @@ flowchart LR
   I --> M["Main Reply"]
 ```
 
-The blocking memory sub-agent can use only:
+The blocking memory sub-agent can use only the available memory recall tools:
 
+- `memory_recall`
 - `memory_search`
 - `memory_get`
 
@@ -644,9 +645,10 @@ If active memory is too slow:
 
 ## Common issues
 
-Active Memory rides on the normal `memory_search` pipeline under
-`agents.defaults.memorySearch`, so most recall surprises are embedding-provider
-problems, not Active Memory bugs.
+Active Memory rides on the configured memory plugin's recall pipeline, so most
+recall surprises are embedding-provider problems, not Active Memory bugs. The
+default `memory-core` path uses `memory_search`; `memory-lancedb` uses
+`memory_recall`.
 
 <AccordionGroup>
 <Accordion title="Embedding provider switched or stopped working">
````

extensions/active-memory/index.test.ts

Lines changed: 7 additions & 2 deletions
```diff
@@ -1015,9 +1015,14 @@ describe("active-memory plugin", () => {
     expect(runParams?.prompt).toContain(
       "You receive conversation context, including the user's latest message.",
     );
-    expect(runParams?.prompt).toContain("Use only memory_search and memory_get.");
+    expect(runParams?.prompt).toContain("Use only the available memory tools.");
+    expect(runParams?.prompt).toContain("Prefer memory_recall when available.");
     expect(runParams?.prompt).toContain(
-      "When searching for preference or habit recall, use a permissive memory_search threshold before deciding that no useful memory exists.",
+      "If memory_recall is unavailable, use memory_search and memory_get.",
+    );
+    expect(runParams?.toolsAllow).toEqual(["memory_recall", "memory_search", "memory_get"]);
+    expect(runParams?.prompt).toContain(
+      "When searching for preference or habit recall, use a permissive recall limit or memory_search threshold before deciding that no useful memory exists.",
     );
     expect(runParams?.prompt).toContain(
       "If the user is directly asking about favorites, preferences, habits, routines, or personal facts, treat that as a strong recall signal.",
```

extensions/active-memory/index.ts

Lines changed: 11 additions & 5 deletions
```diff
@@ -848,8 +848,10 @@ function buildRecallPrompt(params: {
     "Another model is preparing the final user-facing answer.",
     "Your job is to search memory and return only the most relevant memory context for that model.",
     "You receive conversation context, including the user's latest message.",
-    "Use only memory_search and memory_get.",
-    "When searching for preference or habit recall, use a permissive memory_search threshold before deciding that no useful memory exists.",
+    "Use only the available memory tools.",
+    "Prefer memory_recall when available.",
+    "If memory_recall is unavailable, use memory_search and memory_get.",
+    "When searching for preference or habit recall, use a permissive recall limit or memory_search threshold before deciding that no useful memory exists.",
     "Do not answer the user directly.",
     `Prompt style: ${params.config.promptStyle}.`,
     ...buildPromptStyleLines(params.config.promptStyle),
@@ -1448,14 +1450,18 @@ function extractActiveMemorySearchDebugFromSessionRecord(
   const record = asRecord(value);
   const nestedMessage = asRecord(record?.message);
   const topLevelMessage =
-    record?.role === "toolResult" || record?.toolName === "memory_search" ? record : undefined;
+    record?.role === "toolResult" ||
+    record?.toolName === "memory_search" ||
+    record?.toolName === "memory_recall"
+      ? record
+      : undefined;
   const message = nestedMessage ?? topLevelMessage;
   if (!message) {
     return undefined;
   }
   const role = normalizeOptionalString(message.role);
   const toolName = normalizeOptionalString(message.toolName);
-  if (role !== "toolResult" || toolName !== "memory_search") {
+  if (role !== "toolResult" || (toolName !== "memory_search" && toolName !== "memory_recall")) {
     return undefined;
   }
   const details = asRecord(message.details);
@@ -2072,7 +2078,7 @@ async function runRecallSubagent(params: {
     timeoutMs: params.config.timeoutMs,
     runId: subagentSessionId,
     trigger: "manual",
-    toolsAllow: ["memory_search", "memory_get"],
+    toolsAllow: ["memory_recall", "memory_search", "memory_get"],
     disableMessageTool: true,
     bootstrapContextMode: "lightweight",
     verboseLevel: "off",
```
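The widened debug filter in `extractActiveMemorySearchDebugFromSessionRecord` boils down to accepting a `toolResult` from either recall tool. A minimal standalone sketch, with an assumed record shape and a hypothetical `isRecallToolResult` helper name:

```typescript
// Standalone sketch of the widened filter: a session record now counts as
// recall debug output for memory_search OR memory_recall. The SessionRecord
// shape here is assumed for illustration, not the plugin's real type.
interface SessionRecord {
  role?: string;
  toolName?: string;
}

const RECALL_TOOL_NAMES = new Set(["memory_search", "memory_recall"]);

function isRecallToolResult(record: SessionRecord): boolean {
  // Both conditions must hold, mirroring the patched guard in index.ts.
  return record.role === "toolResult" && RECALL_TOOL_NAMES.has(record.toolName ?? "");
}
```

Keeping both names in one set means any future recall surface only needs one more entry rather than another boolean clause.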

extensions/qa-lab/src/providers/mock-openai/server.test.ts

Lines changed: 14 additions & 14 deletions
```diff
@@ -1195,7 +1195,9 @@ describe("qa mock openai server", () => {
             type: "input_text",
             text: [
               "You are a memory search agent.",
-              "Use only memory_search and memory_get.",
+              "Use only the available memory tools.",
+              "Prefer memory_recall when available.",
+              "If memory_recall is unavailable, use memory_search and memory_get.",
               "",
               "Conversation context:",
               "Latest user message:",
@@ -1208,9 +1210,9 @@ describe("qa mock openai server", () => {
       }),
     });
     expect(activeMemorySearch.status).toBe(200);
-    expect(await activeMemorySearch.text()).toContain('"name":"memory_search"');
+    expect(await activeMemorySearch.text()).toContain('"name":"memory_recall"');
 
-    const activeMemoryGet = await fetch(`${server.baseUrl}/v1/responses`, {
+    const activeMemoryStreamSummary = await fetch(`${server.baseUrl}/v1/responses`, {
       method: "POST",
       headers: {
         "content-type": "application/json",
@@ -1225,7 +1227,9 @@ describe("qa mock openai server", () => {
             type: "input_text",
             text: [
               "You are a memory search agent.",
-              "Use only memory_search and memory_get.",
+              "Use only the available memory tools.",
+              "Prefer memory_recall when available.",
+              "If memory_recall is unavailable, use memory_search and memory_get.",
               "",
               "Conversation context:",
               "Latest user message:",
@@ -1237,20 +1241,14 @@ describe("qa mock openai server", () => {
           {
             type: "function_call_output",
             output: JSON.stringify({
-              results: [
-                {
-                  path: "MEMORY.md",
-                  startLine: 1,
-                  endLine: 1,
-                },
-              ],
+              text: "Stable QA movie night snack preference: lemon pepper wings with blue cheese.",
             }),
           },
         ],
       }),
     });
-    expect(activeMemoryGet.status).toBe(200);
-    expect(await activeMemoryGet.text()).toContain('"name":"memory_get"');
+    expect(activeMemoryStreamSummary.status).toBe(200);
+    expect(await activeMemoryStreamSummary.text()).toContain("lemon pepper wings with blue cheese");
 
     const activeMemorySummary = await fetch(`${server.baseUrl}/v1/responses`, {
       method: "POST",
@@ -1267,7 +1265,9 @@ describe("qa mock openai server", () => {
             type: "input_text",
             text: [
               "You are a memory search agent.",
-              "Use only memory_search and memory_get.",
+              "Use only the available memory tools.",
+              "Prefer memory_recall when available.",
+              "If memory_recall is unavailable, use memory_search and memory_get.",
               "",
               "Conversation context:",
               "Latest user message:",
```

extensions/qa-lab/src/providers/mock-openai/server.ts

Lines changed: 23 additions & 26 deletions
```diff
@@ -1447,37 +1447,34 @@ async function buildResponsesPayload(
     /silent snack recall check/i.test(allInputText)
   ) {
     if (!toolOutput) {
-      return buildToolCallEventsWithArgs("memory_search", {
+      return buildToolCallEventsWithArgs("memory_recall", {
         query: "QA movie night snack lemon pepper wings blue cheese",
-        maxResults: 3,
-      });
-    }
-    const results = Array.isArray(toolJson?.results)
-      ? (toolJson.results as Array<Record<string, unknown>>)
-      : [];
-    const first = results[0];
-    if (
-      typeof first?.path === "string" &&
-      (typeof first.startLine === "number" || typeof first.endLine === "number")
-    ) {
-      const from =
-        typeof first.startLine === "number"
-          ? Math.max(1, first.startLine)
-          : typeof first.endLine === "number"
-            ? Math.max(1, first.endLine)
-            : 1;
-      return buildToolCallEventsWithArgs("memory_get", {
-        path: first.path,
-        from,
-        lines: 4,
+        limit: 3,
       });
     }
-    const memorySnippet =
+    const memoryText =
       typeof toolJson?.text === "string"
         ? toolJson.text
-        : Array.isArray(toolJson?.results)
-          ? JSON.stringify(toolJson.results)
-          : toolOutput;
+        : Array.isArray(toolJson?.content)
+          ? toolJson.content
+              .map((item) =>
+                typeof item === "object" && item && "text" in item && typeof item.text === "string"
+                  ? item.text
+                  : "",
+              )
+              .filter(Boolean)
+              .join("\n")
+          : undefined;
+    if (memoryText) {
+      const snackPreference = extractSnackPreference(memoryText);
+      if (snackPreference) {
+        return buildAssistantEvents(`User usually wants ${snackPreference} for QA movie night.`);
+      }
+      return buildAssistantEvents("NONE");
+    }
+    const memorySnippet = Array.isArray(toolJson?.results)
+      ? JSON.stringify(toolJson.results)
+      : toolOutput;
     const snackPreference = extractSnackPreference(memorySnippet);
     if (snackPreference) {
       return buildAssistantEvents(`User usually wants ${snackPreference} for QA movie night.`);
```
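The mock server's new output handling has to accept two result shapes: a flat `{ text }` payload and a `{ content: [{ text }] }` array. A minimal sketch of that flattening, with an assumed `extractMemoryText` helper name and payload shapes inferred from the diff:

```typescript
// Sketch of flattening a memory_recall tool result to one string. Both input
// shapes are assumptions based on the mock fixtures above, not a documented
// contract: either a flat { text } payload or a { content: [{ text }] } array.
function extractMemoryText(
  toolJson: { text?: unknown; content?: unknown } | undefined,
): string | undefined {
  if (typeof toolJson?.text === "string") {
    return toolJson.text;
  }
  const content = toolJson?.content;
  if (!Array.isArray(content)) {
    return undefined;
  }
  // Keep only items that actually carry a string text field, then join.
  const joined = content
    .map((item) => (typeof item?.text === "string" ? item.text : ""))
    .filter(Boolean)
    .join("\n");
  return joined || undefined;
}
```

Returning `undefined` for unrecognized shapes lets a caller fall through to the legacy `results` handling, which is what the patched mock does.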
