feat(anthropic): add the new compaction feature (#12384)
## Background
Anthropic released a new compaction feature alongside the launch of Opus 4.6:
https://platform.claude.com/docs/en/build-with-claude/compaction
## Summary
- updated the schema
- updated the usage token calculation, since the outer-level params don't
account for the compaction tokens
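As a hedged illustration of the kind of adjustment described in the second bullet (all field names here, including the nested `compaction` counts, are assumptions for illustration, not this PR's actual schema):

```typescript
// Sketch only: fold hypothetical compaction token counts into the
// outer-level totals, since those totals don't include them on their own.
type RawUsage = {
  inputTokens: number;
  outputTokens: number;
  // Assumed nested counts reported for the compaction step.
  compaction?: { inputTokens: number; outputTokens: number };
};

export function totalUsage(usage: RawUsage): {
  inputTokens: number;
  outputTokens: number;
} {
  return {
    inputTokens: usage.inputTokens + (usage.compaction?.inputTokens ?? 0),
    outputTokens: usage.outputTokens + (usage.compaction?.outputTokens ?? 0),
  };
}
```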
## Manual Verification
verified by running the examples:
- [x]
`examples/ai-functions/src/generate-text/anthropic-compaction-pause.ts`
- [x] `examples/ai-functions/src/generate-text/anthropic-compaction.ts`
- [x] `examples/ai-functions/src/stream-text/anthropic-compaction.ts`
- [x] `http://localhost:3000/use-chat-anthropic-compaction`
  - large context is preloaded into the model
  - send a simple follow-up message and you'll see the compaction
    block created (also visible in the console logs)
  - multi-turn conversations work properly
## Checklist
- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)
## Related Issues
fixes #12297
@@ -290,6 +290,98 @@ const result = await generateText({

#### Compaction

The `compact_20260112` edit type automatically summarizes earlier conversation context when token limits are reached. This is useful for long-running conversations where you want to preserve the essence of earlier exchanges while staying within token limits.
- **instructions** - Custom instructions for how the model should summarize the conversation. Use this to guide the compaction summary towards specific aspects of the conversation you want to preserve.
- **pauseAfterCompaction** - When `true`, the model will pause after generating the compaction summary, allowing you to inspect or process it before continuing. Defaults to `false`.
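Purely as an illustration of how these two options might be passed together (the surrounding object shape is an assumption here, not the SDK's confirmed schema):

```typescript
// Illustrative only: where this object actually lives (e.g. inside
// provider options) depends on the SDK's real schema for this edit type.
const compactionConfig = {
  type: 'compact_20260112',
  instructions: 'Preserve key decisions and open questions.',
  pauseAfterCompaction: false,
};

export default compactionConfig;
```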
When compaction occurs, the model generates a summary of the earlier context. This summary appears as a text block with special provider metadata.

##### Detecting Compaction in Streams

When using `streamText`, you can detect compaction summaries by checking the `providerMetadata` on `text-start` events:
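As a hedged sketch of that pattern, using mocked stream parts rather than a live API call, and assuming the summary is flagged under an `anthropic.compaction` provider-metadata key (an illustrative guess at the shape, not the confirmed one):

```typescript
// Minimal, self-contained sketch: no live call, mocked parts only.
type StreamPart = {
  type: string;
  text?: string;
  providerMetadata?: Record<string, Record<string, unknown>>;
};

// Assumed shape: a text-start part carries an `anthropic.compaction`
// marker when it opens a compaction summary block.
export function isCompactionStart(part: StreamPart): boolean {
  return (
    part.type === 'text-start' &&
    part.providerMetadata?.['anthropic']?.['compaction'] != null
  );
}

// Mock of what `result.fullStream` from `streamText` might yield.
export async function* mockFullStream(): AsyncGenerator<StreamPart> {
  yield {
    type: 'text-start',
    providerMetadata: { anthropic: { compaction: { kind: 'summary' } } },
  };
  yield { type: 'text-delta', text: 'Summary of earlier turns...' };
  yield { type: 'text-start' }; // regular text block, no metadata
  yield { type: 'text-delta', text: 'Actual answer...' };
}

export async function countCompactionStarts(
  stream: AsyncIterable<StreamPart>,
): Promise<number> {
  let count = 0;
  for await (const part of stream) {
    if (isCompactionStart(part)) count++;
  }
  return count;
}
```

In real usage you would iterate `result.fullStream` from `streamText` the same way and branch on the detected parts.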
When using `useChat` or other UI hooks, compaction summaries appear as regular text parts with `providerMetadata`. You can style them differently in your UI:
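A minimal sketch of such a split, again assuming a hypothetical `anthropic.compaction` provider-metadata key rather than the confirmed shape:

```typescript
// Sketch: separate compaction-summary parts from regular text parts in
// a message, so the UI can render summaries with a muted style.
type MessagePart = {
  type: string;
  text?: string;
  providerMetadata?: Record<string, Record<string, unknown>>;
};

export function splitCompactionParts(parts: MessagePart[]): {
  compaction: MessagePart[];
  regular: MessagePart[];
} {
  const compaction: MessagePart[] = [];
  const regular: MessagePart[] = [];
  for (const part of parts) {
    if (
      part.type === 'text' &&
      part.providerMetadata?.['anthropic']?.['compaction'] != null
    ) {
      compaction.push(part);
    } else {
      regular.push(part);
    }
  }
  return { compaction, regular };
}
```

In a React component you could then render the `compaction` parts inside a collapsible element and the `regular` parts as normal chat text.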