✨ feat(editor): implement AI input auto-completion #13458
Conversation
@canisminor1990 @ONLY-yours - This PR implements AI input auto-completion via the editor's `ReactAutoCompletePlugin`.
Codecov Report ❌ Patch coverage and impacted files:

@@            Coverage Diff             @@
##           canary   #13458      +/-   ##
==========================================
- Coverage   66.43%   66.42%    -0.01%
==========================================
  Files        1976     1977        +1
  Lines      163601   163634       +33
  Branches    18709    16511     -2198
==========================================
+ Hits       108695   108701        +6
- Misses      54784    54811       +27
  Partials      122      122
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: f0263d598a
Adds GitHub Copilot-style ghost text completion to the chat input, powered by a configurable system agent (disabled by default). Key changes:

- Add `inputCompletion` system agent config (type, default, selector, i18n)
- Create `chainInputCompletion` prompt chain (V2 few-shot, benchmarked)
- Mount `ReactAutoCompletePlugin` in InputEditor when enabled
- Wire `getMessages` through ChatInput store for conversation context
- Add settings UI in Service Model page with enable toggle

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
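A minimal sketch of what the `inputCompletion` system agent entry could look like. The field names besides `inputCompletion` and `enabled` are assumptions for illustration, not the repo's real schema; the model is the one the PR benchmarks.

```typescript
// Hypothetical shape of a system agent config entry (assumed, not the real type).
interface SystemAgentItem {
  enabled: boolean;
  model: string;
  provider: string;
}

// Disabled by default, as the PR states; model/provider values are illustrative.
export const DEFAULT_INPUT_COMPLETION: SystemAgentItem = {
  enabled: false,
  model: 'gpt-5.4-mini',
  provider: 'openai',
};
```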
Filter getMessages to use displayMessages (active visible thread) instead of dbMessages (raw DB records including tool messages and inactive branches). Also limit to last 10 user/assistant turns to keep payload small and relevant. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
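The filtering described above can be sketched as follows. `ChatMessage` and `getCompletionContext` are simplified stand-ins for the app's real types, assuming "last 10 turns" means the last ten user/assistant messages.

```typescript
// Simplified stand-in for the app's message type.
interface ChatMessage {
  role: 'user' | 'assistant' | 'tool' | 'system';
  content: string;
}

// Assumption: "last 10 turns" = last ten user/assistant messages.
const MAX_CONTEXT_TURNS = 10;

// Keep only visible user/assistant turns and cap the payload size.
export const getCompletionContext = (displayMessages: ChatMessage[]): ChatMessage[] =>
  displayMessages
    .filter((m) => m.role === 'user' || m.role === 'assistant')
    .slice(-MAX_CONTEXT_TURNS);
```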
Autocomplete needs the full result before displaying ghost text, so streaming adds unnecessary overhead. Setting `stream: false` reduces latency by avoiding SSE chunking. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
fetchPresetTaskResult uses fetchSSE internally which cannot handle non-streaming JSON responses, causing the editor to freeze after receiving the completion result. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Autocomplete waits for the full result before displaying ghost text. fetchSSE handles non-streaming responses via its fallback path (response.clone().text()), avoiding SSE chunking overhead. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
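The fallback path described above could look roughly like this. This is a sketch, not the real `fetchSSE`: it only shows the idea of reading a non-streaming body whole via `response.clone().text()` instead of parsing SSE chunks.

```typescript
// Sketch of a non-streaming fallback (assumed, not the real fetchSSE internals).
export async function readCompletion(response: Response): Promise<string> {
  const contentType = response.headers.get('content-type') ?? '';
  if (!contentType.includes('text/event-stream')) {
    // Non-streaming path: clone so the original body stays readable elsewhere,
    // then read the whole payload at once.
    return await response.clone().text();
  }
  // Streaming path omitted; the real implementation parses SSE chunks here.
  return await response.text();
}
```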
Call getChatCompletion directly instead of fetchPresetTaskResult to avoid triggering agentDocument.getDocuments on every autocomplete request. Input completion only needs a simple LLM call with the prompt chain, not the full context engineering pipeline. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Use the standard contextEngineering pipeline. The agentDocument overhead will be addressed separately. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
# 🚀 release: 20260407

This release includes **148 commits**. Key updates are below.

- **Response API tool execution is more capable and reliable** — Added hosted builtin tools + client-side function tools and improved tool-call streaming/completion behavior. [#13406](#13406) [#13414](#13414) [#13506](#13506) [#13555](#13555)
- **Input and composition UX upgraded** — Added AI input auto-completion and multiple chat-input stability fixes. [#13458](#13458) [#13551](#13551) [#13481](#13481)
- **Model/provider compatibility improved** — Better Gemini/Google tool schema handling and additional model updates. [#13429](#13429) [#13465](#13465) [#13613](#13613)
- **Desktop and CLI reliability improved** — Gateway WebSocket support and desktop runtime upgrades. [#13608](#13608) [#13550](#13550) [#13557](#13557)
- **Security hardening continued** — Fixed auth and sanitization risks and upgraded vulnerable dependencies. [#13535](#13535) [#13529](#13529) [#13479](#13479)

### Models & Providers

- Added/updated support for `glm-5v-turbo`, GLM-5.1 updates, and the qwen3.5-omni series. [#13487](#13487) [#13405](#13405) [#13422](#13422)
- Added additional ImageGen providers/models (Wanxiang 2.7 and Keling from Qwen). [#13478](#13478)
- Improved Gemini/Google tool schema and compatibility handling across runtime paths. [#13429](#13429) [#13465](#13465) [#13613](#13613)

### Response API & Runtime

- Added hosted builtin tools in Response API and client-side function tool execution support. [#13406](#13406) [#13414](#13414)
- Improved stream tool-call argument handling and `response.completed` output correctness. [#13506](#13506) [#13555](#13555)
- Improved runtime error/context handling for intervention and provider edge cases. [#13420](#13420) [#13607](#13607)

### Desktop App

- Bumped desktop dependencies and runtime integrations (`agent-browser`, `electron`). [#13550](#13550) [#13557](#13557)
- Simplified desktop release channel setup by removing the nightly release flow. [#13480](#13480)

### CLI

- Added OpenClaw migration command. [#13566](#13566)
- Added local device binding support for `lh agent run`. [#13277](#13277)
- Added WebSocket gateway support and reconnect reliability improvements. [#13608](#13608) [#13418](#13418)

### Security

- Removed risky `apiKey` fallback behavior in the webapi auth path to prevent bypass risk. [#13535](#13535)
- Sanitized HTML artifact rendering and iframe sandboxing to reduce XSS-to-RCE risk. [#13529](#13529)
- Upgraded nodemailer to v8 to address an SMTP command injection advisory. [#13479](#13479)

### Bug Fixes

- Fixed image generation model default switch issues. [#13587](#13587)
- Fixed subtopic re-fork message scope behavior and agent panel reset edge cases. [#13606](#13606) [#13556](#13556)
- Fixed chat-input freeze on paste and mention plugin behavior. [#13551](#13551) [#13415](#13415)
- Fixed auth/social sign-in and settings UX edge cases. [#13368](#13368) [#13392](#13392) [#13338](#13338)

### Credits

Huge thanks to these contributors: @chriszf @hardy-one @Innei @lijian @neko @OctopusNote @rdmclin2 @rivertwilight @RylanCai @suyua9 @sxjeru @Tsuki @wangyk @WindSpiritSR @yizhuo @YuTengjing @hezhijie0327 @arvinxx
💻 Change Type
🔗 Related Issue
Fixes LOBE-6586
close #10486
🔀 Description of Change
Implements GitHub Copilot-style ghost text auto-completion in the chat input, powered by `@lobehub/editor`'s `ReactAutoCompletePlugin`.

**System Agent Config:**

- `inputCompletion`: `SystemAgentItem` in `UserSystemAgentConfig`
- `enabled: false` by default, uses a mini model

**Prompt Chain (V2 — benchmarked optimal):**

- `max_tokens: 100` to keep completions short

**Editor Integration:**

- Mounts `ReactAutoCompletePlugin` when enabled
- Passes an `abortSignal` for cancellation
- Calls `chatService.fetchPresetTaskResult` to get completions

**Data Flow:**

- `getMessages` callback wired through ChatInput store → StoreUpdater → Provider
- Uses `dbMessages` as context; the Home page has no context

**Benchmark Results (84 runs, 21 scenarios × 4 models):**

- `gpt-5.4-mini` — 21/21 PASS, ~3.9s latency, best quality stability
- `groq/llama-4-scout` — ~3.2s latency, $0.00004/call

🧪 How to Test

- Press `Tab` to accept, or continue typing to dismiss

📝 Additional Information
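The `abortSignal` cancellation mentioned in the description could work roughly like this sketch, which cancels an in-flight completion when the user keeps typing. Names here (`requestCompletion`, `fetchCompletion`) are assumptions for illustration, not the plugin's actual API.

```typescript
// Hypothetical sketch: abort the previous completion request when a new one starts.
let controller: AbortController | undefined;

export async function requestCompletion(
  fetchCompletion: (signal: AbortSignal) => Promise<string>,
): Promise<string | undefined> {
  controller?.abort(); // cancel any in-flight request
  controller = new AbortController();
  try {
    return await fetchCompletion(controller.signal);
  } catch (e) {
    // A cancelled request is expected when typing continues; surface nothing.
    if ((e as Error).name === 'AbortError') return undefined;
    throw e;
  }
}
```

Each keystroke that triggers a new request silently discards the stale one, so ghost text never reflects an outdated prefix.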