
✨ feat(editor): implement AI input auto-completion#13458

Merged
arvinxx merged 9 commits into canary from feat/lobe-6586-input-auto-completion
Apr 2, 2026

Conversation

@arvinxx
Member

@arvinxx arvinxx commented Apr 1, 2026

💻 Change Type

  • ✨ feat

🔗 Related Issue

Fixes LOBE-6586
close #10486

🔀 Description of Change

Implements GitHub Copilot-style ghost text auto-completion in the chat input, powered by @lobehub/editor's ReactAutoCompletePlugin.

System Agent Config:

  • New inputCompletion: SystemAgentItem in UserSystemAgentConfig
  • Default: enabled: false, mini model
  • Settings UI in Service Model page with enable toggle
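
The config entry described above might look roughly like the following sketch. The field names (`enabled`, `provider`, `model`) and the interface shapes are assumptions based on the PR text, not the actual `SystemAgentItem` definition:

```typescript
// Hypothetical shapes; the real SystemAgentItem / UserSystemAgentConfig
// types in the codebase may differ.
interface SystemAgentItem {
  enabled: boolean;
  model: string;
  provider: string;
}

interface UserSystemAgentConfig {
  inputCompletion: SystemAgentItem;
}

// Disabled by default, pointing at a small/fast model, matching the
// defaults described in the PR.
const DEFAULT_SYSTEM_AGENT_CONFIG: UserSystemAgentConfig = {
  inputCompletion: {
    enabled: false,
    model: 'gpt-5.4-mini',
    provider: 'openai', // assumed provider id
  },
};
```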

Prompt Chain (V2 — benchmarked optimal):

  • Concise system prompt with few-shot examples
  • Supports conversation context for contextually relevant completions
  • max_tokens: 100 to keep completions short
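
A payload built by a `chainInputCompletion`-style chain could be sketched as below. The system prompt wording and the few-shot pair are illustrative stand-ins, not the benchmarked V2 prompt; only the `max_tokens: 100` cap and the context-injection shape come from the PR description:

```typescript
interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Illustrative builder: system prompt + few-shot example + recent
// conversation turns + the user's partial input.
const buildInputCompletionPayload = (
  input: string,
  context: ChatMessage[] = [],
) => ({
  max_tokens: 100, // keep completions short
  messages: [
    {
      role: 'system' as const,
      content:
        "Continue the user's partial message. Reply with only the continuation.",
    },
    // Few-shot example (made up for illustration).
    { role: 'user' as const, content: 'How do I center a ' },
    { role: 'assistant' as const, content: 'div horizontally with CSS?' },
    // Recent conversation turns make the completion contextually relevant.
    ...context,
    { role: 'user' as const, content: input },
  ],
});
```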

Editor Integration:

  • Conditionally mounts ReactAutoCompletePlugin when enabled
  • 600ms delay before triggering completion
  • Uses plugin's built-in abortSignal for cancellation
  • Calls chatService.fetchPresetTaskResult to get completions
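
The delay-then-cancel behavior can be sketched as a small scheduler. The 600ms figure comes from the PR; `makeCompletionScheduler` and its shape are invented for illustration and are not the plugin's real API:

```typescript
// Each new trigger cancels the previous pending completion request,
// mirroring the plugin's built-in abortSignal behavior described above.
const makeCompletionScheduler = (delayMs: number) => {
  let pending: {
    timer: ReturnType<typeof setTimeout>;
    ctrl: AbortController;
  } | null = null;

  return (run: (signal: AbortSignal) => void): AbortController => {
    // A newer keystroke cancels the previous pending request.
    if (pending) {
      clearTimeout(pending.timer);
      pending.ctrl.abort();
    }
    const ctrl = new AbortController();
    const timer = setTimeout(() => run(ctrl.signal), delayMs);
    pending = { timer, ctrl };
    return ctrl;
  };
};

const scheduleCompletion = makeCompletionScheduler(600);
const first = scheduleCompletion(() => {});
const second = scheduleCompletion(() => {}); // aborts `first` synchronously
```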

Data Flow:

  • getMessages callback wired through ChatInput store → StoreUpdater → Provider
  • Conversation page provides dbMessages as context; Home page has no context

Benchmark Results (84 runs, 21 scenarios × 4 models):

  • Default model: gpt-5.4-mini — 21/21 PASS, ~3.9s latency, best quality stability
  • Speed option: groq/llama-4-scout — ~3.2s latency, $0.00004/call
  • Full results: LOBE-6596

🧪 How to Test

  1. Go to Settings → Service Model → Enable "Input Auto-Completion Agent"
  2. Navigate to any chat conversation
  3. Type a partial message (e.g., "How do I implement ") and wait ~1 second
  4. Ghost text should appear in gray after the cursor
  5. Press Tab to accept, or continue typing to dismiss

  • Tested locally
  • Added/updated tests
  • No tests needed

📝 Additional Information

  • Feature is disabled by default — users must opt-in via settings
  • Prompt was selected through systematic benchmarking (V2 few-shot beat V1 XML-tags and V3 structured across all models)
  • 10+ models tested; rejected models documented with reasons in benchmark sub-issue LOBE-6596

@vercel

vercel bot commented Apr 1, 2026

The latest updates on your projects.

| Project | Deployment | Actions | Updated (UTC) |
| --- | --- | --- | --- |
| lobehub | Ready | Preview, Comment | Apr 2, 2026 5:58pm |


Contributor

@sourcery-ai sourcery-ai bot left a comment


Sorry @arvinxx, you have reached your weekly rate limit of 500000 diff characters.

Please try again later or upgrade to continue using Sourcery

@lobehubbot
Member

@canisminor1990 @ONLY-yours - This PR implements AI input auto-completion via the editor's ReactAutoCompletePlugin. It touches the ChatInput editor integration (primary) and settings/system agent config (secondary). Please coordinate on review.

@codecov

codecov bot commented Apr 1, 2026

Codecov Report

❌ Patch coverage is 24.24242% with 25 lines in your changes missing coverage. Please review.
✅ Project coverage is 66.42%. Comparing base (dd7819b) to head (6dae86f).
⚠️ Report is 3 commits behind head on canary.

Additional details and impacted files
@@            Coverage Diff             @@
##           canary   #13458      +/-   ##
==========================================
- Coverage   66.43%   66.42%   -0.01%     
==========================================
  Files        1976     1977       +1     
  Lines      163601   163634      +33     
  Branches    18709    16511    -2198     
==========================================
+ Hits       108695   108701       +6     
- Misses      54784    54811      +27     
  Partials      122      122              
| Flag | Coverage Δ |
| --- | --- |
| app | 58.08% <85.71%> (-0.01%) ⬇️ |
| database | 92.57% <ø> (ø) |
| packages/agent-runtime | 88.98% <ø> (ø) |
| packages/context-engine | 86.51% <ø> (ø) |
| packages/conversation-flow | 92.36% <ø> (ø) |
| packages/file-loaders | 87.02% <ø> (ø) |
| packages/memory-user-memory | 66.68% <ø> (ø) |
| packages/model-bank | 99.85% <ø> (ø) |
| packages/model-runtime | 84.68% <ø> (ø) |
| packages/prompts | 65.80% <7.69%> (-0.69%) ⬇️ |
| packages/python-interpreter | 92.90% <ø> (ø) |
| packages/ssrf-safe-fetch | 0.00% <ø> (ø) |
| packages/utils | 90.41% <ø> (ø) |
| packages/web-crawler | 88.82% <ø> (ø) |

Flags with carried forward coverage won't be shown.

| Components | Coverage Δ |
| --- | --- |
| Store | 66.55% <100.00%> (+<0.01%) ⬆️ |
| Services | 49.05% <ø> (ø) |
| Server | 65.94% <ø> (-0.01%) ⬇️ |
| Libs | 51.03% <ø> (ø) |
| Utils | 91.01% <ø> (ø) |


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: f0263d598a


arvinxx and others added 3 commits April 3, 2026 01:47
Adds GitHub Copilot-style ghost text completion to the chat input,
powered by a configurable system agent (disabled by default).

Key changes:
- Add `inputCompletion` system agent config (type, default, selector, i18n)
- Create `chainInputCompletion` prompt chain (V2 few-shot, benchmarked)
- Mount `ReactAutoCompletePlugin` in InputEditor when enabled
- Wire `getMessages` through ChatInput store for conversation context
- Add settings UI in Service Model page with enable toggle

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Filter getMessages to use displayMessages (active visible thread)
instead of dbMessages (raw DB records including tool messages and
inactive branches). Also limit to last 10 user/assistant turns to
keep payload small and relevant.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
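
The filtering described in this commit can be sketched as below. `getContextMessages` and the `DisplayMessage` shape are made up for illustration; the real code lives in the ChatInput store wiring:

```typescript
interface DisplayMessage {
  role: string;
  content: string;
}

// Keep only user/assistant turns (dropping tool messages and other
// roles) and trim to the most recent `limit` turns, as the commit
// message describes.
const getContextMessages = (
  displayMessages: DisplayMessage[],
  limit = 10,
): DisplayMessage[] =>
  displayMessages
    .filter((m) => m.role === 'user' || m.role === 'assistant')
    .slice(-limit);
```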
arvinxx and others added 6 commits April 3, 2026 01:47
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Autocomplete needs the full result before displaying ghost text,
so streaming adds unnecessary overhead. Setting stream: false
reduces latency by avoiding SSE chunking.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
fetchPresetTaskResult uses fetchSSE internally which cannot handle
non-streaming JSON responses, causing the editor to freeze after
receiving the completion result.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Autocomplete waits for the full result before displaying ghost text.
fetchSSE handles non-streaming responses via its fallback path
(response.clone().text()), avoiding SSE chunking overhead.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
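
The fallback this commit relies on might look roughly like the sketch below: when the response is not an event stream, read the whole body at once. This is a hedged illustration of the pattern, not the actual `fetchSSE` implementation:

```typescript
// Assumed helper: content-type sniffing to pick streaming vs. non-streaming.
const isEventStream = (contentType: string): boolean =>
  contentType.includes('text/event-stream');

const readCompletion = async (response: Response): Promise<string> => {
  const contentType = response.headers.get('content-type') ?? '';
  if (!isEventStream(contentType)) {
    // Non-streaming path: read the full body via response.clone().text(),
    // avoiding SSE chunk parsing entirely.
    return response.clone().text();
  }
  // Streaming path would consume SSE chunks here (omitted for brevity).
  return response.text();
};
```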
Call getChatCompletion directly instead of fetchPresetTaskResult
to avoid triggering agentDocument.getDocuments on every autocomplete
request. Input completion only needs a simple LLM call with the
prompt chain, not the full context engineering pipeline.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Use the standard contextEngineering pipeline. The agentDocument
overhead will be addressed separately.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@arvinxx arvinxx force-pushed the feat/lobe-6586-input-auto-completion branch from a27edf1 to 6dae86f on April 2, 2026 at 17:47
@arvinxx arvinxx merged commit 251e12c into canary Apr 2, 2026
34 checks passed
@arvinxx arvinxx deleted the feat/lobe-6586-input-auto-completion branch April 2, 2026 18:00
arvinxx added a commit that referenced this pull request Apr 7, 2026
# 🚀 release: 20260407

This release includes **148 commits**. Key updates are below.

- **Response API tool execution is more capable and reliable** — Added
hosted builtin tools + client-side function tools and improved tool-call
streaming/completion behavior.
[#13406](#13406)
[#13414](#13414)
[#13506](#13506)
[#13555](#13555)
- **Input and composition UX upgraded** — Added AI input auto-completion
and multiple chat-input stability fixes.
[#13458](#13458)
[#13551](#13551)
[#13481](#13481)
- **Model/provider compatibility improved** — Better Gemini/Google tool
schema handling and additional model updates.
[#13429](#13429)
[#13465](#13465)
[#13613](#13613)
- **Desktop and CLI reliability improved** — Gateway WebSocket support
and desktop runtime upgrades.
[#13608](#13608)
[#13550](#13550)
[#13557](#13557)
- **Security hardening continued** — Fixed auth and sanitization risks
and upgraded vulnerable dependencies.
[#13535](#13535)
[#13529](#13529)
[#13479](#13479)

### Models & Providers

- Added/updated support for `glm-5v-turbo`, GLM-5.1 updates, and
qwen3.5-omni series.
[#13487](#13487)
[#13405](#13405)
[#13422](#13422)
- Added additional ImageGen providers/models (Wanxiang 2.7 and Keling
from Qwen). [#13478](#13478)
- Improved Gemini/Google tool schema and compatibility handling across
runtime paths. [#13429](#13429)
[#13465](#13465)
[#13613](#13613)

### Response API & Runtime

- Added hosted builtin tools in Response API and client-side function
tool execution support.
[#13406](#13406)
[#13414](#13414)
- Improved stream tool-call argument handling and `response.completed`
output correctness.
[#13506](#13506)
[#13555](#13555)
- Improved runtime error/context handling for intervention and provider
edge cases. [#13420](#13420)
[#13607](#13607)

### Desktop App

- Bumped desktop dependencies and runtime integrations (`agent-browser`,
`electron`). [#13550](#13550)
[#13557](#13557)
- Simplified desktop release channel setup by removing nightly release
flow. [#13480](#13480)

### CLI

- Added OpenClaw migration command.
[#13566](#13566)
- Added local device binding support for `lh agent run`.
[#13277](#13277)
- Added WebSocket gateway support and reconnect reliability
improvements. [#13608](#13608)
[#13418](#13418)

### Security

- Removed risky `apiKey` fallback behavior in webapi auth path to
prevent bypass risk.
[#13535](#13535)
- Sanitized HTML artifact rendering and iframe sandboxing to reduce
XSS-to-RCE risk. [#13529](#13529)
- Upgraded nodemailer to v8 to address SMTP command injection advisory.
[#13479](#13479)

### Bug Fixes

- Fixed image generation model default switch issues.
[#13587](#13587)
- Fixed subtopic re-fork message scope behavior and agent panel reset
edge cases. [#13606](#13606)
[#13556](#13556)
- Fixed chat-input freeze on paste and mention plugin behavior.
[#13551](#13551)
[#13415](#13415)
- Fixed auth/social sign-in and settings UX edge cases.
[#13368](#13368)
[#13392](#13392)
[#13338](#13338)

### Credits

Huge thanks to these contributors:

@chriszf @hardy-one @Innei @lijian @neko @OctopusNote @rdmclin2
@rivertwilight @RylanCai @suyua9 @sxjeru @Tsuki @wangyk @WindSpiritSR
@yizhuo @YuTengjing @hezhijie0327 @arvinxx


Development

Successfully merging this pull request may close these issues.

[Request] Automatically completed by AI as you type

2 participants