
feat: CLI overhaul, printer refactor, skills v2, and LLM fixes #37

Merged: v3g42 merged 20 commits into main from fix/distri, Mar 28, 2026
Conversation

v3g42 (Contributor) commented Mar 28, 2026

Summary

Major improvements to the CLI experience, client library, and server-side skill infrastructure.

CLI (distri-cli)

  • Multiline paste support: replaced inquire::Text with rustyline plus bracketed paste mode, so pasted multi-line text stays as a single message
  • Thread resume: --resume <id|last> flag on tui/run commands; prints message history on resume for context
  • Refactored into modules: split monolithic main.rs into chat.rs, commands.rs, config.rs, input.rs, message.rs, threads.rs, tools.rs
  • Better output: agent display name instead of generic "assistant:", thread ID in header/exit, /tools toggle and Ctrl+O shortcut to hide tool output, suppress internal probe calls
  • Double Ctrl+C to exit: first press shows hint, second press within 2s exits
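The double Ctrl+C behavior above boils down to a small piece of timing state. A minimal sketch of that logic, using only std (the type and method names here are illustrative, not the actual distri-cli code):

```rust
use std::time::{Duration, Instant};

/// Tracks the last Ctrl+C press. A second press within the window
/// requests exit; otherwise the caller shows a hint.
struct CtrlCGuard {
    last_press: Option<Instant>,
    window: Duration,
}

impl CtrlCGuard {
    fn new() -> Self {
        Self { last_press: None, window: Duration::from_secs(2) }
    }

    /// Returns true when this press should exit the program.
    fn on_ctrl_c(&mut self, now: Instant) -> bool {
        match self.last_press {
            // Second press within the window: exit.
            Some(prev) if now.duration_since(prev) <= self.window => true,
            // First press (or the window expired): record it and hint.
            _ => {
                self.last_press = Some(now);
                false
            }
        }
    }
}
```

In the real CLI this state lives alongside the rustyline loop and the first `false` result triggers the "press Ctrl+C again to exit" hint; Ctrl+D still exits immediately.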

Client library (distri)

  • Printer refactor: improved rendering across all tool result types (browser, code, data, search, shell)
  • New platform renderer: added renderers/platform.rs
  • api_request_tool: new module replacing platform_tool.rs

Server (distri-core)

  • Skills v2 (#34): SkillRelation, path field, graph traversal on SkillStore trait; tool_search now matches tags and paths
  • Dry-run simulator: safe tools execute normally, unsafe tools get LLM-simulated responses (gpt-5-nano); dry_run flag propagated from client metadata through A2A to ExecutorContext
  • LLM fix: use max_completion_tokens instead of max_tokens for newer OpenAI models (gpt-4.1, gpt-5, o-series)

Misc

  • Removed hardcoded model mentions from agent definitions
  • Default model settings fix

Test plan

  • cargo check passes across workspace
  • cargo test passes
  • CLI: verify multiline paste, --resume last, /tools toggle, Ctrl+O, double Ctrl+C
  • Dry-run mode: send {"dry_run": true} in message metadata and confirm unsafe tools are simulated
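The dry-run test item above can be exercised with message metadata along these lines. Only the `dry_run` key is confirmed by this PR; the surrounding JSON-RPC envelope is abbreviated and the method name is an assumption about the A2A transport:

```json
{
  "method": "message/send",
  "params": {
    "message": { "...": "..." },
    "metadata": { "dry_run": true }
  }
}
```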

v3g42 and others added 15 commits March 27, 2026 01:10
Switches the interactive chat input from inquire::Text to rustyline with
bracketed paste mode enabled, so pasted multi-line text stays as a single
message instead of being split on each newline. Adds slash-command tab
completion via rustyline's Completer trait and a placeholder hint via
Hinter. History is now managed natively by rustyline with arrow-key
navigation.

https://claude.ai/code/session_017YeCUYbAKZqsGV4599Rrso
…sion

- Replace "assistant:" label with "◆ agent_name:" using the current agent's
  display name instead of a generic role label
- Add /tools slash command to toggle tool call output on/off, letting users
  hide noisy tool blocks and focus on agent responses
- Suppress internal probe calls (load_skill("?"), discovery GET requests)
  from non-verbose output since they're agent internals
- Show thread ID in welcome header and on exit so users can easily resume
  sessions with /resume last or /resume <id>
- Show short thread ID when context is cleared via /clear

https://claude.ai/code/session_017YeCUYbAKZqsGV4599Rrso
Use rustyline's ConditionalEventHandler to bind Ctrl+O as a toggle for
tool output visibility. The hint text updates to show "[tools hidden ·
Ctrl+O]" when tools are hidden. State is shared via an Arc<AtomicBool>
between the helper (for hints) and the handler (for toggling). The help
message now has a dedicated KEYBOARD SHORTCUTS section.
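The shared-flag pattern this commit describes can be sketched with std only: one `Arc<AtomicBool>` is cloned into both the hinter (read side) and the Ctrl+O handler (toggle side). Function names here are illustrative, not the actual distri-cli API:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

/// Handler side: Ctrl+O atomically flips the visibility flag.
/// fetch_xor with `true` inverts the bool in one atomic step.
fn toggle_tools(flag: &AtomicBool) {
    flag.fetch_xor(true, Ordering::SeqCst);
}

/// Hinter side: reads the flag to decide which hint to render.
fn tools_hint(flag: &AtomicBool) -> &'static str {
    if flag.load(Ordering::SeqCst) {
        "[tools hidden · Ctrl+O]"
    } else {
        ""
    }
}
```

Because the shared state is a single boolean, an atomic is enough; no mutex is needed between the rustyline helper and the event handler.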

https://claude.ai/code/session_017YeCUYbAKZqsGV4599Rrso
- Add --resume <id|last> flag to both `tui` and `run` commands so users
  can resume a thread directly from the CLI without entering interactive
  mode first
- Implement double Ctrl+C to exit: first press shows a hint, second
  press within 2 seconds exits. Ctrl+D still exits immediately
- Print thread message history when resuming (via --resume flag or
  /resume command), showing user prompts and assistant responses in a
  compact gray format so the user has context
- Add get_thread_messages() to the Distri client for fetching thread
  message history from the server

https://claude.ai/code/session_017YeCUYbAKZqsGV4599Rrso
… gpt-5, o-series)

These models reject the legacy max_tokens parameter. The request builder now
dynamically selects which parameter to use based on the model name; only one
is ever sent per request.
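The selection can be sketched as a prefix check on the model name. The gpt-4.1/gpt-5/o-series list comes from the PR summary; the exact o-series prefixes and the function names are assumptions for illustration:

```rust
/// Newer OpenAI models reject the legacy `max_tokens` parameter and
/// require `max_completion_tokens` instead.
fn uses_max_completion_tokens(model: &str) -> bool {
    model.starts_with("gpt-4.1")
        || model.starts_with("gpt-5")
        || model.starts_with("o1")
        || model.starts_with("o3")
        || model.starts_with("o4")
}

/// Returns the single token-limit field to include in the request
/// body for this model; only one of the two is ever sent.
fn token_limit_param(model: &str, limit: u32) -> (&'static str, u32) {
    if uses_max_completion_tokens(model) {
        ("max_completion_tokens", limit)
    } else {
        ("max_tokens", limit)
    }
}
```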
* feat(types): add SkillRelation, path, and graph traversal methods to SkillStore trait

* feat(types): add path field to NewSkill and SkillListItem for hierarchy support

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(stores): add path and relations fields to SkillRecord in diesel_store

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(tools): add tag and path matching to tool_search

- tool_search now matches on tags and path fields (present on skill-backed tools)
- Include tags/path in search result JSON when available
- Add path field to AvailableSkill type for prompt rendering
- Orchestrator includes path in skill list when present: (id: `...`, path: ...)
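The extended matching the bullets above describe, name plus tags plus path, can be sketched as follows. The struct and field names are illustrative, not the actual distri-core types:

```rust
/// A searchable tool entry; skill-backed tools additionally carry
/// tags and a hierarchy path (e.g. "billing/invoices").
struct ToolEntry {
    name: String,
    tags: Vec<String>,
    path: Option<String>,
}

/// A query now matches on the tool name, any tag, or the path.
fn matches(entry: &ToolEntry, query: &str) -> bool {
    let q = query.to_lowercase();
    entry.name.to_lowercase().contains(&q)
        || entry.tags.iter().any(|t| t.to_lowercase().contains(&q))
        || entry
            .path
            .as_deref()
            .map_or(false, |p| p.to_lowercase().contains(&q))
}
```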

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(tools): add dry-run simulator — safe tools execute, unsafe tools get LLM-simulated responses

- simulator.rs: calls gpt-5-nano (Azure) to generate plausible tool responses
- Whitelist of safe tools (tool_search, load_skill, write_todos, search, final)
- Unsafe tools (api_request, connection_request, execute_shell, etc.) are simulated
- ExecutorContext gets dry_run: bool flag
- execute_tool_calls_with_timeout checks flag before each tool execution
- Simulated responses marked with _simulated: true
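The gate described above can be sketched as a whitelist check before dispatch. The safe-tool list is taken from the commit message; the enum and function names are illustrative:

```rust
use std::collections::HashSet;

/// Safe tools run for real even in dry-run mode.
fn is_safe_tool(name: &str) -> bool {
    let safe: HashSet<&str> =
        ["tool_search", "load_skill", "write_todos", "search", "final"]
            .into_iter()
            .collect();
    safe.contains(name)
}

enum Execution {
    /// Execute the tool normally.
    Real,
    /// Ask the LLM simulator for a plausible response
    /// (marked with `_simulated: true`).
    Simulated,
}

/// Checked before each tool execution when the dry_run flag is set.
fn plan_execution(tool: &str, dry_run: bool) -> Execution {
    if dry_run && !is_safe_tool(tool) {
        Execution::Simulated
    } else {
        Execution::Real
    }
}
```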

* feat(api): propagate dry_run flag from message metadata to ExecutorContext

Add dry_run field to ExecutorContextMetadata so clients can pass
{"dry_run": true} in the JSON-RPC message metadata. The A2A handler
reads this field and sets ExecutorContext.dry_run, enabling the eval
simulator to trigger simulated tool execution without side effects.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(simulator): intercept external tools (api_request) in dry-run mode before handle_external_tool_inline

External tools were bypassing the dry-run check and going straight to
handle_external_tool_inline, which waits for a client response that never
comes in eval mode. The dry-run check now happens BEFORE the external tool check.

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
v3g42 changed the title from "Fix/distri" to "feat: CLI overhaul, printer refactor, skills v2, and LLM fixes" on Mar 28, 2026
v3g42 and others added 5 commits March 28, 2026 11:02
…ill-hierarchy branch for testing

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…OAuth proxy

api_request now supports two modes:
- path: Distri platform API (existing)
- url + connection_id: external API with auto-injected OAuth token (new)

Agent calls: api_request({method: 'GET', url: 'https://googleapis.com/...', connection_id: '<id>'})
Tool proxies via POST /connections/{id}/request, token auto-injected.
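The two modes described above can be sketched as a tagged union: `path` targets the Distri platform API, while `url` plus `connection_id` proxies through the OAuth connection endpoint. The types and the example IDs below are illustrative:

```rust
/// The two addressing modes api_request now supports.
enum ApiTarget {
    /// Existing mode: a path on the Distri platform API.
    Platform { path: String },
    /// New mode: an external URL, proxied with an auto-injected
    /// OAuth token for the given connection.
    External { url: String, connection_id: String },
}

/// External calls are proxied via POST /connections/{id}/request;
/// platform calls hit the API directly (no proxy endpoint).
fn proxy_endpoint(target: &ApiTarget) -> Option<String> {
    match target {
        ApiTarget::Platform { .. } => None,
        ApiTarget::External { connection_id, .. } => {
            Some(format!("/connections/{}/request", connection_id))
        }
    }
}
```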
@v3g42 v3g42 merged commit 4a0c2de into main Mar 28, 2026
@v3g42 v3g42 deleted the fix/distri branch March 30, 2026 19:25