feat: CLI overhaul, printer refactor, skills v2, and LLM fixes (#37)
Merged
Switches the interactive chat input from inquire::Text to rustyline with bracket paste mode enabled, so pasted multi-line text stays as a single message instead of being split on each newline. Adds slash-command tab completion via rustyline's Completer trait and a placeholder hint via Hinter. History is now managed natively by rustyline with arrow-key navigation. https://claude.ai/code/session_017YeCUYbAKZqsGV4599Rrso
- Replace "assistant:" label with "◆ agent_name:" using the current agent's
display name instead of a generic role label
- Add /tools slash command to toggle tool call output on/off, letting users
hide noisy tool blocks and focus on agent responses
- Suppress internal probe calls (load_skill("?"), discovery GET requests)
from non-verbose output since they're agent internals
- Show thread ID in welcome header and on exit so users can easily resume
sessions with /resume last or /resume <id>
- Show short thread ID when context is cleared via /clear
Use rustyline's ConditionalEventHandler to bind Ctrl+O as a toggle for tool output visibility. The hint text updates to show "[tools hidden · Ctrl+O]" when tools are hidden. State is shared via an Arc<AtomicBool> between the helper (for hints) and the handler (for toggling). The help message now has a dedicated KEYBOARD SHORTCUTS section.
- Add --resume <id|last> flag to both `tui` and `run` commands so users can resume a thread directly from the CLI without entering interactive mode first
- Implement double Ctrl+C to exit: first press shows a hint, second press within 2 seconds exits. Ctrl+D still exits immediately
- Print thread message history when resuming (via --resume flag or /resume command), showing user prompts and assistant responses in a compact gray format so the user has context
- Add get_thread_messages() to the Distri client for fetching thread message history from the server
… gpt-5, o-series). These models reject the legacy max_tokens parameter, so the client now selects the parameter dynamically based on the model name; only one of max_tokens / max_completion_tokens is ever sent per request.
* feat(types): add SkillRelation, path, and graph traversal methods to SkillStore trait
* feat(types): add path field to NewSkill and SkillListItem for hierarchy support
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(stores): add path and relations fields to SkillRecord in diesel_store
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* feat(tools): add tag and path matching to tool_search
- tool_search now matches on tags and path fields (present on skill-backed tools)
- Include tags/path in search result JSON when available
- Add path field to AvailableSkill type for prompt rendering
- Orchestrator includes path in skill list when present: (id: `...`, path: ...)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* feat(tools): add dry-run simulator — safe tools execute, unsafe tools get LLM-simulated responses
- simulator.rs: calls gpt-5-nano (Azure) to generate plausible tool responses
- Whitelist of safe tools (tool_search, load_skill, write_todos, search, final)
- Unsafe tools (api_request, connection_request, execute_shell, etc.) are simulated
- ExecutorContext gets dry_run: bool flag
- execute_tool_calls_with_timeout checks flag before each tool execution
- Simulated responses marked with _simulated: true
* feat(api): propagate dry_run flag from message metadata to ExecutorContext
Add dry_run field to ExecutorContextMetadata so clients can pass
{"dry_run": true} in the JSON-RPC message metadata. The A2A handler
reads this field and sets ExecutorContext.dry_run, enabling the eval
simulator to trigger simulated tool execution without side effects.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(simulator): intercept external tools (api_request) in dry-run mode before handle_external_tool_inline
External tools were bypassing the dry-run check and going straight to
handle_external_tool_inline, which waits for a client response that never
comes in eval mode. The dry-run check now happens BEFORE the external-tool check.
---------
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
…ill-hierarchy branch for testing
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…OAuth proxy
api_request now supports two modes:
- path: Distri platform API (existing)
- url + connection_id: external API with auto-injected OAuth token (new)
Agent calls: api_request({method: 'GET', url: 'https://googleapis.com/...', connection_id: '<id>'})
Tool proxies via POST /connections/{id}/request, token auto-injected.
Summary
Major improvements to the CLI experience, client library, and server-side skill infrastructure.
CLI (distri-cli)
- Replace `inquire::Text` with `rustyline` + bracket paste mode so pasted multi-line text stays as a single message
- `--resume <id|last>` flag on `tui`/`run` commands; prints message history on resume for context
- Split `main.rs` into `chat.rs`, `commands.rs`, `config.rs`, `input.rs`, `message.rs`, `threads.rs`, `tools.rs`
- `/tools` toggle and `Ctrl+O` shortcut to hide tool output; suppress internal probe calls
Client library (distri)
- `renderers/platform.rs`
- `api_request_tool`: new module replacing `platform_tool.rs`
Server (distri-core)
- Skill hierarchy (#34): `SkillRelation`, `path` field, graph traversal on `SkillStore` trait; `tool_search` now matches tags and paths
- `dry_run` flag propagated from client metadata through A2A to `ExecutorContext`
- Use `max_completion_tokens` instead of `max_tokens` for newer OpenAI models (gpt-4.1, gpt-5, o-series)
Misc
Test plan
- `cargo check` passes across workspace
- `cargo test` passes
- Manually exercise `--resume last`, `/tools` toggle, `Ctrl+O`, double Ctrl+C
- Pass `{"dry_run": true}` in message metadata and confirm unsafe tools are simulated