Merged
Conversation
- Change LLM.stream to return a raw async iterable; move chunk parsing and state updates into sendChat
- Simplify OpenAI and Anthropic stream methods to just return the stream
- Add webSearch and send helper methods to AnthropicAdapter
- Default ANTHROPIC_WEB_SEARCH_MODEL to fall back on ANTHROPIC_MODEL
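The split above (providers return the raw stream, sendChat owns parsing) can be sketched as follows. This is a minimal illustration, not the project's actual code: the chunk shape, the JSON-lines framing, and the function signature are assumptions.

```typescript
// Hypothetical chunk shape; real provider chunks differ per SDK.
type Chunk = { delta?: string };

// sendChat now owns parsing and state updates; the adapter's stream()
// just hands over the provider's raw async iterable of lines.
async function sendChat(
  stream: AsyncIterable<string>,
  onChunk: (text: string) => void,
): Promise<string> {
  let full = "";
  for await (const line of stream) {
    let parsed: Chunk;
    try {
      parsed = JSON.parse(line); // parsing lives here, not in LLM.stream
    } catch {
      continue; // skip malformed chunks instead of crashing the stream
    }
    if (parsed.delta) {
      full += parsed.delta;
      onChunk(parsed.delta);
    }
  }
  return full;
}
```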
- Increase MAX_HISTORY from 16 → 32 and MAX_ITERATIONS from 16 → 24
- Simplify trimHistory to keep only the first + last N messages
- Stream API: remove unused state & opts args from LLM.stream calls and provider impls
- Clean up tools: drop log param in getTools, remove log calls
- Re-enable "help" tool in commonTools
- Add llm parameter to fsTools for consistency
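The trimHistory simplification can be sketched like this (a minimal illustration; the message type and helper signature are assumptions, only the first-plus-last-N behaviour comes from the commit message):

```typescript
type ChatMessage = { role: string; content: string };

const MAX_HISTORY = 32; // raised from 16 in this change

// Keep the first message (typically the system prompt) plus the most
// recent max - 1 messages; drop everything in between.
function trimHistory(history: ChatMessage[], max = MAX_HISTORY): ChatMessage[] {
  if (history.length <= max) return history;
  return [history[0], ...history.slice(history.length - (max - 1))];
}
```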
- Introduce LLMAdapter interface (init, stream, send, webSearch)
- Extract MAX_CHAT_HISTORY & MAX_TOOL_ITERATIONS constants
- Replace SendChatOpts with a direct onChunk callback in sendChat
- Rename streamOpenAIResponse → streamLLMResponse
- Add PreparedTools type and strongly type getTools & tool factories
- Update OpenAI & Anthropic providers to implement LLMAdapter
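The LLMAdapter surface named above (init, stream, send, webSearch) might look roughly like this. The method names come from the commit message; the exact signatures, the message type, and the stub implementation are assumptions for illustration.

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Provider-agnostic adapter; OpenAI and Anthropic each implement this.
interface LLMAdapter {
  init(): Promise<void>;
  // Returns the provider's raw async iterable of chunks;
  // chunk parsing happens downstream, in sendChat.
  stream(messages: ChatMessage[]): Promise<AsyncIterable<unknown>>;
  send(messages: ChatMessage[]): Promise<string>;
  webSearch(query: string): Promise<string>;
}

// A stub implementation, handy for tests:
const stubAdapter: LLMAdapter = {
  async init() {},
  async stream() {
    async function* gen() {
      yield "hel";
      yield "lo";
    }
    return gen();
  },
  async send() {
    return "ok";
  },
  async webSearch(query) {
    return `results for ${query}`;
  },
};
```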
- Require ANTHROPIC_API_KEY alongside OPENAI_API_KEY
- Note current support for OpenAI and Anthropic providers
- Add example config fields: provider, reasoning_effort, webSearchModel
- Add LLMAdapter import in tools.ts
- Change llm parameter type from any to LLMAdapter
- Suppress unused-vars lint in fsTools with deno-lint-ignore
…andling
- Add clipboard functionality to copy chat history
- Add error handling for JSON parsing in sendChat function
- Update the help tool to use strict schema validation with context parameter
- Enhance git tools with proper schema validation using zod
- Improve function parameter formatting for better readability
- Change 'help' hint to 'help!' in chat placeholder
- Fix function argument serialization in sendChat
- Remove debug clipboard write of history
- Simplify help tool by removing unnecessary parameters
- Clean up git tools by removing redundant strict schema validation
- Update system prompt to use the help tool appropriately
kamilmac commented on May 17, 2025:
    export interface Config {
      provider?: "openai" | "anthropic" | "gemini";
      model?: string;
      reasoning_effort?: string;
Fix reasoning effort support
Add configuration for Ollama as an LLM provider, including default settings for base URL and API key. Move provider configuration from constants to the default LLM adapter, and refactor variable names for better clarity.
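The Ollama defaults described above might look like the fragment below. The values are illustrative assumptions: 11434 is Ollama's standard local port and /v1 its OpenAI-compatible endpoint, and the API key is a dummy since Ollama ignores it while OpenAI-style clients still require one.

```typescript
// Illustrative defaults for the Ollama provider; names are hypothetical.
const OLLAMA_DEFAULTS = {
  baseUrl: "http://localhost:11434/v1",
  apiKey: "ollama", // placeholder: Ollama does not validate keys
};
```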
When handling streaming Gemini responses where tool calls initially lack an ID and content, use the tool's function name (`fnName`) as the temporary ID. Updates the temporary content string for clarity.
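The fnName fallback can be sketched as follows. The chunk and tool-call shapes are assumptions for illustration; only the "use the function name as a temporary ID when the real one hasn't streamed in yet" behaviour comes from the change description.

```typescript
// Hypothetical shapes for a streamed Gemini tool call; the real SDK types differ.
type ToolCallChunk = { id?: string; fnName: string; args?: string };
type ToolCall = { id: string; fnName: string; content: string };

function normalizeToolCall(chunk: ToolCallChunk): ToolCall {
  return {
    // Early chunks may arrive without an id; fall back to the function
    // name as a temporary identifier until the real id streams in.
    id: chunk.id ?? chunk.fnName,
    fnName: chunk.fnName,
    // Temporary placeholder content while arguments are still streaming.
    content: chunk.args ?? "(pending tool call)",
  };
}
```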
Adjust import paths for LLMAdapter following refactoring of LLM providers.
- Introduce `autoscrollActive` state to manage autoscrolling.
- Autoscroll only when streaming and `autoscrollActive` is true.
- Disable autoscroll on any user input (scrolling, navigation).
- Re-enable autoscroll when a new stream begins or the user navigates to the end.
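The autoscroll rules above amount to a small state machine, sketched below. The class and method names are illustrative, not the project's; only the enable/disable conditions come from the commit message.

```typescript
// Minimal state machine for the autoscroll behaviour described above.
class AutoscrollState {
  private active = true;

  onStreamStart() { this.active = true; }    // new stream re-enables autoscroll
  onUserInput() { this.active = false; }     // any scroll/navigation disables it
  onNavigateToEnd() { this.active = true; }  // jumping to the end re-enables it

  shouldScroll(isStreaming: boolean): boolean {
    return isStreaming && this.active;       // scroll only while streaming
  }
}
```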
…ssaging
- Renamed the function from "git_auto_commit" to "git_commit"
- Expanded the description to call out "No reviews – just straight commit"
- Updated the return text to "Return directly to user and show this review"
- Change default reasoning_effort from "standard" to "medium" and document its low/medium/high options
- Rename README field webSearchModel → web_search_model for consistency
- Pass reasoning_effort through to the LLM API in the default provider implementation
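Passing reasoning_effort through might look like the sketch below. The config field names follow the README changes above; the request-building helper and its output shape are assumptions for illustration.

```typescript
type ReasoningEffort = "low" | "medium" | "high";

// Config shape mirrors the documented fields; hypothetical helper below.
interface LLMConfig {
  provider?: "openai" | "anthropic" | "gemini";
  model?: string;
  reasoning_effort?: ReasoningEffort;
  web_search_model?: string;
}

function buildRequest(cfg: LLMConfig, messages: unknown[]) {
  return {
    model: cfg.model,
    messages,
    // "medium" is the new default; callers may still opt into low or high.
    reasoning_effort: cfg.reasoning_effort ?? "medium",
  };
}
```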
No description provided.