
feat: implement simple query service and enhance provider abstraction #454

Merged
webdevcody merged 4 commits into v0.11.0rc from abstract-anthropic-sdk
Jan 13, 2026
Conversation

@webdevcody
Collaborator

  • Introduced a new Simple Query Service to streamline basic AI queries, allowing for structured JSON outputs.
  • Updated existing routes to utilize the new service, replacing direct SDK calls with a unified interface for querying.
  • Enhanced provider handling in various routes, including generate-spec, generate-features-from-spec, and validate-issue, to support both Claude and Cursor models seamlessly.
  • Added structured output support for improved response handling and error management across the application.

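As a sketch of how a caller might consume this unified interface (the `SimpleQueryResult` shape is taken from the diff in this PR; the `parseSpec` helper and sample values are hypothetical):

```typescript
// Result shape returned by the new service (from the diff in this PR).
interface SimpleQueryResult {
  text: string;
  structured_output?: Record<string, unknown>;
}

// Hypothetical helper: prefer the provider's structured JSON output,
// falling back to parsing the raw text as JSON.
function parseSpec(result: SimpleQueryResult): Record<string, unknown> {
  if (result.structured_output) {
    return result.structured_output;
  }
  return JSON.parse(result.text) as Record<string, unknown>;
}

// Sample result where the provider supplied structured output.
const withStructured: SimpleQueryResult = {
  text: '{"title":"Add auth"}',
  structured_output: { title: 'Add auth' },
};
console.log(parseSpec(withStructured).title); // "Add auth"

// Sample result with only a text body to fall back on.
const textOnly: SimpleQueryResult = { text: '{"title":"Fix bug"}' };
console.log(parseSpec(textOnly).title); // "Fix bug"
```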
@coderabbitai
Contributor

coderabbitai Bot commented Jan 12, 2026

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.



@gemini-code-assist
Contributor

Summary of Changes

Hello @webdevcody, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant architectural improvement by centralizing AI query operations into a new SimpleQueryService. This service acts as a robust abstraction layer, simplifying how different parts of the application interact with various AI models and providers. The primary impact is a cleaner, more maintainable codebase, with consistent handling of AI responses, including structured JSON outputs, across all integrated features.

Highlights

  • New Simple Query Service: Introduced a SimpleQueryService with simpleQuery and streamingQuery functions to provide a unified, abstracted interface for basic AI queries, eliminating direct SDK calls in route files and centralizing provider selection and response handling.
  • Structured JSON Output Support: Enhanced the query mechanism to support structured JSON outputs, allowing for more reliable parsing of AI responses, especially for tasks like spec generation and issue validation.
  • Consolidated AI Query Logic: Refactored numerous routes (generate-spec, generate-features-from-spec, describe-file, describe-image, enhance, generate-title, validate-issue, generate-suggestions) to utilize the new SimpleQueryService, significantly reducing boilerplate and improving consistency across the application.
  • Improved Provider Abstraction: The new service seamlessly handles routing to appropriate AI providers (e.g., Claude, Cursor) based on the model, abstracting away provider-specific implementation details and ensuring consistent behavior for both streaming and non-streaming queries.
  • Removed Redundant Code: Eliminated duplicate extractTextFromStream functions and provider-specific logic from various route handlers, as these concerns are now managed by the SimpleQueryService.
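The provider routing described in these highlights might look roughly like the following (a minimal sketch; the real `ProviderFactory.getProviderForModel` in the diff below is not reproduced here, and the model-name prefix check is an assumption):

```typescript
type ProviderName = 'claude' | 'cursor';

// Hypothetical routing rule: infer the provider from the model name.
// The actual factory logic in this PR may use different criteria.
function getProviderForModel(model: string): ProviderName {
  return model.startsWith('claude') ? 'claude' : 'cursor';
}

console.log(getProviderForModel('claude-sonnet-4')); // "claude"
console.log(getProviderForModel('some-cursor-model')); // "cursor"
```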


@gemini-code-assist Bot left a comment


Code Review

This pull request introduces a significant improvement by creating a simple-query-service to abstract AI provider interactions. The refactoring of various routes to use this new service greatly simplifies the codebase, removing provider-specific logic (like handling Claude vs. Cursor differently) and centralizing the query execution flow. This enhances maintainability and makes it easier to support new providers in the future. The changes are well-executed and represent a solid architectural enhancement. I've included one suggestion to further improve the new service by reducing some code duplication.

Comment on lines +97 to +240
export async function simpleQuery(options: SimpleQueryOptions): Promise<SimpleQueryResult> {
  const model = options.model || DEFAULT_MODEL;
  const provider = ProviderFactory.getProviderForModel(model);

  let responseText = '';
  let structuredOutput: Record<string, unknown> | undefined;

  // Build provider options
  const providerOptions = {
    prompt: options.prompt,
    model: model,
    cwd: options.cwd,
    systemPrompt: options.systemPrompt,
    maxTurns: options.maxTurns ?? 1,
    allowedTools: options.allowedTools ?? [],
    abortController: options.abortController,
    outputFormat: options.outputFormat,
    thinkingLevel: options.thinkingLevel,
    readOnly: options.readOnly,
    settingSources: options.settingSources,
  };

  for await (const msg of provider.executeQuery(providerOptions)) {
    // Handle error messages
    if (msg.type === 'error') {
      const errorMessage = msg.error || 'Provider returned an error';
      throw new Error(errorMessage);
    }

    // Extract text from assistant messages
    if (msg.type === 'assistant' && msg.message?.content) {
      for (const block of msg.message.content) {
        if (block.type === 'text' && block.text) {
          responseText += block.text;
        }
      }
    }

    // Handle result messages
    if (msg.type === 'result') {
      if (msg.subtype === 'success') {
        // Use result text if longer than accumulated text
        if (msg.result && msg.result.length > responseText.length) {
          responseText = msg.result;
        }
        // Capture structured output if present
        if (msg.structured_output) {
          structuredOutput = msg.structured_output;
        }
      } else if (msg.subtype === 'error_max_turns') {
        // Max turns reached - return what we have
        break;
      } else if (msg.subtype === 'error_max_structured_output_retries') {
        throw new Error('Could not produce valid structured output after retries');
      }
    }
  }

  return { text: responseText, structured_output: structuredOutput };
}

/**
 * Execute a streaming query with event callbacks
 *
 * Use this for queries where you need real-time progress updates,
 * such as when displaying streaming output to a user.
 *
 * @example
 * ```typescript
 * const result = await streamingQuery({
 *   prompt: 'Analyze this project and suggest improvements',
 *   cwd: '/path/to/project',
 *   maxTurns: 250,
 *   allowedTools: ['Read', 'Glob', 'Grep'],
 *   onText: (text) => emitProgress(text),
 *   onToolUse: (tool, input) => emitToolUse(tool, input),
 * });
 * ```
 */
export async function streamingQuery(options: StreamingQueryOptions): Promise<SimpleQueryResult> {
  const model = options.model || DEFAULT_MODEL;
  const provider = ProviderFactory.getProviderForModel(model);

  let responseText = '';
  let structuredOutput: Record<string, unknown> | undefined;

  // Build provider options
  const providerOptions = {
    prompt: options.prompt,
    model: model,
    cwd: options.cwd,
    systemPrompt: options.systemPrompt,
    maxTurns: options.maxTurns ?? 250,
    allowedTools: options.allowedTools ?? ['Read', 'Glob', 'Grep'],
    abortController: options.abortController,
    outputFormat: options.outputFormat,
    thinkingLevel: options.thinkingLevel,
    readOnly: options.readOnly,
    settingSources: options.settingSources,
  };

  for await (const msg of provider.executeQuery(providerOptions)) {
    // Handle error messages
    if (msg.type === 'error') {
      const errorMessage = msg.error || 'Provider returned an error';
      throw new Error(errorMessage);
    }

    // Extract content from assistant messages
    if (msg.type === 'assistant' && msg.message?.content) {
      for (const block of msg.message.content) {
        if (block.type === 'text' && block.text) {
          responseText += block.text;
          options.onText?.(block.text);
        } else if (block.type === 'tool_use' && block.name) {
          options.onToolUse?.(block.name, block.input);
        } else if (block.type === 'thinking' && block.thinking) {
          options.onThinking?.(block.thinking);
        }
      }
    }

    // Handle result messages
    if (msg.type === 'result') {
      if (msg.subtype === 'success') {
        // Use result text if longer than accumulated text
        if (msg.result && msg.result.length > responseText.length) {
          responseText = msg.result;
        }
        // Capture structured output if present
        if (msg.structured_output) {
          structuredOutput = msg.structured_output;
        }
      } else if (msg.subtype === 'error_max_turns') {
        // Max turns reached - return what we have
        break;
      } else if (msg.subtype === 'error_max_structured_output_retries') {
        throw new Error('Could not produce valid structured output after retries');
      }
    }
  }

  return { text: responseText, structured_output: structuredOutput };
}
Severity: medium

The simpleQuery and streamingQuery functions share a significant amount of logic, including provider/model resolution, options construction, and stream processing. This duplication can be reduced to improve maintainability.

I suggest creating a single internal _executeQuery function that both public functions can call. This new function would handle the core execution logic, while simpleQuery and streamingQuery would act as wrappers that provide the appropriate default values for maxTurns and allowedTools, reflecting their distinct use cases (simple text generation vs. agentic tasks).

// Internal function to handle query execution and streaming, reducing duplication.
async function _executeQuery(
  options: StreamingQueryOptions,
  defaults: { maxTurns: number; allowedTools: string[] }
): Promise<SimpleQueryResult> {
  const model = options.model || DEFAULT_MODEL;
  const provider = ProviderFactory.getProviderForModel(model);

  let responseText = '';
  let structuredOutput: Record<string, unknown> | undefined;

  const providerOptions = {
    prompt: options.prompt,
    model: model,
    cwd: options.cwd,
    systemPrompt: options.systemPrompt,
    maxTurns: options.maxTurns ?? defaults.maxTurns,
    allowedTools: options.allowedTools ?? defaults.allowedTools,
    abortController: options.abortController,
    outputFormat: options.outputFormat,
    thinkingLevel: options.thinkingLevel,
    readOnly: options.readOnly,
    settingSources: options.settingSources,
  };

  for await (const msg of provider.executeQuery(providerOptions)) {
    // Handle error messages
    if (msg.type === 'error') {
      const errorMessage = msg.error || 'Provider returned an error';
      throw new Error(errorMessage);
    }

    // Extract content from assistant messages
    if (msg.type === 'assistant' && msg.message?.content) {
      for (const block of msg.message.content) {
        if (block.type === 'text' && block.text) {
          responseText += block.text;
          options.onText?.(block.text);
        } else if (block.type === 'tool_use' && block.name) {
          options.onToolUse?.(block.name, block.input);
        } else if (block.type === 'thinking' && block.thinking) {
          options.onThinking?.(block.thinking);
        }
      }
    }

    // Handle result messages
    if (msg.type === 'result') {
      if (msg.subtype === 'success') {
        // Use result text if longer than accumulated text
        if (msg.result && msg.result.length > responseText.length) {
          responseText = msg.result;
        }
        // Capture structured output if present
        if (msg.structured_output) {
          structuredOutput = msg.structured_output;
        }
      } else if (msg.subtype === 'error_max_turns') {
        // Max turns reached - return what we have
        break;
      } else if (msg.subtype === 'error_max_structured_output_retries') {
        throw new Error('Could not produce valid structured output after retries');
      }
    }
  }

  return { text: responseText, structured_output: structuredOutput };
}

/**
 * Execute a simple query and return the text result
 *
 * Use this for simple, non-streaming queries where you just need
 * the final text response. For more complex use cases with progress
 * callbacks, use streamingQuery() instead.
 *
 * @example
 * ```typescript
 * const result = await simpleQuery({
 *   prompt: 'Generate a title for: user authentication',
 *   cwd: process.cwd(),
 *   systemPrompt: 'You are a title generator...',
 *   maxTurns: 1,
 *   allowedTools: [],
 * });
 * console.log(result.text); // "Add user authentication"
 * ```
 */
export async function simpleQuery(options: SimpleQueryOptions): Promise<SimpleQueryResult> {
  return _executeQuery(options, { maxTurns: 1, allowedTools: [] });
}

/**
 * Execute a streaming query with event callbacks
 *
 * Use this for queries where you need real-time progress updates,
 * such as when displaying streaming output to a user.
 *
 * @example
 * ```typescript
 * const result = await streamingQuery({
 *   prompt: 'Analyze this project and suggest improvements',
 *   cwd: '/path/to/project',
 *   maxTurns: 250,
 *   allowedTools: ['Read', 'Glob', 'Grep'],
 *   onText: (text) => emitProgress(text),
 *   onToolUse: (tool, input) => emitToolUse(tool, input),
 * });
 * ```
 */
export async function streamingQuery(options: StreamingQueryOptions): Promise<SimpleQueryResult> {
  return _executeQuery(options, { maxTurns: 250, allowedTools: ['Read', 'Glob', 'Grep'] });
}

- Changed the resolved URL for the @electron/node-gyp module from SSH to HTTPS for improved accessibility and compatibility.
- Modified tests to navigate directly to the dashboard instead of the welcome view, ensuring a smoother project selection process.
- Updated project name verification to check against the sidebar button instead of multiple elements.
- Added logic to expand the sidebar if collapsed, improving visibility for project names during tests.
- Adjusted test assertions to reflect changes in the UI structure, including the introduction of the dashboard view.
- Replaced dynamic import of the query function with a call to the new Simple Query Service for improved clarity and maintainability.
- Streamlined the response handling by directly utilizing the result from the simple query, enhancing code readability.
- Updated the prompt and options structure to align with the new service's requirements, ensuring consistent behavior in learning extraction.
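The refactor these commits describe, replacing a dynamic SDK import with a direct call into the service, might look roughly like this (the route logic and the `simpleQuery` stub below are hypothetical; the real service resolves a provider and consumes its message stream, as shown in the diff above):

```typescript
interface SimpleQueryResult {
  text: string;
}

// Stub standing in for the new Simple Query Service; the real
// implementation selects a provider and streams its messages.
async function simpleQuery(options: { prompt: string; maxTurns?: number }): Promise<SimpleQueryResult> {
  return { text: `echo: ${options.prompt}` };
}

// After the refactor, a route handler calls the service directly
// instead of dynamically importing the SDK's query function.
async function extractLearnings(notes: string): Promise<string> {
  const result = await simpleQuery({
    prompt: `Extract key learnings from: ${notes}`,
    maxTurns: 1,
  });
  return result.text;
}

extractLearnings('retries fixed the flaky tests').then((text) => console.log(text));
```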
@webdevcody webdevcody merged commit 26da99e into v0.11.0rc Jan 13, 2026
6 checks passed
@webdevcody webdevcody deleted the abstract-anthropic-sdk branch January 13, 2026 04:40