
fix: embed systemPrompt into prompt for CLI-based providers#480

Merged
webdevcody merged 1 commit into AutoMaker-Org:v0.11.0rc from thesobercoder:fix/cli-provider-system-prompt
Jan 14, 2026

Conversation

@thesobercoder
Contributor

Problem

CLI-based providers (OpenCode, etc.) only accept a single prompt via stdin/args and don't support separate system/user message channels the way the Claude SDK does. When systemPrompt was passed to these providers, it was silently dropped, causing:

  • BacklogPlan JSON parsing failures with OpenCode/GPT-5.2 models (missing "output ONLY JSON" formatting instruction)
  • Loss of critical formatting/schema instructions for structured outputs

This was discovered during Claude API outage when attempting to use GPT-5.2 via OpenCode as a fallback for backlog planning.

Solution

Adds an embedSystemPromptIntoPrompt() method to the CliProvider base class that:

  • Prepends systemPrompt to the user prompt before CLI execution
  • Handles both string and array prompts (vision support)
  • Handles both string systemPrompt and SystemPromptPreset objects
  • Uses standard \n\n---\n\n separator (consistent with existing codebase patterns)
  • Sets systemPrompt to undefined to prevent double-injection
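The behavior described above can be sketched roughly as follows. This is an illustrative sketch, not the repository's actual implementation: the type names (ExecuteOptions, SystemPromptPreset, PromptBlock) and the preset's append field are assumptions based on this PR discussion, and the array path here includes the separator as the code review later in this thread suggests.

```typescript
// Illustrative sketch only — not the repository's actual code.
// PromptBlock is an assumed shape for vision-style array prompts.
type PromptBlock = { type: 'text'; text: string } | { type: 'image'; data: string };

interface SystemPromptPreset {
  append?: string;
}

interface ExecuteOptions {
  prompt: string | PromptBlock[];
  systemPrompt?: string | SystemPromptPreset;
}

const SEPARATOR = '\n\n---\n\n';

function embedSystemPromptIntoPrompt(options: ExecuteOptions): ExecuteOptions {
  if (!options.systemPrompt) return options;

  // Accept either a raw string or a preset object carrying an `append` field.
  const systemText =
    typeof options.systemPrompt === 'string'
      ? options.systemPrompt
      : options.systemPrompt.append ?? '';

  if (!systemText) return { ...options, systemPrompt: undefined };

  // Array prompts (vision support): prepend a text block with the separator.
  if (Array.isArray(options.prompt)) {
    return {
      ...options,
      prompt: [{ type: 'text', text: systemText + SEPARATOR }, ...options.prompt],
      systemPrompt: undefined, // prevent double-injection downstream
    };
  }

  // String prompts: prepend the system text with the standard separator.
  return {
    ...options,
    prompt: `${systemText}${SEPARATOR}${options.prompt}`,
    systemPrompt: undefined,
  };
}
```

Clearing systemPrompt after embedding means subclasses that override executeQuery() and still consult the field see nothing to inject a second time.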

Impact

Immediately benefits:

  • OpencodeProvider - Uses base CliProvider.executeQuery(), gets fix automatically
  • ✅ BacklogPlan with OpenCode/GPT-5.2 models now works correctly

Not affected (existing manual workarounds remain in place):

  • CursorProvider - Overrides executeQuery(), continues using manual embedding in BacklogPlan, commit messages, etc.

Testing

  • Verified no double-embedding risk with existing Cursor workarounds
  • Separator pattern matches existing codebase usage
  • Handles all prompt/systemPrompt type combinations

Future Work

A follow-up PR could:

  1. Apply the same fix to CursorProvider.executeQuery() for consistency
  2. Remove manual workarounds from BacklogPlan, generate-commit-message, etc.

Related

Fixes the immediate bug reported where BacklogPlan fails with JSON parse errors when using GPT-5.2/OpenCode models.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@coderabbitai
Contributor

coderabbitai Bot commented Jan 14, 2026

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.



@gemini-code-assist
Contributor

Summary of Changes

Hello @thesobercoder, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical issue where CLI-based AI providers, such as OpenCode, were silently dropping systemPrompt instructions. This omission led to functional failures, specifically JSON parsing errors in tools like BacklogPlan, due to the absence of crucial formatting guidelines. The solution introduces a new method within the base CliProvider to intelligently embed the systemPrompt directly into the user prompt, ensuring that all necessary instructions are conveyed to the model. This change significantly improves the reliability of CLI-based providers for tasks requiring structured outputs.

Highlights

  • System Prompt Embedding: Introduced a new protected method, embedSystemPromptIntoPrompt(), within the CliProvider base class to correctly handle systemPrompt for CLI-based AI providers that do not natively support separate system messages.
  • Prompt Prepending Logic: The embedSystemPromptIntoPrompt() method prepends the systemPrompt content to the main user prompt, using a \n\n---\n\n separator, ensuring critical instructions (like JSON formatting) are always passed to the model.
  • Versatile Prompt Handling: The embedding logic supports both string and array prompt types for the user prompt, and can handle systemPrompt as either a string or a SystemPromptPreset object (embedding its append property if available).
  • Bug Fix and Impact: This change resolves BacklogPlan JSON parsing failures when using OpenCode/GPT-5.2 models by ensuring that system-level formatting instructions are correctly conveyed, which was previously silently dropped.
  • Prevention of Double Injection: After embedding, the systemPrompt field is explicitly set to undefined in the ExecuteOptions to prevent any unintended double-injection by subclasses or subsequent processing steps.



@gemini-code-assist Bot left a comment


Code Review

This pull request correctly identifies and addresses an issue where systemPrompt was being dropped by CLI-based providers. The solution to embed the system prompt into the user prompt is well-implemented within the CliProvider base class. My review includes a couple of suggestions to improve the implementation's consistency and readability. Additionally, I recommend adding unit tests for the new embedSystemPromptIntoPrompt method to ensure its logic is robust and to prevent future regressions, especially covering the different types of prompt and systemPrompt it handles.
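The recommended unit tests over the different prompt/systemPrompt type combinations might look like the following. The inline embed() function is a simplified, hypothetical stand-in for CliProvider.embedSystemPromptIntoPrompt() so the snippet is self-contained; it is not the repository's code.

```typescript
type Block = { type: 'text'; text: string };
type Opts = { prompt: string | Block[]; systemPrompt?: string | { append?: string } };
const SEP = '\n\n---\n\n';

// Simplified stand-in for CliProvider.embedSystemPromptIntoPrompt().
function embed(o: Opts): Opts {
  if (!o.systemPrompt) return o;
  const sys = typeof o.systemPrompt === 'string' ? o.systemPrompt : o.systemPrompt.append ?? '';
  if (!sys) return { ...o, systemPrompt: undefined };
  const prompt = Array.isArray(o.prompt)
    ? [{ type: 'text' as const, text: sys + SEP }, ...o.prompt]
    : sys + SEP + o.prompt;
  return { ...o, prompt, systemPrompt: undefined };
}

// String prompt + string systemPrompt: prepended with separator.
console.assert(embed({ prompt: 'u', systemPrompt: 's' }).prompt === 's\n\n---\n\nu');

// Array prompt (vision) + preset systemPrompt: new leading text block.
const arr = embed({ prompt: [{ type: 'text', text: 'u' }], systemPrompt: { append: 's' } });
console.assert(Array.isArray(arr.prompt) && (arr.prompt[0] as Block).text === 's\n\n---\n\n');

// No systemPrompt: options pass through unchanged.
console.assert(embed({ prompt: 'u' }).prompt === 'u');

// systemPrompt cleared afterwards to prevent double-injection.
console.assert(embed({ prompt: 'u', systemPrompt: 's' }).systemPrompt === undefined);
```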

Comment on lines +600 to +606
if (Array.isArray(options.prompt)) {
return {
...options,
prompt: [{ type: 'text', text: systemText }, ...options.prompt],
systemPrompt: undefined,
};
}

Severity: high

The handling for array-based prompts is inconsistent with string-based prompts. String prompts include a \n\n---\n\n separator between the system and user prompt, but this is missing when the prompt is an array. This could lead to incorrect behavior if the downstream CLI provider relies on this separator to distinguish system instructions from the user prompt, especially since the PR description explicitly mentions using this separator for consistency.

I suggest updating the logic to ensure the separator is always included, either by prepending to an existing text block or by adding a new one. This will make the behavior consistent across different prompt types.

    if (Array.isArray(options.prompt)) {
      const newPrompt = [...options.prompt];
      if (newPrompt.length > 0 && newPrompt[0].type === 'text' && newPrompt[0].text) {
        // Prepend to the first text block for cleaner output and to include the separator.
        newPrompt[0].text = `${systemText}\n\n---\n\n${newPrompt[0].text}`;
      } else {
        // Prepend a new text block if the prompt is empty or starts with a non-text block.
        newPrompt.unshift({ type: 'text', text: `${systemText}\n\n---\n\n` });
      }
      return {
        ...options,
        prompt: newPrompt,
        systemPrompt: undefined,
      };
    }

Comment on lines +580 to +585
const systemText =
typeof options.systemPrompt === 'string'
? options.systemPrompt
: options.systemPrompt.append
? options.systemPrompt.append
: '';

Severity: medium

The logic to extract systemText can be simplified by using the nullish coalescing operator (??). This improves readability and makes the code more concise.

    const systemText =
      typeof options.systemPrompt === 'string'
        ? options.systemPrompt
        : options.systemPrompt.append ?? '';

@webdevcody merged commit 7108cdd into AutoMaker-Org:v0.11.0rc on Jan 14, 2026
9 checks passed