fix: embed systemPrompt into prompt for CLI-based providers #480
CLI-based providers (OpenCode, etc.) only accept a single prompt via stdin/args and don't support separate system/user message channels like the Claude SDK does. When `systemPrompt` is passed to these providers, it was silently dropped, causing:

- `BacklogPlan` JSON parsing failures with OpenCode/GPT-5.2 (missing the "output ONLY JSON" formatting instruction)
- Loss of critical formatting/schema instructions for structured outputs

This fix adds an `embedSystemPromptIntoPrompt()` method to the `CliProvider` base class that:

- Prepends `systemPrompt` to the user prompt before CLI execution
- Handles both string and array prompts (vision support)
- Handles both string `systemPrompt` and `SystemPromptPreset` objects
- Uses the standard `\n\n---\n\n` separator (consistent with the codebase)
- Sets `systemPrompt` to `undefined` to prevent double-injection

Benefits `OpencodeProvider` immediately (it uses the base `executeQuery`). `CursorProvider` still uses manual workarounds (it overrides `executeQuery`). Fixes the immediate BacklogPlan + OpenCode bug while maintaining backward compatibility with the existing Cursor workarounds.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
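Conceptually, the embedding step for string prompts can be sketched as follows. The `QueryOptions` and `SystemPromptPreset` shapes below are assumptions inferred from this description, not the repository's real types:

```typescript
// Sketch only: field names and types are assumed from the PR description,
// not taken from the actual codebase.
type SystemPromptPreset = { append?: string };

interface QueryOptions {
  prompt: string;
  systemPrompt?: string | SystemPromptPreset;
}

const SEPARATOR = "\n\n---\n\n";

function embedSystemPromptIntoPrompt(options: QueryOptions): QueryOptions {
  if (options.systemPrompt === undefined) return options;

  // Resolve both supported shapes to plain text.
  const systemText =
    typeof options.systemPrompt === "string"
      ? options.systemPrompt
      : options.systemPrompt.append ?? "";

  return {
    ...options,
    // Prepend the system text using the standard separator.
    prompt: `${systemText}${SEPARATOR}${options.prompt}`,
    // Clear systemPrompt so downstream code cannot inject it twice.
    systemPrompt: undefined,
  };
}
```

With this shape, a CLI provider that only accepts a single stdin/args prompt still carries the formatting instructions (e.g. "output ONLY JSON") that would otherwise be dropped.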
Important: Review skipped. Auto reviews are disabled on base/target branches other than the default branch. Please check the settings in the CodeRabbit UI.
Summary of Changes

Hello @thesobercoder, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request addresses a critical issue where CLI-based AI providers, such as OpenCode, were silently dropping the `systemPrompt`.
Code Review
This pull request correctly identifies and addresses an issue where systemPrompt was being dropped by CLI-based providers. The solution to embed the system prompt into the user prompt is well-implemented within the CliProvider base class. My review includes a couple of suggestions to improve the implementation's consistency and readability. Additionally, I recommend adding unit tests for the new embedSystemPromptIntoPrompt method to ensure its logic is robust and to prevent future regressions, especially covering the different types of prompt and systemPrompt it handles.
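Such a unit test could look roughly like the sketch below. The `embed` helper here is an illustrative stand-in with assumed shapes, not the project's actual `embedSystemPromptIntoPrompt` signature:

```typescript
import { strictEqual, deepStrictEqual } from "node:assert";

// Illustrative stand-in with assumed types; the real method lives on CliProvider.
type Part = { type: "text"; text: string };
type Prompt = string | Part[];
const SEP = "\n\n---\n\n";

function embed(prompt: Prompt, systemPrompt?: string | { append?: string }): Prompt {
  if (systemPrompt === undefined) return prompt;
  const sys = typeof systemPrompt === "string" ? systemPrompt : systemPrompt.append ?? "";
  if (Array.isArray(prompt)) {
    // Array (vision) prompts gain a leading text part carrying the system text.
    return [{ type: "text", text: sys + SEP }, ...prompt];
  }
  return sys + SEP + prompt;
}

// String prompt: separator sits between system and user text.
strictEqual(embed("hi", "be terse"), "be terse" + SEP + "hi");
// Preset object: resolves through its `append` field.
strictEqual(embed("hi", { append: "be terse" }), "be terse" + SEP + "hi");
// Array prompt: a new leading text part is prepended.
deepStrictEqual(embed([{ type: "text", text: "hi" }], "be terse"), [
  { type: "text", text: "be terse" + SEP },
  { type: "text", text: "hi" },
]);
// No system prompt: input passes through untouched.
strictEqual(embed("hi"), "hi");
```

Porting these cases against the real method would cover both prompt types and both `systemPrompt` shapes, which is the regression surface the review calls out.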
```typescript
if (Array.isArray(options.prompt)) {
  return {
    ...options,
    prompt: [{ type: 'text', text: systemText }, ...options.prompt],
    systemPrompt: undefined,
  };
}
```
The handling for array-based prompts is inconsistent with string-based prompts. String prompts include a \n\n---\n\n separator between the system and user prompt, but this is missing when the prompt is an array. This could lead to incorrect behavior if the downstream CLI provider relies on this separator to distinguish system instructions from the user prompt, especially since the PR description explicitly mentions using this separator for consistency.
I suggest updating the logic to ensure the separator is always included, either by prepending to an existing text block or by adding a new one. This will make the behavior consistent across different prompt types.
```typescript
if (Array.isArray(options.prompt)) {
  const newPrompt = [...options.prompt];
  if (newPrompt.length > 0 && newPrompt[0].type === 'text' && newPrompt[0].text) {
    // Prepend to the first text block for cleaner output and to include the separator.
    newPrompt[0].text = `${systemText}\n\n---\n\n${newPrompt[0].text}`;
  } else {
    // Prepend a new text block if the prompt is empty or starts with a non-text block.
    newPrompt.unshift({ type: 'text', text: `${systemText}\n\n---\n\n` });
  }
  return {
    ...options,
    prompt: newPrompt,
    systemPrompt: undefined,
  };
}
```

The `systemText` referenced above is computed in the diff from the two supported `systemPrompt` shapes:

```typescript
const systemText =
  typeof options.systemPrompt === 'string'
    ? options.systemPrompt
    : options.systemPrompt.append
      ? options.systemPrompt.append
      : '';
```
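For reference, that ternary resolves the two supported shapes like so (assuming `SystemPromptPreset` carries an optional `append` string, as the diff suggests):

```typescript
// Mirrors the ternary in the diff above; the preset shape is an assumption.
function resolveSystemText(systemPrompt: string | { append?: string }): string {
  return typeof systemPrompt === "string"
    ? systemPrompt
    : systemPrompt.append
      ? systemPrompt.append
      : "";
}

// resolveSystemText("Output ONLY JSON") → "Output ONLY JSON"
// resolveSystemText({ append: "extra rules" }) → "extra rules"
// resolveSystemText({}) → "" (nothing to embed)
```

Note that a preset with a missing or empty `append` collapses to an empty string, so the caller still prepends only the separator in that edge case.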
Problem
CLI-based providers (OpenCode, etc.) only accept a single prompt via stdin/args and don't support separate system/user message channels like the Claude SDK does. When `systemPrompt` is passed to these providers, it was silently dropped, causing:

- `BacklogPlan` JSON parsing failures with OpenCode/GPT-5.2 (missing the "output ONLY JSON" formatting instruction)
- Loss of critical formatting/schema instructions for structured outputs

This was discovered during a Claude API outage when attempting to use GPT-5.2 via OpenCode as a fallback for backlog planning.
Solution
Adds an `embedSystemPromptIntoPrompt()` method to the `CliProvider` base class that:

- Prepends `systemPrompt` to the user prompt before CLI execution
- Handles both string and array prompts (vision support)
- Handles both string `systemPrompt` and `SystemPromptPreset` objects
- Uses the standard `\n\n---\n\n` separator (consistent with existing codebase patterns)
- Sets `systemPrompt` to `undefined` to prevent double-injection

Impact
Immediately benefits:

- `OpencodeProvider` – uses the base `CliProvider.executeQuery()`, so it gets the fix automatically

Not affected (has existing manual workarounds):

- `CursorProvider` – overrides `executeQuery()` and continues using manual embedding in BacklogPlan, commit messages, etc.

Testing
Future Work
A follow-up PR could:

- Update `CursorProvider.executeQuery()` for consistency

Related
Fixes the immediate bug reported where `BacklogPlan` fails with JSON parse errors when using GPT-5.2/OpenCode models.