fix(patch): cherry-pick 68afb72 to release/v0.11.1-pr-12306 [CONFLICTS] #12328

Closed
gemini-cli-robot wants to merge 1 commit into release/v0.11.1-pr-12306 from hotfix/v0.11.1/0.11.2/stable/cherry-pick-68afb72/pr-12306

Conversation

@gemini-cli-robot (Collaborator)

This PR automatically cherry-picks commit 68afb72 to patch version v0.11.1 in the stable release to create version 0.11.2.

⚠️ Merge Conflicts Detected

This cherry-pick resulted in merge conflicts that need manual resolution.

🔧 Next Steps:

  1. Review the conflicts: Check out this branch and review the conflict markers
  2. Resolve conflicts: Edit the affected files to resolve the conflicts
  3. Test the changes: Ensure the patch works correctly after resolution
  4. Update this PR: Push your conflict resolution

📋 Files with conflicts:

The commit has been created with conflict markers for easier manual resolution.

🚨 Important:

  • Do not merge this PR until conflicts are resolved
  • The automated patch release will trigger once this PR is merged

# Conflicts:
#	packages/core/src/services/chatCompressionService.ts
@gemini-cli-robot gemini-cli-robot requested a review from a team as a code owner October 31, 2025 00:23
@gemini-code-assist (Contributor)

Summary of Changes

Hello @gemini-cli-robot, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new chat compression service designed to efficiently manage chat history by summarizing older conversations to stay within model token limits. The PR is an automated cherry-pick intended to create a new patch version, but it currently has merge conflicts that must be resolved manually before the changes can be integrated.

Highlights

  • New Chat Compression Service: A new service, ChatCompressionService, has been introduced to intelligently manage and compress chat history, aiming to optimize token usage and prevent exceeding model limits.
  • Token-based Compression Logic: The service incorporates logic to determine when and how to compress chat history, using configurable thresholds such as DEFAULT_COMPRESSION_TOKEN_THRESHOLD and COMPRESSION_PRESERVE_THRESHOLD to decide which parts of the conversation to summarize and which to retain (see the sketch after this list).
  • Merge Conflicts Detected: This pull request is an automated cherry-pick of commit 68afb72 but has resulted in merge conflicts that require manual resolution before it can be merged.
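
To make the threshold logic concrete, here is a minimal sketch of how such a token-based check could work. Everything in it is illustrative: the constant values, function names, and history shape are assumptions; only the service name and the two threshold constants come from this PR.

    // Hypothetical sketch; the real ChatCompressionService in
    // packages/core/src/services/chatCompressionService.ts may differ, and the
    // threshold values below are illustrative, not the PR's actual constants.
    const DEFAULT_COMPRESSION_TOKEN_THRESHOLD = 0.7; // compress at 70% of the limit (assumed)
    const COMPRESSION_PRESERVE_THRESHOLD = 0.3; // keep the newest 30% of turns (assumed)

    function shouldCompress(tokenCount: number, tokenLimit: number): boolean {
      // Trigger compression once the history crosses the configured fraction
      // of the model's context window.
      return tokenCount >= tokenLimit * DEFAULT_COMPRESSION_TOKEN_THRESHOLD;
    }

    function splitForCompression<T>(history: T[]): { toSummarize: T[]; toKeep: T[] } {
      // Summarize the older portion and preserve the most recent turns verbatim.
      const keepCount = Math.ceil(history.length * COMPRESSION_PRESERVE_THRESHOLD);
      const cut = history.length - keepCount;
      return { toSummarize: history.slice(0, cut), toKeep: history.slice(cut) };
    }
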
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
---------------------|---------------------|------------------------------------------------------------
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the root of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist (Bot) left a comment

Code Review

This pull request introduces a chat compression service to manage context size. The implementation is well-structured, but there is a significant issue in how the new token count is calculated after compression: it uses a rough character-based estimation, which is unreliable and can lead either to failed API calls (if underestimated) or to failed compressions (if overestimated). I've suggested using the countTokens API for an accurate count, which will make the feature much more robust.

Comment on lines +180 to +186
// Estimate token count: 1 token ≈ 4 characters
const newTokenCount = Math.floor(
  fullNewHistory.reduce(
    (total, content) => total + JSON.stringify(content).length,
    0,
  ) / 4,
);

Severity: high

The current implementation uses a rough character-based estimation for the new token count (1 token ≈ 4 characters). This is unreliable and can lead to incorrect behavior. Specifically:

  1. If the token count is underestimated, the compressed history might still exceed the model's token limit, causing subsequent API calls to fail.
  2. If the token count is overestimated, a successful compression might be incorrectly flagged as a failure (inflated token count), preventing the context from being compressed.

An accurate token count is available via config.getContentGenerator().countTokens(). Use this method to compute newTokenCount before comparing it with originalTokenCount.

    const { totalTokens: newTokenCount } = await config
      .getContentGenerator()
      .countTokens({ model, contents: fullNewHistory });
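
For context, here is a hedged sketch of how that suggestion might slot into the surrounding logic. The ContentGenerator interface below is an assumed minimal shape, and the wrapper function and its failure handling are illustrative; only config.getContentGenerator().countTokens() and the variable names come from the diff.

    // Assumed minimal interface; the real ContentGenerator in gemini-cli
    // has more methods and a richer request/response shape.
    interface ContentGenerator {
      countTokens(req: {
        model: string;
        contents: unknown[];
      }): Promise<{ totalTokens?: number }>;
    }

    // `config`, `model`, `fullNewHistory`, and `originalTokenCount` mirror the
    // names in the diff; this wrapper and its boolean result are illustrative.
    async function compressionSucceeded(
      config: { getContentGenerator(): ContentGenerator },
      model: string,
      fullNewHistory: unknown[],
      originalTokenCount: number,
    ): Promise<boolean> {
      // Ask the backend for an exact count instead of estimating 4 chars/token.
      const { totalTokens: newTokenCount } = await config
        .getContentGenerator()
        .countTokens({ model, contents: fullNewHistory });

      // Treat an unknown or non-shrinking count as a failed compression so the
      // caller can fall back to the uncompressed history.
      return newTokenCount !== undefined && newTokenCount < originalTokenCount;
    }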

@mattKorwel mattKorwel closed this Oct 31, 2025
@mattKorwel mattKorwel deleted the hotfix/v0.11.1/0.11.2/stable/cherry-pick-68afb72/pr-12306 branch October 31, 2025 01:53