fix(patch): cherry-pick 68afb72 to release/v0.11.1-pr-12306 [CONFLICTS]#12328
Conversation
# Conflicts:
#	packages/core/src/services/chatCompressionService.ts
Summary of Changes
Hello @gemini-cli-robot, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces a new chat compression service designed to manage chat history efficiently by summarizing older conversations so the context stays within model token limits. The PR is an automated cherry-pick intended to create a new patch version, but it currently has merge conflicts that must be resolved manually before the changes can be integrated.
Highlights
Code Review
This pull request introduces a chat compression service to manage context size. The implementation is well-structured, but there is a significant issue in how the new token count is calculated after compression: it uses a rough character-based estimation, which is unreliable and can lead to failed API calls if it underestimates, or to failed compressions if it overestimates. I've suggested using the countTokens API for an accurate count, which will make the feature much more robust.
```ts
// Estimate token count: 1 token ≈ 4 characters
const newTokenCount = Math.floor(
  fullNewHistory.reduce(
    (total, content) => total + JSON.stringify(content).length,
    0,
  ) / 4,
);
```
The current implementation uses a rough character-based estimation for the new token count (1 token ≈ 4 characters). This is unreliable and can lead to incorrect behavior. Specifically:
- If the token count is underestimated, the compressed history might still exceed the model's token limit, causing subsequent API calls to fail.
- If the token count is overestimated, a successful compression might be incorrectly flagged as a failure due to an apparently inflated token count, preventing the context from ever being compressed (see the worked example below).
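To see how far off the estimate can be, consider a hypothetical short message (an illustration only, not code from this PR). The estimate measures the serialized JSON envelope, not the text the model actually tokenizes:

```ts
// Hypothetical illustration: the estimate measures the JSON envelope,
// not just the conversation text.
const content = { role: 'user', parts: [{ text: 'hi' }] };

// JSON.stringify(content) is '{"role":"user","parts":[{"text":"hi"}]}',
// which is 39 characters, so the estimate gives Math.floor(39 / 4) = 9
// "tokens" for a message whose text is at most a token or two. For short
// messages, the JSON structure, not the content, dominates the count.
console.log(Math.floor(JSON.stringify(content).length / 4)); // 9
```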
An accurate token counting method is available via `config.getContentGenerator().countTokens()`. You should use this method to get an accurate `newTokenCount` before comparing it with `originalTokenCount`.
```ts
const { totalTokens: newTokenCount } = await config
  .getContentGenerator()
  .countTokens({ model, contents: fullNewHistory });
```
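For context, here is a rough sketch of how the recount could feed the success check. The guard logic and the `originalHistory` fallback are assumptions for illustration, not the actual service code:

```ts
// Sketch: recount with the countTokens API instead of estimating characters.
const { totalTokens: newTokenCount } = await config
  .getContentGenerator()
  .countTokens({ model, contents: fullNewHistory });

// Accept the compressed history only if it is genuinely smaller; otherwise
// fall back to the uncompressed history (fallback name is assumed).
if (newTokenCount === undefined || newTokenCount >= originalTokenCount) {
  return originalHistory; // compression inflated the count: treat as failed
}
return fullNewHistory;
```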
This PR automatically cherry-picks commit 68afb72 onto the stable release branch for v0.11.1 to create patch version 0.11.2.
This cherry-pick resulted in merge conflicts that need manual resolution.
🔧 Next Steps:
Check out the release/v0.11.1-pr-12306 branch, resolve the conflict markers manually, and commit the resolution.
📋 Files with conflicts:
packages/core/src/services/chatCompressionService.ts
The commit has been created with conflict markers for easier manual resolution.
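For reviewers less familiar with this flow, the markers left in `packages/core/src/services/chatCompressionService.ts` will look roughly like the following. This is only an illustration of the marker format; the actual conflicting hunks will differ:

```ts
<<<<<<< HEAD
// ... code as it currently exists on release/v0.11.1-pr-12306 ...
=======
// ... code introduced by cherry-picked commit 68afb72 ...
>>>>>>> 68afb72
```

Resolving means keeping the intended combination of both sides, deleting the marker lines, and committing the result.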
🚨 Important: