
fix(patch): cherry-pick 68afb72 to release/v0.12.0-preview.3-pr-12306 to patch version v0.12.0-preview.3 and create version 0.12.0-preview.4#12327

Merged
mattKorwel merged 1 commit into release/v0.12.0-preview.3-pr-12306 from hotfix/v0.12.0-preview.3/0.12.0-preview.4/preview/cherry-pick-68afb72/pr-12306 on Oct 30, 2025

Conversation

@gemini-cli-robot
Collaborator

This PR automatically cherry-picks commit 68afb72 to patch version v0.12.0-preview.3 in the preview release to create version 0.12.0-preview.4.

@gemini-cli-robot gemini-cli-robot requested a review from a team as a code owner October 30, 2025 23:26
@gemini-code-assist
Contributor

Summary of Changes

Hello @gemini-cli-robot, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request is an automated cherry-pick to update a preview release. It primarily modifies the chat compression service by lowering the default threshold at which chat history compression is activated. The change is intended to ensure that chat history is compressed earlier, thereby optimizing token usage and managing model context more effectively.

Highlights

  • Default Compression Threshold Adjustment: The default token compression threshold has been significantly reduced from 0.7 (70%) to 0.2 (20%) of the model's token limit, meaning chat history will be compressed much earlier by default.
  • Constant Renaming: The constant COMPRESSION_TOKEN_THRESHOLD has been renamed to DEFAULT_COMPRESSION_TOKEN_THRESHOLD to more accurately reflect its role as a fallback value.
  • Improved Token Management: This change aims to improve token management by triggering chat history compression sooner, potentially leading to more efficient use of model context.
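The highlighted change amounts to a simple threshold comparison against the model's token limit. The sketch below illustrates that logic in TypeScript; the constant name comes from this PR, but `shouldCompress` and its parameters are illustrative stand-ins, not the actual service API:

```typescript
// Default fraction of the model's token limit at which chat history
// compression kicks in (lowered from 0.7 to 0.2 in this PR).
export const DEFAULT_COMPRESSION_TOKEN_THRESHOLD = 0.2;

// Hypothetical helper: decide whether the chat history should be compressed.
export function shouldCompress(
  promptTokenCount: number,
  tokenLimit: number,
  threshold: number = DEFAULT_COMPRESSION_TOKEN_THRESHOLD,
  force: boolean = false,
): boolean {
  // Compress when the history exceeds threshold * tokenLimit, or when forced.
  return force || promptTokenCount > threshold * tokenLimit;
}
```

With the new default, a 600-token history against a 1000-token limit now triggers compression (600 > 200), whereas under the old 0.7 default it would not have (600 < 700).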
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@gemini-code-assist gemini-code-assist Bot left a comment
Code Review

This pull request updates the default chat compression threshold, making it more aggressive by changing it from 70% to 20% of the model's token limit. The constant has been appropriately renamed to DEFAULT_COMPRESSION_TOKEN_THRESHOLD to reflect this. While the implementation in chatCompressionService.ts is correct, this change introduces a critical issue by breaking an existing unit test in chatCompressionService.test.ts. The test case 'should return NOOP if under token threshold and not forced' relies on the previous default value of 0.7 and will fail with the new value of 0.2. The test suite must be updated to pass with this change.

```diff
 * token limit. If the chat history exceeds this threshold, it will be compressed.
 */
-export const COMPRESSION_TOKEN_THRESHOLD = 0.7;
+export const DEFAULT_COMPRESSION_TOKEN_THRESHOLD = 0.2;
```
critical

Changing the default compression threshold from 0.7 to 0.2 breaks an existing unit test. The test case 'should return NOOP if under token threshold and not forced' in packages/core/src/services/chatCompressionService.test.ts is hardcoded to expect the old threshold.

Specifically, the test mocks getLastPromptTokenCount to return 600 and tokenLimit to return 1000. The test comment even states: // Threshold is 0.7 * 1000 = 700. 600 < 700, so NOOP.

With the new default of 0.2, the threshold becomes 200, and 600 is no longer less than the threshold, causing the test to fail. The associated test file needs to be updated to reflect this new default value.
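The arithmetic the reviewer describes can be checked directly. The values below mirror the mocked test described in the comment above (getLastPromptTokenCount returning 600, tokenLimit returning 1000); the variable names are illustrative:

```typescript
// Reproduce the reviewer's arithmetic for the failing test case.
const tokenLimit = 1000;   // mocked tokenLimit
const promptTokens = 600;  // mocked getLastPromptTokenCount

const oldThreshold = 0.7 * tokenLimit; // 700: 600 < 700, so the test expected NOOP
const newThreshold = 0.2 * tokenLimit; // 200: 600 > 200, compression now triggers

const noopUnderOldDefault = promptTokens < oldThreshold; // true under the old default
const noopUnderNewDefault = promptTokens < newThreshold; // false: the assertion breaks
```

So the test's hardcoded expectation of NOOP only holds under the old 0.7 default; the test file must be updated (or given explicit fixture values) to pass with the 0.2 default.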

@github-actions

Size Change: -49 B (0%)

Total Size: 20.2 MB

Filename Size Change
./bundle/gemini.js 20.2 MB -49 B (0%)
./bundle/sandbox-macos-permissive-closed.sb 1.03 kB 0 B
./bundle/sandbox-macos-permissive-open.sb 890 B 0 B
./bundle/sandbox-macos-permissive-proxied.sb 1.31 kB 0 B
./bundle/sandbox-macos-restrictive-closed.sb 3.29 kB 0 B
./bundle/sandbox-macos-restrictive-open.sb 3.36 kB 0 B
./bundle/sandbox-macos-restrictive-proxied.sb 3.56 kB 0 B

compressed-size-action

@mattKorwel mattKorwel enabled auto-merge (squash) October 30, 2025 23:30
@mattKorwel mattKorwel merged commit 66b61a1 into release/v0.12.0-preview.3-pr-12306 on Oct 30, 2025
21 checks passed
@mattKorwel mattKorwel deleted the hotfix/v0.12.0-preview.3/0.12.0-preview.4/preview/cherry-pick-68afb72/pr-12306 branch October 30, 2025 23:35