fix(patch): cherry-pick 68afb72 to release/v0.12.0-preview.3-pr-12306 to patch version v0.12.0-preview.3 and create version 0.12.0-preview.4 #12327
Conversation
Summary of Changes
Hello @gemini-cli-robot, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request is an automated cherry-pick to update a preview release. It primarily modifies the chat compression service by lowering the default threshold at which chat history compression is activated. The change is intended to compress chat history earlier, thereby optimizing token usage and managing model context more effectively.
Code Review
This pull request updates the default chat compression threshold, making it more aggressive by changing it from 70% to 20% of the model's token limit. The constant has been appropriately renamed to DEFAULT_COMPRESSION_TOKEN_THRESHOLD to reflect this. While the implementation in chatCompressionService.ts is correct, this change introduces a critical issue by breaking an existing unit test in chatCompressionService.test.ts. The test case 'should return NOOP if under token threshold and not forced' relies on the previous default value of 0.7 and will fail with the new value of 0.2. The test suite must be updated to pass with this change.
```diff
  * token limit. If the chat history exceeds this threshold, it will be compressed.
  */
-export const COMPRESSION_TOKEN_THRESHOLD = 0.7;
+export const DEFAULT_COMPRESSION_TOKEN_THRESHOLD = 0.2;
```
Changing the default compression threshold from 0.7 to 0.2 breaks an existing unit test. The test case 'should return NOOP if under token threshold and not forced' in packages/core/src/services/chatCompressionService.test.ts is hardcoded to expect the old threshold.
Specifically, the test mocks getLastPromptTokenCount to return 600 and tokenLimit to return 1000. The test comment even states: // Threshold is 0.7 * 1000 = 700. 600 < 700, so NOOP.
With the new default of 0.2, the threshold becomes 200, and 600 is no longer less than the threshold, causing the test to fail. The associated test file needs to be updated to reflect this new default value.
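To make the arithmetic behind the failing test concrete, here is a minimal TypeScript sketch of the threshold check described above. The function name `shouldCompress` and its signature are illustrative assumptions for this example, not the actual API of `chatCompressionService.ts`; only the constant name and the 0.7 → 0.2 change come from the diff.

```typescript
// New default from the diff: compress once history exceeds 20% of the
// model's token limit (previously 70%).
const DEFAULT_COMPRESSION_TOKEN_THRESHOLD = 0.2;

// Hypothetical helper illustrating the check the service performs.
function shouldCompress(
  lastPromptTokenCount: number,
  tokenLimit: number,
  threshold: number = DEFAULT_COMPRESSION_TOKEN_THRESHOLD,
): boolean {
  // Trigger point is threshold * tokenLimit; e.g. 0.2 * 1000 = 200.
  return lastPromptTokenCount >= threshold * tokenLimit;
}

// Old fixture: 600 tokens vs a 0.7 * 1000 = 700 trigger, so NOOP held.
console.log(shouldCompress(600, 1000, 0.7)); // false: under the old threshold
// Same fixture under the new default: trigger is 200, so 600 now compresses.
console.log(shouldCompress(600, 1000)); // true: the NOOP expectation breaks
// A fixture below 200 (e.g. 150) would keep the NOOP test case valid.
console.log(shouldCompress(150, 1000)); // false
```

This shows why the test fixture, not the service logic, needs updating: any mocked prompt token count below 200 preserves the "under threshold, not forced" NOOP case under the new default.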
Size Change: -49 B (0%). Total Size: 20.2 MB.
Merged commit 66b61a1 into release/v0.12.0-preview.3-pr-12306.
This PR automatically cherry-picks commit 68afb72 to patch version v0.12.0-preview.3 in the preview release to create version 0.12.0-preview.4.