
copilot_chat: Return true context window size#47557

Merged
benbrandt merged 3 commits into zed-industries:main from anilpai:44909
Feb 12, 2026

Conversation


@anilpai anilpai commented Jan 24, 2026

Fix incorrect context size limits for GitHub Copilot Chat models

Fixes #44909

Problem

The agent panel was displaying incorrect token limits for GitHub Copilot models. Users reported that:

  • The agent panel always showed a 128K token limit for all GitHub Copilot models, regardless of their actual context window size
  • Claude models (e.g., Claude 3.7 Sonnet, Claude Opus 4.5) were showing ~90K instead of their actual 200K context window
  • GPT-4o was showing 110K instead of its actual 128K context window
  • Users could continue using models beyond the displayed limit, which worked but was confusing

Root Cause

The max_token_count() method in copilot_chat.rs was returning max_prompt_tokens instead of max_context_window_tokens:

// Before (incorrect)
pub fn max_token_count(&self) -> u64 {
    self.capabilities.limits.max_prompt_tokens
}

GitHub's API returns three different token-related fields:

  • max_context_window_tokens: The full context window size (e.g., 200K for Claude 3.7)
  • max_prompt_tokens: GitHub's limit for prompt input (e.g., 90K for Claude 3.7)
  • max_output_tokens: Maximum output tokens (e.g., 16K)
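The distinction between the three fields can be sketched as follows. This is an illustrative model only: the field names come from the PR description above, but the struct name, method placement, and example `main` are hypothetical, not the actual code in copilot_chat.rs.

```rust
// Illustrative sketch of the token-limit fields from GitHub's API.
// Field names follow the PR description; the real struct in
// copilot_chat.rs may differ.
struct ModelLimits {
    max_context_window_tokens: u64, // full window, e.g. 200_000 for Claude 3.7
    max_prompt_tokens: u64,         // GitHub's prompt-input cap, e.g. 90_000
    max_output_tokens: u64,         // output cap, e.g. 16_000
}

impl ModelLimits {
    // What the agent panel should report: the full context window,
    // not GitHub's narrower prompt-input cap.
    fn max_token_count(&self) -> u64 {
        self.max_context_window_tokens
    }
}

fn main() {
    let claude_3_7 = ModelLimits {
        max_context_window_tokens: 200_000,
        max_prompt_tokens: 90_000,
        max_output_tokens: 16_000,
    };
    // Returning max_prompt_tokens here would have yielded 90_000.
    assert_eq!(claude_3_7.max_token_count(), 200_000);
    println!("context window: {}", claude_3_7.max_token_count());
}
```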

The max_token_count() method in the LanguageModel trait is expected to return the full context window size — this is consistent with all other providers (Anthropic returns 200K for Claude, OpenAI returns 128K for GPT-4o, etc.).

Solution

[Screenshot: 2026-01-25 at 1:07:53 AM]

Changed max_token_count() to return max_context_window_tokens:

// After (correct)
pub fn max_token_count(&self) -> u64 {
    self.capabilities.limits.max_context_window_tokens as u64
}

Impact

Model               Before     After
Claude 3.7 Sonnet   90,000     200,000
Claude Opus 4.5     90,000     200,000
GPT-4o              110,000    128,000

Testing

Added a new test test_max_token_count_returns_context_window_not_prompt_tokens that:

  1. Deserializes model JSON with distinct max_context_window_tokens and max_prompt_tokens values
  2. Verifies Claude 3.7 Sonnet returns 200,000 (context window), not 90,000 (prompt tokens)
  3. Verifies GPT-4o returns 128,000 (context window), not 110,000 (prompt tokens)
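The assertions in steps 2 and 3 can be sketched in isolation. This is a minimal sketch with hypothetical names (`Limits`, a free-standing `max_token_count`); the actual test deserializes real model JSON inside copilot_chat.rs rather than constructing limits directly.

```rust
// Hedged sketch of the new test's assertions: each model has a
// context window distinct from its prompt cap, and the reported
// count must be the window, not the cap.
struct Limits {
    max_context_window_tokens: u64,
    max_prompt_tokens: u64,
}

fn max_token_count(limits: &Limits) -> u64 {
    limits.max_context_window_tokens
}

fn main() {
    // Claude 3.7 Sonnet: 200K window vs 90K prompt cap.
    let claude = Limits { max_context_window_tokens: 200_000, max_prompt_tokens: 90_000 };
    assert_eq!(max_token_count(&claude), 200_000);

    // GPT-4o: 128K window vs 110K prompt cap.
    let gpt4o = Limits { max_context_window_tokens: 128_000, max_prompt_tokens: 110_000 };
    assert_eq!(max_token_count(&gpt4o), 128_000);

    println!("both models report the full context window");
}
```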

All existing tests continue to pass:

running 4 tests
test tests::test_unknown_vendor_resilience ... ok
test tests::test_max_token_count_returns_context_window_not_prompt_tokens ... ok
test tests::test_resilient_model_schema_deserialize ... ok
test result: ok. 4 passed; 0 failed

Release Notes:

  • copilot: Fixed incorrect context window size displayed for GitHub Copilot Chat models in the agent panel.

The `max_token_count()` method was incorrectly returning `max_prompt_tokens`
instead of `max_context_window_tokens`. This caused GitHub Copilot Chat models
to report incorrect context sizes (e.g., Claude 3.7 reported 90K instead of 200K).

This change aligns the behavior with other providers (Anthropic, OpenAI, etc.)
that return the full context window size from `max_token_count()`.
@cla-bot cla-bot bot added the cla-signed The user has signed the Contributor License Agreement label Jan 24, 2026
@zed-community-bot zed-community-bot bot added the first contribution the author's first pull request to Zed. NOTE: the label application is automated via github actions label Jan 24, 2026
@SomeoneToIgnore SomeoneToIgnore added the area:ai Improvement related to Agent Panel, Edit Prediction, Copilot, or other AI features label Jan 25, 2026

anilpai commented Feb 2, 2026

This is a minor fix. @SomeoneToIgnore :shipit:

@yeskunall yeskunall changed the title copilot_chat: Fix max_token_count to return context window size copilot_chat: Return true context window size Feb 6, 2026

anilpai commented Feb 6, 2026

@yeskunall Added Release Notes.

@benbrandt benbrandt self-assigned this Feb 12, 2026
@benbrandt benbrandt enabled auto-merge (squash) February 12, 2026 14:52
@benbrandt benbrandt merged commit 13ad175 into zed-industries:main Feb 12, 2026
28 checks passed

polRk commented Mar 7, 2026

For gpt-5.4, the wrong context size is shown: 400k instead of 1M.



Development

Successfully merging this pull request may close these issues.

Incorrectly reported token limits in agent panel on many GitHub Copilot models

4 participants