Fix tool calling detection and support for Ollama models#3566

Merged
lramos15 merged 2 commits into microsoft:main from lsby:fix-ollama-models-agent-tool-calling
Feb 11, 2026

Conversation

@lsby
Contributor

@lsby lsby commented Feb 9, 2026

Summary

This PR fixes issues where Ollama models could not be used in Agent mode because tool calling capabilities were not correctly identified or propagated.

Environment

Ollama

  • Version: 0.15.5
  • Model: mistral-small3.2:latest (supports vision and tool calling)

VS Code

  • Version: 1.110.0-insider (user setup)
  • Commit: c308cc9f87e1427a73c5e32c81a1cfe9b1b203d1

Changes

  • Pass knownModels instead of models to byokKnownModelsToAPIInfo
  • Initialize _knownModels in OllamaLMProvider
  • Ensure supportsToolCalls is correctly derived for Ollama models

Ollama models cannot be used in Agent mode

Issue 1: Ollama models are not correctly recognized as supporting tool calls

Ollama models are not correctly identified as supporting the "tools" capability.

[screenshot: model capabilities listed without tool calling]

This happens because getAllModels passes models to byokKnownModelsToAPIInfo.

However, based on the expected input of byokKnownModelsToAPIInfo, knownModels should be passed instead. After applying this change, the model information is correctly populated.
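The mismatch can be sketched in TypeScript. Everything below is hypothetical except the helper's name and the capability fields, which appear in the bundled code: the stand-in for byokKnownModelsToAPIInfo expects a record keyed by model id, so passing the raw array from /api/tags loses the capability information.

```typescript
// Hypothetical shapes, inferred from the bundled code: the helper expects a
// record keyed by model id ("knownModels"), not the raw /api/tags array.
interface KnownModelInfo {
  maxInputTokens: number;
  maxOutputTokens: number;
  name: string;
  toolCalling: boolean;
  vision: boolean;
}
type KnownModels = Record<string, KnownModelInfo>;

// Stand-in for byokKnownModelsToAPIInfo (the real signature is not shown in
// the PR): it reads capabilities off the record's values.
function byokKnownModelsToAPIInfo(providerName: string, knownModels: KnownModels) {
  return Object.entries(knownModels).map(([id, info]) => ({
    id,
    provider: providerName,
    name: info.name,
    toolCalling: info.toolCalling,
    vision: info.vision,
  }));
}

// Example record mirroring the capabilities reported for
// mistral-small3.2:latest in this PR's environment.
const knownModels: KnownModels = {
  "mistral-small3.2:latest": {
    maxInputTokens: 4096,
    maxOutputTokens: 4096,
    name: "mistral-small3.2",
    toolCalling: true,
    vision: true,
  },
};

const apiInfo = byokKnownModelsToAPIInfo("Ollama", knownModels);
console.log(apiInfo[0].toolCalling); // true: the capability survives the mapping
```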

[screenshot: model information correctly populated]

Issue 2: The tools field is not sent to the Ollama model

When invoking an Ollama model, the tools section is not included in the request.

This happens because the following code checks supportsToolCalls. If it is false, the tools field is removed.

The value of supportsToolCalls ultimately comes from this code.
knownModelInfo is passed from an upper layer, which originates from this._knownModels.

However, _knownModels is never initialized in OllamaLMProvider. As a result, it falls back to the base class AbstractOpenAICompatibleLMProvider.

Meanwhile, OllamaLMProvider passes undefined to super, which causes this._knownModels to remain undefined.

_knownModels should be initialized during getAllModels execution to ensure tool capability is propagated correctly.
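The chain above can be sketched as follows. The class shapes are hypothetical reconstructions of the description; only the names OllamaLMProvider, AbstractOpenAICompatibleLMProvider, _knownModels, supportsToolCalls, and getAllModels come from the PR. With _knownModels left undefined, the capability lookup falls back to false and the tools field is stripped from the request.

```typescript
// Minimal sketch of the inheritance bug described above (class bodies are
// assumptions, not the real implementation).
type KnownModels = Record<string, { toolCalling: boolean }>;

class AbstractOpenAICompatibleLMProvider {
  protected _knownModels: KnownModels | undefined;
  constructor(knownModels: KnownModels | undefined) {
    this._knownModels = knownModels;
  }
  supportsToolCalls(modelId: string): boolean {
    // With _knownModels undefined, every model appears tool-incapable.
    return this._knownModels?.[modelId]?.toolCalling ?? false;
  }
}

class OllamaLMProvider extends AbstractOpenAICompatibleLMProvider {
  constructor() {
    super(undefined); // the bug: _knownModels stays undefined
  }
  getAllModels(): void {
    // The fix: populate _knownModels while enumerating models.
    this._knownModels = { "mistral-small3.2:latest": { toolCalling: true } };
  }
}

const provider = new OllamaLMProvider();
console.log(provider.supportsToolCalls("mistral-small3.2:latest")); // false: tools would be stripped
provider.getAllModels();
console.log(provider.supportsToolCalls("mistral-small3.2:latest")); // true: tools are sent
```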

@rajasrijan

Facing the same issue. Is there a workaround I can use until the fix is propagated to a release?

@lsby
Contributor Author

lsby commented Feb 10, 2026

@rajasrijan

There is a way to temporarily fix this issue:

  • Locate your VSCode extensions directory. For the official VSCode version, it is located at: C:\Users\<your username>\.vscode\extensions
  • Back up the file C:\Users\<your username>\.vscode\extensions\github.copilot-chat-0.37.2\dist\extension.js
  • Open this file
  • Search for "getAllModels", find the last result, and locate this section:
```js
async getAllModels(t,r,a){if(!a)return[];let o=a.url;try{await this._checkOllamaVersion(o);let c=(await(await this._fetcherService.fetch(`${o}/api/tags`,{method:"GET"})).json()).models,l={};for(let d of c){let u=this._modelCache.get(`${o}/${d.model}`);u||(u=await this._getOllamaModelInfo(o,d.model),this._modelCache.set(`${o}/${d.model}`,u)),l[d.model]={maxInputTokens:u.capabilities.limits?.max_prompt_tokens??4096,maxOutputTokens:u.capabilities.limits?.max_output_tokens??4096,name:u.name,toolCalling:!!u.capabilities.supports.tool_calls,vision:!!u.capabilities.supports.vision}}return kP(this._name,c).map(d=>({...d,url:o}))}catch(s){throw s instanceof Error&&s.message.includes("Ollama server version")?s:new Error('Failed to fetch models from Ollama. Please ensure Ollama is running. If ollama is on another host, please configure the `"github.copilot.chat.byok.ollamaEndpoint"` setting.')}}
```
  • Replace it with:
```js
async getAllModels(t,r,a){if(!a)return[];let o=a.url;try{await this._checkOllamaVersion(o);let c=(await(await this._fetcherService.fetch(`${o}/api/tags`,{method:"GET"})).json()).models;this._knownModels={};for(let d of c){let u=this._modelCache.get(`${o}/${d.model}`);u||(u=await this._getOllamaModelInfo(o,d.model),this._modelCache.set(`${o}/${d.model}`,u)),this._knownModels[u.id]={maxInputTokens:u.capabilities.limits?.max_prompt_tokens??4096,maxOutputTokens:u.capabilities.limits?.max_output_tokens??4096,name:u.name,toolCalling:!!u.capabilities.supports.tool_calls,vision:!!u.capabilities.supports.vision}};return kP(this._name,this._knownModels).map(d=>({...d,url:o}))}catch(s){throw s instanceof Error&&s.message.includes("Ollama server version")?s:new Error('Failed to fetch models from Ollama. Please ensure Ollama is running. If ollama is on another host, please configure the `"github.copilot.chat.byok.ollamaEndpoint"` setting.')}}
```

Differences in the modified section: the local `l` object is replaced by `this._knownModels`, entries are keyed by `u.id` instead of `d.model`, and `this._knownModels` (rather than the raw `c` array) is passed to the final mapping call.

  • Save the file and restart VS Code.

@kycutler kycutler assigned lramos15 and unassigned kycutler Feb 10, 2026
@vs-code-engineering vs-code-engineering bot added this to the February 2026 milestone Feb 10, 2026
@rajasrijan

> There is a way to temporarily fix this issue: […]

That didn't work. I can see the models now, with their capabilities, but I can't use them anymore.

Sorry, your request failed. Please try again.

Copilot Request id: a1c18984-5dd7-450b-a333-e5a85639f7ad

Reason: 404 page not found: Error: 404 page not found at x3._provideLanguageModelResponse (/home/srijan/.vscode/extensions/github.copilot-chat-0.37.2/dist/extension.js:1409:11576) at process.processTicksAndRejections (node:internal/process/task_queues:105:5)

@lramos15 lramos15 added this pull request to the merge queue Feb 11, 2026
Merged via the queue into microsoft:main with commit b879fbf Feb 11, 2026
9 checks passed
@lsby lsby deleted the fix-ollama-models-agent-tool-calling branch February 11, 2026 18:49
@wrenchpilot

Tested PR in VS Code Insiders. Same as @rajasrijan, I can see the models but none of them work.


@lsby lsby restored the fix-ollama-models-agent-tool-calling branch February 11, 2026 19:16
@lsby
Contributor Author

lsby commented Feb 11, 2026

@wrenchpilot @rajasrijan

I confirmed the issue. I was previously using a self-implemented Ollama service, so I didn't notice this problem.
The issue is that Ollama’s endpoint is not /chat/completions, but /api/chat.
I will fix this issue.
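The path difference can be sketched as below. The base URL uses Ollama's default port and is just an example; note that Ollama also serves an OpenAI-compatible API under /v1, while its native chat endpoint is /api/chat.

```typescript
// Endpoint sketch: an OpenAI-style client POSTs to /chat/completions, but
// Ollama's native chat endpoint is /api/chat. Base URL is Ollama's default.
const ollamaBase = "http://localhost:11434";

const openAiStylePath = `${ollamaBase}/chat/completions`; // 404 on Ollama
const ollamaNativePath = `${ollamaBase}/api/chat`;        // Ollama's chat API

console.log(openAiStylePath);
console.log(ollamaNativePath);
```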


@lsby lsby deleted the fix-ollama-models-agent-tool-calling branch February 11, 2026 20:19
@lsby
Contributor Author

lsby commented Feb 12, 2026

@wrenchpilot @rajasrijan

The 404 issue was not caused by this PR, so I submitted a new one that should resolve it.

#3684

@Bielecki

Bielecki commented Feb 21, 2026

Should this be fixed? I'm running v0.37.8 with vscode 1.109.5 and I'm still experiencing this issue

EDIT: Never mind, sorry. I manually removed the extension files from .vscode/extensions and redownloaded it. It still returns 404, but the models are visible.

@AryanKarumuri

I was still facing the same issue. The models were visible in Ask mode but not in Plan or Agent mode.

Ask mode: [screenshot]

Agent mode: [screenshot]

@mrmimo

mrmimo commented Feb 27, 2026

Is it fixed? In 0.37.9 I'm still experiencing the issue, even after deleting and redownloading the extension.
Ollama version is 0.17.4.

@SeeingBlue

I think it got worse. It is showing fewer models under Ask than before, and still nothing under Agent.

[screenshots]
