
I am trying to set up GitHub Copilot Chat with an Azure-hosted OpenAI model, but I am struggling to find an authoritative answer on what to enter for the endpoint.

Some blog posts suggest: https://xxx.openai.azure.com/openai/deployments/gpt-5-mini/chat/completions?api-version=2025-04-01-preview

others suggest simply copying the Target URL from the deployment pane of Azure AI Foundry: https://xxx.openai.azure.com/openai/responses?api-version=2025-04-01-preview

With every combination I have tried, I get:

Sorry, your request failed. Please try again. Request id: 64c2927b-4108-424a-9241-82485214d597 Reason: Resource not found

My complete config looks like this (with varying urls):

  "github.copilot.chat.azureModels": {
    "gpt-5-mini": {
      "name": "gpt-5-mini",
      "url": "https://xxx.openai.azure.com/openai/deployments/gpt-5-mini/chat/completions?api-version=2025-04-01-preview",
      "toolCalling": true,
      "vision": true,
      "thinking": true,
      "maxInputTokens": 272000,
      "maxOutputTokens": 128000,
    }
  },

Open issues on the VS Code GitHub repository also suggest that something else may be broken in this extension at the moment, but I am not getting the same error most people report, so I might be stuck at an earlier step.

1 Answer

First, let's start with the basics: check that you have the latest version of each component, because both VS Code and the GitHub Copilot extensions change continuously.

  1. VSCode >= 1.105.1

  2. GitHub Copilot >= 1.388.0

  3. GitHub Copilot Chat >= 0.32.4

I think your configuration is using the model name rather than the deployment name. Try the deployment name you gave the deployment in Azure AI Foundry:

https://xxx.openai.azure.com/openai/deployments/<deployment_name>/chat/completions?api-version=<api_version>
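For example, if the deployment in Azure AI Foundry were named my-gpt5-mini (a placeholder; substitute your actual deployment name and resource), the settings entry would look roughly like this:

```json
{
  "github.copilot.chat.azureModels": {
    "gpt-5-mini": {
      "name": "gpt-5-mini",
      "url": "https://xxx.openai.azure.com/openai/deployments/my-gpt5-mini/chat/completions?api-version=2025-04-01-preview",
      "toolCalling": true,
      "vision": true,
      "thinking": true,
      "maxInputTokens": 272000,
      "maxOutputTokens": 128000
    }
  }
}
```

Note that the key and "name" refer to the model, while the /deployments/ segment of "url" must be the deployment name, and the two are not necessarily the same.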

After you have defined this properly in your global settings.json, confirm it is picked up: click GitHub Copilot Chat -> Manage Models -> select Azure and enter the API key.
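To sanity-check the URL outside VS Code, it can help to assemble it from its parts. A small sketch (resource, deployment name, and API version are placeholders), assuming the chat completions route:

```python
# Assemble the Azure OpenAI chat completions URL from its parts.
# Key point: the segment after /deployments/ is the *deployment name*
# chosen in Azure AI Foundry, which may differ from the model name.
resource = "xxx"                   # placeholder Azure OpenAI resource name
deployment_name = "my-gpt5-mini"   # placeholder deployment name, NOT the model name
api_version = "2025-04-01-preview"

url = (
    f"https://{resource}.openai.azure.com"
    f"/openai/deployments/{deployment_name}/chat/completions"
    f"?api-version={api_version}"
)
print(url)
# https://xxx.openai.azure.com/openai/deployments/my-gpt5-mini/chat/completions?api-version=2025-04-01-preview
```

If a request to the URL built this way still returns "Resource not found", the deployment name or resource name is the first thing to double-check.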


Also make sure that the model's checkbox is enabled, and click the blue OK button at the right of the popup. The model should then appear in the model list.

Note that only Chat Completions are currently supported from Azure AI Foundry, which means:

  1. Models that only work under the Responses API, such as GPT-5-Codex, are not supported; the popup to enter the API key will not open.

  2. Models that only accept the default temperature of 1, such as GPT-5-Mini and GPT-5, are not supported; sending a message results in a temperature error.

So if your intention is to use gpt-5-mini, I expect you will hit a temperature error once the "Resource not found" error is solved.

Regarding these limitations, I have proposed that they be supported and have also implemented the changes, but I am not sure whether they will be accepted soon. You can take a look and vote for it too:
https://github.com/microsoft/vscode-copilot-chat/pull/1763


1 Comment

Thanks for the hint on the temperature. Worked as described with gpt-4.1-nano.
