
I am using the code below to get data from the "Mistral-Nemo" model hosted as SaaS in Azure AI Foundry. It gives me the following error:

Error: Operation returned an invalid status 'Bad Request'
Traceback (most recent call last):
  File "c:\Users\guptswapmax\TestFalcon\TestFalcon.py", line 32, in <module>
        response = client.complete(
                   ~~~~~~~~~~~~~~~^
        messages=[
        ^^^^^^^^^^
    ...<5 lines>...
        model=model_name
        ^^^^^^^^^^^^^^^^
    )
    ^
  File "c:\Users\abc\TestFalcon\.venv\Lib\site-packages\azure\ai\inference\_patch.py", line 738, in complete
    raise HttpResponseError(response=response)
azure.core.exceptions.HttpResponseError: Operation returned an invalid status 'Bad Request'

I have looked into the permissions and made sure my service principal has the required permissions; it also has the "Cognitive Services User" and "Cognitive Services Contributor" roles. I am trying to implement this using a service principal.
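When authentication (rather than the payload) is suspect, one quick sanity check is to decode the access token's payload and confirm its `aud` claim matches the Cognitive Services scope. The sketch below uses only the standard library; the token here is fabricated for illustration, while a real one would come from `credential.get_token("https://cognitiveservices.azure.com/.default").token`.

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT to inspect its claims."""
    payload = token.split(".")[1]
    # Restore base64url padding before decoding
    payload += "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))

# Fabricated token for illustration only; a real token has a signed payload.
fake_payload = base64.urlsafe_b64encode(
    json.dumps({"aud": "https://cognitiveservices.azure.com"}).encode()
).rstrip(b"=").decode()
fake_token = f"header.{fake_payload}.signature"

print(jwt_claims(fake_token)["aud"])  # https://cognitiveservices.azure.com
```

If the `aud` claim points at a different resource, the service rejects the request even though the service principal's role assignments are correct.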

When I use the same code with an Azure API key and AzureKeyCredential, it works: https://github.com/MicrosoftDocs/azure-ai-docs/blob/main/articles/ai-foundry/model-inference/includes/use-chat-completions/csharp.md

import os
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.identity import DefaultAzureCredential, ClientSecretCredential
import logging
from azure.core.pipeline.policies import HttpLoggingPolicy

endpoint = "https://abcdomain.services.ai.azure.com/models"
model_name  = "Mistral-Nemo"

Client_Id = "xxxxxxxxxxxxxxxxxxxxxxxx"
Client_Secret = "xxxxxxxxxxxxxxxxxxx"
Tenant_Id = "xxxxxxxxxxxxxxxxxxxxxxxxx"    

cred = ClientSecretCredential(
    tenant_id=Tenant_Id,
    client_id=Client_Id,
    client_secret=Client_Secret    
)

client = ChatCompletionsClient(
        endpoint=endpoint,
        credential=cred,
        credential_scopes=["https://cognitiveservices.azure.com/.default"]
    )

try:
    result = client.complete(
        messages=[
            SystemMessage(content="You are a helpful assistant."),
            UserMessage(content="How many languages are in the world?"),
        ],
        temperature=0.8,
        top_p=0.1,
        max_tokens=2048,
        stream=True,
        model=model_name,
    )
except Exception as e:
    print("Error:", e)
  • Since you are using a Service Principal, shouldn't you be using ClientSecretCredential? Commented May 8 at 19:51
  • I tried using ClientSecretCredential but am still getting "Bad Request". The endpoint and model name seem to be correct; what else could I verify? Commented May 8 at 21:16
  • Can you share the exact error returned by the service? Commented May 8 at 22:43
  • Added the error above. Commented May 9 at 0:45
  • The error shows model=model_name, but you show code with model=model_deployment_name; maybe you are running different code, or code with some mistake. Commented May 9 at 1:29

1 Answer

File "c:\Users\abc\TestFalcon.venv\Lib\site-packages\azure\ai\inference_patch.py", line 738, in complete raise HttpResponseError(response=response) azure.core.exceptions.HttpResponseError: Operation returned an invalid status 'Bad Request'

The error HttpResponseError: Operation returned an invalid status 'Bad Request' usually indicates that the request payload is malformed or that there is a mismatch between what the model expects and how the API client is being used.
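A 400 response normally carries a JSON body that names the rejected field, which narrows the search considerably. The sketch below pulls that out; the payload shape here is a hypothetical example, not the exact response of this service.

```python
import json

# Hypothetical example of a 400 response body; the real message
# names the offending field (e.g. an unsupported parameter or model name).
error_body = json.dumps({
    "error": {
        "code": "BadRequest",
        "message": "Unsupported parameter for this model.",
    }
})

def describe_error(body: str) -> str:
    """Summarize a JSON error body as 'code: message'."""
    err = json.loads(body).get("error", {})
    return f"{err.get('code', '?')}: {err.get('message', 'no message')}"

print(describe_error(error_body))  # BadRequest: Unsupported parameter for this model.
```

Printing this summary in the `except` branch, instead of only the exception text, usually turns "Bad Request" into an actionable message.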

Refer to the documentation on which deployment types are available for each model.

In my environment, I created an app registration and assigned the necessary RBAC role on the Azure AI Services resource.

Portal: (screenshot of the role assignment on the Azure AI Services resource)

You can use the code below to get data from your model using Python.

Code:

import requests
from azure.identity import ClientSecretCredential

# Get token
tenant_id = "xxxxx"
client_id = "xxxx"
client_secret = "xxxxxx"

credential = ClientSecretCredential(tenant_id, client_id, client_secret)
token = credential.get_token("https://cognitiveservices.azure.com/.default").token

endpoint = "https://xxxxxxx.services.ai.azure.com"
api_version = "2024-12-01-preview"
model = "xxxxxx"

url = f"{endpoint}/openai/deployments/{model}/chat/completions?api-version={api_version}"

headers = {
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json"
}

data = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "How many languages are in the world?"}
    ],
    "temperature": 0.8,
    "top_p": 0.1,
    "max_tokens": 2048,
    # response.json() below expects one JSON document, so keep streaming off
    "stream": False
}

response = requests.post(url, headers=headers, json=data)
response.raise_for_status()  # raise on 4xx/5xx instead of parsing an error body
response_json = response.json()
content = response_json["choices"][0]["message"]["content"]
print(content)
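If "stream": True is used instead, the body arrives as server-sent-event lines ("data: {...}") rather than a single JSON document, so response.json() fails. A sketch of reassembling the streamed content, using only the standard library; the chunk shape follows the OpenAI-compatible SSE format, and the sample lines are illustrative, not captured from the service:

```python
import json

def collect_stream(lines):
    """Reassemble assistant content from OpenAI-style SSE chunk lines."""
    parts = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip keep-alives and blank lines
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(data)
        for choice in chunk.get("choices", []):
            parts.append(choice.get("delta", {}).get("content") or "")
    return "".join(parts)

# Illustrative sample of streamed lines
sample = [
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": ", world"}}]}',
    "data: [DONE]",
]
print(collect_stream(sample))  # Hello, world
```

With requests, the real lines would come from `requests.post(url, headers=headers, json=data, stream=True).iter_lines(decode_unicode=True)`.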

Output:

Estimates vary depending on how you count distinct “languages” versus dialects, but the most frequently cited figure is that there are roughly 7,000 living human languages in the world today.  Key points:  
1. The Ethnologue, a widely used catalog published by SIL International, lists about 7,100 living languages (7,151 in its 2023 edition).  
2. UNESCO and other bodies often give a rounded figure of “6,000–7,000,” reflecting different classification criteria.  
3. Nearly 40 percent of those languages are considered endangered, meaning they risk falling out of use as their speakers shift to more dominant tongues.  
4. In addition to the living languages, several thousand more have gone extinct or survive only in written records.  
So while there’s no single “official” count, it’s safe to say the number of living languages worldwide hovers around seven thousand.


Reference:

azure-sdk-for-python/sdk/ai/azure-ai-inference/samples at azure-ai-inference_1.0.0b9 · Azure/azure-sdk-for-python
