I have deployed a chat model on Azure OpenAI Studio and given the model my own data source using the "Add your data (preview)" feature.
In a chat session on the Chat playground page, the model gives correct answers based on the data I provided. However, when I ask the same question via the API, the model does not use that data source.
I'd like the chat model to use my own data source via the API. How do I fix this?
Here is what I have tried:
- Deployed a gpt-35-turbo model in Azure OpenAI Studio
- Added my own data using the "Add your data (preview)" feature
- The model gives correct answers based on that data in the Chat session view
- However, the model behaves as if it does not know the data when I ask the same question via the API, using the code below
# Note: The openai-python library support for Azure OpenAI is in preview.
import os
import openai

openai.api_type = "azure"
openai.api_base = "https://openai-test-uksouth.openai.azure.com/"
openai.api_version = "2023-03-15-preview"
openai.api_key = "KEY"

response = openai.ChatCompletion.create(
    engine="gpt35turbo",
    messages=[
        {"role": "system", "content": "You are an AI assistant that helps people find information."},
        {"role": "user", "content": "Summarize `main.py`!"},
    ],
    temperature=0,
    max_tokens=800,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0,
    stop=None,
)
print(response)
The response is:
{
  "id": "chatcmpl-7dtf29DavpRsKGWygZIrJDwj0MDGn",
  "object": "chat.completion",
  "created": 1689743108,
  "model": "gpt-35-turbo",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {
        "role": "assistant",
        "content": "I'm sorry, I cannot summarize `main.py` without more information. `main.py` could refer to any Python file and could contain any number of functions or code. Please provide more context or information about the specific `main.py` file you are referring to."
      }
    }
  ],
  "usage": {
    "completion_tokens": 54,
    "prompt_tokens": 32,
    "total_tokens": 86
  }
}
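From what I can tell from the Azure OpenAI documentation, the Chat playground sends "Add your data" requests to a different endpoint (.../extensions/chat/completions) together with a dataSources block that points at the Azure Cognitive Search index created during ingestion, whereas my code above calls the plain chat completions endpoint without any data source. Below is a sketch of what I think that request looks like, pieced together from the docs; the API version, search endpoint, admin key, and index name are placeholders for my own resources, and I have not confirmed that this is the intended approach:

# Sketch of the request I believe the playground's "Add your data" feature makes.
# The API version, search endpoint, admin key, and index name are placeholders
# for the Azure Cognitive Search resource that the ingestion step created.
import requests

api_base = "https://openai-test-uksouth.openai.azure.com"
deployment = "gpt35turbo"
api_version = "2023-06-01-preview"  # assuming a preview version that supports extensions

url = (
    f"{api_base}/openai/deployments/{deployment}"
    f"/extensions/chat/completions?api-version={api_version}"
)

payload = {
    "dataSources": [
        {
            "type": "AzureCognitiveSearch",
            "parameters": {
                "endpoint": "https://<my-search-resource>.search.windows.net",
                "key": "<SEARCH-ADMIN-KEY>",
                "indexName": "<my-index>",
            },
        }
    ],
    "messages": [
        {"role": "system", "content": "You are an AI assistant that helps people find information."},
        {"role": "user", "content": "Summarize `main.py`!"},
    ],
}

response = requests.post(
    url,
    headers={"api-key": "KEY", "Content-Type": "application/json"},
    json=payload,
)
print(response.json())

If there is a supported way to pass the data source through the openai Python library instead of calling the REST endpoint directly, that would work for me too.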


