
Azure OpenAI

Overview

| Property | Details |
|----------|---------|
| Description | Azure OpenAI Service provides REST API access to OpenAI's powerful language models, including the o1, o1-mini, GPT-4o, GPT-4o mini, GPT-4 Turbo with Vision, GPT-4, GPT-3.5-Turbo, and Embeddings model series. |
| Provider route on LiteLLM | `azure/`, `azure/o_series/` |
| Supported operations | `/chat/completions`, `/completions`, `/embeddings`, `/audio/speech`, `/audio/transcriptions`, `/fine_tuning`, `/batches`, `/files`, `/images` |
| Provider documentation | Azure OpenAI ↗ |

API Keys, Params

`api_key`, `api_base`, `api_version` etc. can be passed directly to `litellm.completion` (see here), or set as `litellm.api_key` params (see here).

import os
os.environ["AZURE_API_KEY"] = "" # "my-azure-api-key"
os.environ["AZURE_API_BASE"] = "" # "https://example-endpoint.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "" # "2023-05-15"

# optional
os.environ["AZURE_AD_TOKEN"] = ""
os.environ["AZURE_API_TYPE"] = ""

Usage - LiteLLM Python SDK


Completion - using .env variables

import os
from litellm import completion

## set ENV variables
os.environ["AZURE_API_KEY"] = ""
os.environ["AZURE_API_BASE"] = ""
os.environ["AZURE_API_VERSION"] = ""

# azure call
response = completion(
    model="azure/<your_deployment_name>",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
)

Completion - using api_key, api_base, api_version

import litellm

# azure call
response = litellm.completion(
    model="azure/<your deployment name>",
    api_base="",      # azure api base
    api_version="",   # azure api version
    api_key="",       # azure api key
    messages=[{"role": "user", "content": "good morning"}],
)

Completion - using azure_ad_token, api_base, api_version

import litellm

# azure call
response = litellm.completion(
    model="azure/<your deployment name>",
    api_base="",         # azure api base
    api_version="",      # azure api version
    azure_ad_token="",   # azure ad token
    messages=[{"role": "user", "content": "good morning"}],
)

Usage - LiteLLM Proxy Server

Here's how to call Azure OpenAI models with the LiteLLM Proxy Server

1. Save your key in your environment

export AZURE_API_KEY=""

2. Start the proxy

model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: azure/chatgpt-v-2
      api_base: https://openai-gpt-4-test-v-1.openai.azure.com/
      api_version: "2023-05-15"
      api_key: os.environ/AZURE_API_KEY # The `os.environ/` prefix tells litellm to read this from the env.
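The `os.environ/` prefix is how proxy config values reference environment variables instead of hard-coding secrets. The resolution rule can be sketched as follows (an illustration of the convention, not LiteLLM's actual implementation):

```python
import os

def resolve_config_value(value: str) -> str:
    """Resolve a config value: strings prefixed with 'os.environ/' are
    read from the environment; anything else is returned as-is."""
    prefix = "os.environ/"
    if isinstance(value, str) and value.startswith(prefix):
        var_name = value[len(prefix):]
        return os.environ[var_name]  # raises KeyError if the variable is unset
    return value

os.environ["AZURE_API_KEY"] = "my-azure-api-key"
print(resolve_config_value("os.environ/AZURE_API_KEY"))  # my-azure-api-key
print(resolve_config_value("2023-05-15"))                # 2023-05-15
```

This is why step 1 exports `AZURE_API_KEY` before the proxy starts: the key is looked up at config-load time.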

3. Test it

curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "what llm are you"
    }
  ]
}'

Azure OpenAI Chat Completion Models

Tip

We support ALL Azure models, just set `model=azure/<your deployment name>` as a prefix when sending litellm requests.

| Model Name | Function Call |
|------------|---------------|
| o1-mini | `response = completion(model="azure/<your deployment name>", messages=messages)` |
| o1-preview | `response = completion(model="azure/<your deployment name>", messages=messages)` |
| gpt-4o-mini | `completion('azure/<your deployment name>', messages)` |
| gpt-4o | `completion('azure/<your deployment name>', messages)` |
| gpt-4 | `completion('azure/<your deployment name>', messages)` |
| gpt-4-0314 | `completion('azure/<your deployment name>', messages)` |
| gpt-4-0613 | `completion('azure/<your deployment name>', messages)` |
| gpt-4-32k | `completion('azure/<your deployment name>', messages)` |
| gpt-4-32k-0314 | `completion('azure/<your deployment name>', messages)` |
| gpt-4-32k-0613 | `completion('azure/<your deployment name>', messages)` |
| gpt-4-1106-preview | `completion('azure/<your deployment name>', messages)` |
| gpt-4-0125-preview | `completion('azure/<your deployment name>', messages)` |
| gpt-3.5-turbo | `completion('azure/<your deployment name>', messages)` |
| gpt-3.5-turbo-0301 | `completion('azure/<your deployment name>', messages)` |
| gpt-3.5-turbo-0613 | `completion('azure/<your deployment name>', messages)` |
| gpt-3.5-turbo-16k | `completion('azure/<your deployment name>', messages)` |
| gpt-3.5-turbo-16k-0613 | `completion('azure/<your deployment name>', messages)` |

Azure OpenAI Vision Models

| Model Name | Function Call |
|------------|---------------|
| gpt-4-vision | `completion(model="azure/<your deployment name>", messages=messages)` |
| gpt-4o | `completion('azure/<your deployment name>', messages)` |

Usage

import os 
from litellm import completion

os.environ["AZURE_API_KEY"] = "your-api-key"

# azure call
response = completion(
    model="azure/<your deployment name>",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
                    },
                },
            ],
        }
    ],
)

Usage - with Azure Vision enhancements

Note: Azure requires the `base_url` to be set with `/extensions`

Example

base_url=https://gpt-4-vision-resource.openai.azure.com/openai/deployments/gpt-4-vision/extensions
# base_url="{azure_endpoint}/openai/deployments/{azure_deployment}/extensions"
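The pattern in the comment above can be composed programmatically. A small helper (hypothetical, for illustration only) that builds the `/extensions` base_url from your resource endpoint and deployment name:

```python
def build_extensions_base_url(azure_endpoint: str, azure_deployment: str) -> str:
    """Build the /extensions base_url Azure requires for vision enhancements:
    {azure_endpoint}/openai/deployments/{azure_deployment}/extensions"""
    return f"{azure_endpoint.rstrip('/')}/openai/deployments/{azure_deployment}/extensions"

print(build_extensions_base_url(
    "https://gpt-4-vision-resource.openai.azure.com", "gpt-4-vision"
))
# https://gpt-4-vision-resource.openai.azure.com/openai/deployments/gpt-4-vision/extensions
```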

Usage

import os 
from litellm import completion

os.environ["AZURE_API_KEY"] = "your-api-key"

# azure call
response = completion(
    model="azure/gpt-4-vision",
    timeout=5,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Whats in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://avatars.githubusercontent.com/u/29436595?v=4"
                    },
                },
            ],
        }
    ],
    base_url="https://gpt-4-vision-resource.openai.azure.com/openai/deployments/gpt-4-vision/extensions",
    api_key=os.getenv("AZURE_VISION_API_KEY"),
    enhancements={"ocr": {"enabled": True}, "grounding": {"enabled": True}},
    dataSources=[
        {
            "type": "AzureComputerVision",
            "parameters": {
                "endpoint": "https://gpt-4-vision-enhancement.cognitiveservices.azure.com/",
                "key": os.environ["AZURE_VISION_ENHANCE_KEY"],
            },
        }
    ],
)

O-Series Models

LiteLLM supports Azure OpenAI O-Series models.

LiteLLM routes any deployment name containing `o1` or `o3` to the O-Series transformation logic.

To set this explicitly, set `model` to `azure/o_series/<your-deployment-name>`.

Automatic routing

import litellm

litellm.completion(model="azure/my-o3-deployment", messages=[{"role": "user", "content": "Hello, world!"}]) # 👈 Note: 'o3' in the deployment name

Explicit routing

import litellm

litellm.completion(model="azure/o_series/my-random-deployment-name", messages=[{"role": "user", "content": "Hello, world!"}]) # 👈 Note: 'o_series/' in the model name
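The two routing modes above boil down to a substring check on the deployment name. A sketch of the decision rule (illustrative only — LiteLLM's internal check may differ in detail):

```python
def uses_o_series_transformation(model: str) -> bool:
    """Return True if an 'azure/' model would be routed to O-Series logic:
    either the explicit 'azure/o_series/' prefix, or 'o1'/'o3' appearing
    in the deployment name."""
    deployment = model.removeprefix("azure/")
    if deployment.startswith("o_series/"):
        return True
    return "o1" in deployment or "o3" in deployment

print(uses_o_series_transformation("azure/my-o3-deployment"))               # True
print(uses_o_series_transformation("azure/o_series/my-random-deployment"))  # True
print(uses_o_series_transformation("azure/chatgpt-v-2"))                    # False
```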

Azure Audio Model

from litellm import completion
import os

os.environ["AZURE_API_KEY"] = ""
os.environ["AZURE_API_BASE"] = ""
os.environ["AZURE_API_VERSION"] = ""

response = completion(
    model="azure/azure-openai-4o-audio",
    messages=[
        {"role": "user", "content": "I want to try out speech to speech"}
    ],
    modalities=["text", "audio"],
    audio={"voice": "alloy", "format": "wav"},
)

print(response)
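The audio portion of the response arrives base64-encoded (LiteLLM mirrors the OpenAI audio-output shape; the exact field path `response.choices[0].message.audio.data` is an assumption to verify against your response object). A sketch of decoding it into a playable file:

```python
import base64

def save_audio(b64_audio_data: str, path: str) -> int:
    """Decode base64 audio (e.g. response.choices[0].message.audio.data)
    and write the raw bytes to a file. Returns the number of bytes written."""
    audio_bytes = base64.b64decode(b64_audio_data)
    with open(path, "wb") as f:
        f.write(audio_bytes)
    return len(audio_bytes)

# Simulated payload for illustration; in practice this comes from the response.
fake_wav = base64.b64encode(b"RIFF....WAVEfmt ").decode()
print(save_audio(fake_wav, "reply.wav"))  # 16
```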

Azure Instruct Models

Use `model="azure_text/<your deployment name>"`

| Model Name | Function Call |
|------------|---------------|
| gpt-3.5-turbo-instruct | `response = completion(model="azure_text/<your deployment name>", messages=messages)` |
| gpt-3.5-turbo-instruct-0914 | `response = completion(model="azure_text/<your deployment name>", messages=messages)` |

import os
import litellm

## set ENV variables
os.environ["AZURE_API_KEY"] = ""
os.environ["AZURE_API_BASE"] = ""
os.environ["AZURE_API_VERSION"] = ""

response = litellm.completion(
    model="azure_text/<your-deployment-name>",
    messages=[{"role": "user", "content": "What is the weather like in Boston?"}],
)

print(response)

Azure Text to Speech (tts)

LiteLLM Proxy

model_list:
  - model_name: azure/tts-1
    litellm_params:
      model: azure/tts-1
      api_base: "os.environ/AZURE_API_BASE_TTS"
      api_key: "os.environ/AZURE_API_KEY_TTS"
      api_version: "os.environ/AZURE_API_VERSION"

LiteLLM SDK

import os
from pathlib import Path
from litellm import speech

## set ENV variables
os.environ["AZURE_API_KEY"] = ""
os.environ["AZURE_API_BASE"] = ""
os.environ["AZURE_API_VERSION"] = ""

# azure call
speech_file_path = Path(__file__).parent / "speech.mp3"
response = speech(
    model="azure/<your-deployment-name>",
    voice="alloy",
    input="the quick brown fox jumped over the lazy dogs",
)
response.stream_to_file(speech_file_path)

Authentication

Entra ID - use azure_ad_token

This is a walkthrough on how to use Azure Active Directory Tokens - Microsoft Entra ID to make litellm.completion() calls.

Step 1 - Download Azure CLI. Installation instructions: https://learn.microsoft.com/zh-cn/cli/azure/install-azure-cli

brew update && brew install azure-cli

Step 2 - Sign in using az

az login --output table

Step 3 - Generate an Azure AD token

az account get-access-token --resource https://cognitiveservices.azure.com

In this step you should see an `accessToken` generated

{
  "accessToken": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6IjlHbW55RlBraGMzaE91UjIybXZTdmduTG83WSIsImtpZCI6IjlHbW55RlBraGMzaE91UjIybXZTdmduTG83WSJ9",
  "expiresOn": "2023-11-14 15:50:46.000000",
  "expires_on": 1700005846,
  "subscription": "db38de1f-4bb3..",
  "tenant": "bdfd79b3-8401-47..",
  "tokenType": "Bearer"
}
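These tokens are short-lived — note `expires_on`, a Unix timestamp. If you cache the token rather than regenerating it per request, it is worth checking freshness before reuse; a minimal sketch:

```python
import time

def token_is_fresh(expires_on: int, leeway_seconds: int = 300) -> bool:
    """Return True if the token is still valid, with a safety margin so we
    refresh shortly *before* the 'expires_on' Unix timestamp is reached."""
    return time.time() < expires_on - leeway_seconds

# The token from step 3 expired in November 2023:
print(token_is_fresh(1700005846))
```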

Step 4 - Make a litellm.completion call with the Azure AD token

Set `azure_ad_token` to the `accessToken` from step 3, or set `os.environ['AZURE_AD_TOKEN']`.

import litellm

response = litellm.completion(
    model="azure/<your deployment name>",
    api_base="",         # azure api base
    api_version="",      # azure api version
    azure_ad_token="",   # your accessToken from step 3
    messages=[{"role": "user", "content": "good morning"}],
)

Entra ID - use tenant_id, client_id, client_secret

Here is an example of setting up tenant_id, client_id, client_secret in your LiteLLM proxy config.yaml

model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: azure/chatgpt-v-2
      api_base: https://openai-gpt-4-test-v-1.openai.azure.com/
      api_version: "2023-05-15"
      tenant_id: os.environ/AZURE_TENANT_ID
      client_id: os.environ/AZURE_CLIENT_ID
      client_secret: os.environ/AZURE_CLIENT_SECRET

Test it

curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "what llm are you"
    }
  ]
}'

Example video of using tenant_id, client_id, client_secret with the LiteLLM Proxy Server

Entra ID - use client_id, username, password

Here is an example of setting up client_id, azure_username, azure_password in your LiteLLM proxy config.yaml

model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: azure/chatgpt-v-2
      api_base: https://openai-gpt-4-test-v-1.openai.azure.com/
      api_version: "2023-05-15"
      client_id: os.environ/AZURE_CLIENT_ID
      azure_username: os.environ/AZURE_USERNAME
      azure_password: os.environ/AZURE_PASSWORD

Test it

curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
  "model": "gpt-3.5-turbo",
  "messages": [
    {
      "role": "user",
      "content": "what llm are you"
    }
  ]
}'

Azure AD Token Refresh - DefaultAzureCredential

Use this if you want to use Azure DefaultAzureCredential for authentication on your requests.

from litellm import completion
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

token_provider = get_bearer_token_provider(DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default")


response = completion(
    model="azure/<your deployment name>",
    api_base="",      # azure api base
    api_version="",   # azure api version
    azure_ad_token_provider=token_provider,
    messages=[{"role": "user", "content": "good morning"}],
)

Azure Batches API

| Property | Details |
|----------|---------|
| Description | Azure OpenAI Batches API |
| `custom_llm_provider` on LiteLLM | `azure/` |
| Supported Operations | `/v1/batches`, `/v1/files` |
| Azure OpenAI Batches API | Azure OpenAI Batches API ↗ |
| Cost Tracking, Logging Support | ✅ LiteLLM will log and track the cost of Batch API requests |

Quick Start

Just add the azure env vars to your environment.

export AZURE_API_KEY=""
export AZURE_API_BASE=""

1. Upload file

from openai import OpenAI

# Initialize the client
client = OpenAI(
    base_url="http://localhost:4000",
    api_key="your-api-key",
)

batch_input_file = client.files.create(
    file=open("mydata.jsonl", "rb"),
    purpose="batch",
    extra_body={"custom_llm_provider": "azure"},
)
file_id = batch_input_file.id

Example File Format

{"custom_id": "task-0", "method": "POST", "url": "/chat/completions", "body": {"model": "REPLACE-WITH-MODEL-DEPLOYMENT-NAME", "messages": [{"role": "system", "content": "You are an AI assistant that helps people find information."}, {"role": "user", "content": "When was Microsoft founded?"}]}}
{"custom_id": "task-1", "method": "POST", "url": "/chat/completions", "body": {"model": "REPLACE-WITH-MODEL-DEPLOYMENT-NAME", "messages": [{"role": "system", "content": "You are an AI assistant that helps people find information."}, {"role": "user", "content": "When was the first XBOX released?"}]}}
{"custom_id": "task-2", "method": "POST", "url": "/chat/completions", "body": {"model": "REPLACE-WITH-MODEL-DEPLOYMENT-NAME", "messages": [{"role": "system", "content": "You are an AI assistant that helps people find information."}, {"role": "user", "content": "What is Altair Basic?"}]}}
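Rather than writing the .jsonl by hand, you can generate it programmatically. A sketch using `json.dumps`, one request object per line (the deployment-name placeholder is yours to fill in):

```python
import json

def build_batch_jsonl(model: str, questions: list[str]) -> str:
    """Build a batch-input .jsonl string: one /chat/completions request per line."""
    lines = []
    for i, q in enumerate(questions):
        request = {
            "custom_id": f"task-{i}",
            "method": "POST",
            "url": "/chat/completions",
            "body": {
                "model": model,
                "messages": [
                    {"role": "system", "content": "You are an AI assistant that helps people find information."},
                    {"role": "user", "content": q},
                ],
            },
        }
        lines.append(json.dumps(request))
    return "\n".join(lines)

jsonl = build_batch_jsonl("REPLACE-WITH-MODEL-DEPLOYMENT-NAME",
                          ["When was Microsoft founded?", "What is Altair Basic?"])
with open("mydata.jsonl", "w") as f:
    f.write(jsonl)
```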

2. Create batch request

batch = client.batches.create(  # re-use client from above
    input_file_id=file_id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
    metadata={"description": "My batch job"},
    extra_body={"custom_llm_provider": "azure"},
)

3. Retrieve batch

retrieved_batch = client.batches.retrieve(
    batch.id,
    extra_body={"custom_llm_provider": "azure"},
)

4. Cancel batch

cancelled_batch = client.batches.cancel(
    batch.id,
    extra_body={"custom_llm_provider": "azure"},
)

5. List batches

client.batches.list(extra_body={"custom_llm_provider": "azure"})

Health Check Azure Batch models

[BETA] Loadbalance Multiple Azure Deployments

In your config.yaml, set enable_loadbalancing_on_batch_endpoints: true

model_list:
  - model_name: "batch-gpt-4o-mini"
    litellm_params:
      model: "azure/gpt-4o-mini"
      api_key: os.environ/AZURE_API_KEY
      api_base: os.environ/AZURE_API_BASE
    model_info:
      mode: batch

litellm_settings:
  enable_loadbalancing_on_batch_endpoints: true # 👈 KEY CHANGE

Note: This works on `{PROXY_BASE_URL}/v1/files` and `{PROXY_BASE_URL}/v1/batches`. Note: Responses are in the OpenAI format.

1. Upload a File

Just set `model: batch-gpt-4o-mini` in your .jsonl.

curl http://localhost:4000/v1/files \
-H "Authorization: Bearer sk-1234" \
-F purpose="batch" \
-F file="@mydata.jsonl"

Example File

Note: `model` should be the model_name from your proxy config (it maps to your Azure deployment).

{"custom_id": "task-0", "method": "POST", "url": "/chat/completions", "body": {"model": "batch-gpt-4o-mini", "messages": [{"role": "system", "content": "You are an AI assistant that helps people find information."}, {"role": "user", "content": "When was Microsoft founded?"}]}}
{"custom_id": "task-1", "method": "POST", "url": "/chat/completions", "body": {"model": "batch-gpt-4o-mini", "messages": [{"role": "system", "content": "You are an AI assistant that helps people find information."}, {"role": "user", "content": "When was the first XBOX released?"}]}}
{"custom_id": "task-2", "method": "POST", "url": "/chat/completions", "body": {"model": "batch-gpt-4o-mini", "messages": [{"role": "system", "content": "You are an AI assistant that helps people find information."}, {"role": "user", "content": "What is Altair Basic?"}]}}

Expected Response (OpenAI Compatible)

{"id":"file-f0be81f654454113a922da60acb0eea6",...}
2. Create batch
curl http://0.0.0.0:4000/v1/batches \
-H "Authorization: Bearer $LITELLM_API_KEY" \
-H "Content-Type: application/json" \
-d '{
  "input_file_id": "file-f0be81f654454113a922da60acb0eea6",
  "endpoint": "/v1/chat/completions",
  "completion_window": "24h",
  "model": "batch-gpt-4o-mini"
}'

Expected Response

{"id":"batch_94e43f0a-d805-477d-adf9-bbb9c50910ed",...}
3. Retrieve batch
curl http://0.0.0.0:4000/v1/batches/batch_94e43f0a-d805-477d-adf9-bbb9c50910ed \
-H "Authorization: Bearer $LITELLM_API_KEY" \
-H "Content-Type: application/json"

Expected Response

{"id":"batch_94e43f0a-d805-477d-adf9-bbb9c50910ed",...}
4. List batches
curl 'http://0.0.0.0:4000/v1/batches?limit=2' \
-H "Authorization: Bearer $LITELLM_API_KEY" \
-H "Content-Type: application/json"

Expected Response

{"data":[{"id":"batch_R3V...}

Azure Responses API

| Property | Details |
|----------|---------|
| Description | Azure OpenAI Responses API |
| `custom_llm_provider` on LiteLLM | `azure/` |
| Supported Operations | `/v1/responses` |
| Azure OpenAI Responses API | Azure OpenAI Responses API ↗ |
| Cost Tracking, Logging Support | ✅ LiteLLM will log and track the cost of Responses API requests |
| Supported OpenAI Params | ✅ All OpenAI params are supported, see here |

Usage

Create a model response

Non-streaming

import os
import litellm

# Non-streaming response
response = litellm.responses(
    model="azure/o1-pro",
    input="Tell me a three sentence bedtime story about a unicorn.",
    max_output_tokens=100,
    api_key=os.getenv("AZURE_RESPONSES_OPENAI_API_KEY"),
    api_base="https://litellm8397336933.openai.azure.com/",
    api_version="2023-03-15-preview",
)

print(response)

Streaming

import os
import litellm

# Streaming response
response = litellm.responses(
    model="azure/o1-pro",
    input="Tell me a three sentence bedtime story about a unicorn.",
    stream=True,
    api_key=os.getenv("AZURE_RESPONSES_OPENAI_API_KEY"),
    api_base="https://litellm8397336933.openai.azure.com/",
    api_version="2023-03-15-preview",
)

for event in response:
print(event)

Advanced

Azure API Load-Balancing

Use this if you're trying to load-balance across multiple Azure/OpenAI deployments.

Router prevents failed requests by picking the deployment which is below its rate limit and has the least amount of tokens used.

In production, Router connects to a Redis Cache to track usage across multiple deployments.

Quick Start

pip install litellm
import os
from litellm import Router

model_list = [{ # list of model deployments
    "model_name": "gpt-3.5-turbo", # openai model name
    "litellm_params": { # params for litellm completion/embedding call
        "model": "azure/chatgpt-v-2",
        "api_key": os.getenv("AZURE_API_KEY"),
        "api_version": os.getenv("AZURE_API_VERSION"),
        "api_base": os.getenv("AZURE_API_BASE"),
    },
    "tpm": 240000,
    "rpm": 1800,
}, {
    "model_name": "gpt-3.5-turbo", # openai model name
    "litellm_params": { # params for litellm completion/embedding call
        "model": "azure/chatgpt-functioncalling",
        "api_key": os.getenv("AZURE_API_KEY"),
        "api_version": os.getenv("AZURE_API_VERSION"),
        "api_base": os.getenv("AZURE_API_BASE"),
    },
    "tpm": 240000,
    "rpm": 1800,
}, {
    "model_name": "gpt-3.5-turbo", # openai model name
    "litellm_params": { # params for litellm completion/embedding call
        "model": "gpt-3.5-turbo",
        "api_key": os.getenv("OPENAI_API_KEY"),
    },
    "tpm": 1000000,
    "rpm": 9000,
}]

router = Router(model_list=model_list)

# openai.chat.completions.create replacement
response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
)
print(response)

Redis Queue

router = Router(
    model_list=model_list,
    redis_host=os.getenv("REDIS_HOST"),
    redis_password=os.getenv("REDIS_PASSWORD"),
    redis_port=os.getenv("REDIS_PORT"),
)

print(response)

Tool Calling / Function Calling

See a detailed walkthrough of parallel function calling with litellm here

# set Azure env variables
import os
import litellm
import json

os.environ['AZURE_API_KEY'] = "" # litellm reads AZURE_API_KEY from .env and sends the request
os.environ['AZURE_API_BASE'] = "https://openai-gpt-4-test-v-1.openai.azure.com/"
os.environ['AZURE_API_VERSION'] = "2023-07-01-preview"

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    }
]

response = litellm.completion(
    model="azure/chatgpt-functioncalling", # model = azure/<your-azure-deployment-name>
    messages=[{"role": "user", "content": "What's the weather like in San Francisco, Tokyo, and Paris?"}],
    tools=tools,
    tool_choice="auto", # auto is default, but we'll be explicit
)
print("\nLLM Response1:\n", response)
response_message = response.choices[0].message
tool_calls = response.choices[0].message.tool_calls
print("\nTool Choice:\n", tool_calls)
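After the model returns `tool_calls`, your code executes each one locally and typically sends the results back in a follow-up completion as `role: "tool"` messages. A sketch of the dispatch step, using a simulated tool call and a stub weather function so it runs standalone:

```python
import json

def get_current_weather(location: str, unit: str = "fahrenheit") -> str:
    # Stub implementation for illustration; a real version would call a weather API.
    return json.dumps({"location": location, "temperature": "72", "unit": unit})

available_functions = {"get_current_weather": get_current_weather}

def dispatch_tool_call(name: str, arguments_json: str) -> str:
    """Look up the requested function and invoke it with the model-supplied
    JSON arguments. Real code should validate the arguments first."""
    function = available_functions[name]
    kwargs = json.loads(arguments_json)
    return function(**kwargs)

# Simulated call, shaped like tool_calls[0].function from the response above:
result = dispatch_tool_call("get_current_weather",
                            '{"location": "San Francisco, CA"}')
print(result)  # {"location": "San Francisco, CA", "temperature": "72", "unit": "fahrenheit"}
```

Each result would then be appended to `messages` as `{"role": "tool", "tool_call_id": ..., "content": result}` before the second `litellm.completion` call.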
Azure OpenAI Model Spend Tracking (PROXY)

Set base model for cost tracking azure image-gen call

Image Generation

model_list:
  - model_name: dall-e-3
    litellm_params:
      model: azure/dall-e-3-test
      api_version: 2023-06-01-preview
      api_base: https://openai-gpt-4-test-v-1.openai.azure.com/
      api_key: os.environ/AZURE_API_KEY
      base_model: dall-e-3 # 👈 set dall-e-3 as base model
    model_info:
      mode: image_generation

Chat Completions / Embeddings

Problem: Azure returns gpt-4 in the response when azure/gpt-4-1106-preview is used. This leads to inaccurate cost tracking.

Solution ✅: Set base_model in your config so litellm uses the correct model for calculating the azure cost.

Get the base model name from here.

Example config with base_model

model_list:
  - model_name: azure-gpt-3.5
    litellm_params:
      model: azure/chatgpt-v-2
      api_base: os.environ/AZURE_API_BASE
      api_key: os.environ/AZURE_API_KEY
      api_version: "2023-07-01-preview"
    model_info:
      base_model: azure/gpt-4-1106-preview