
/responses [Beta]

LiteLLM provides a BETA endpoint that follows OpenAI's /responses API specification.

| Feature | Supported | Notes |
|---------|-----------|-------|
| Cost tracking | ✅ | Works with all supported models |
| Logging | ✅ | Works across all integrations |
| End-user tracking | ✅ | |
| Streaming | ✅ | |
| Fallbacks | ✅ | Works between supported models |
| Load balancing | ✅ | Works between supported models |
| Supported operations | Create a response, Get a response, Delete a response | |
| Supported LiteLLM versions | 1.63.8+ | |
| Supported LLM providers | All LiteLLM supported providers | openai, anthropic, bedrock, vertex_ai, gemini, azure, azure_ai |

Usage

LiteLLM Python SDK

Non-streaming

OpenAI non-streaming response
import litellm

# Non-streaming response
response = litellm.responses(
    model="openai/o1-pro",
    input="Tell me a three sentence bedtime story about a unicorn.",
    max_output_tokens=100
)

print(response)

Streaming

OpenAI streaming response
import litellm

# Streaming response
response = litellm.responses(
    model="openai/o1-pro",
    input="Tell me a three sentence bedtime story about a unicorn.",
    stream=True
)

for event in response:
    print(event)
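
For async usage, litellm.aresponses is the async counterpart of litellm.responses (matching the aget_responses / adelete_responses naming used below). A minimal sketch of async streaming, using the same model and prompt as above:

Async streaming response (sketch)
import asyncio
import litellm

async def main():
    # Await the async call; with stream=True it yields an async iterator
    response = await litellm.aresponses(
        model="openai/o1-pro",
        input="Tell me a three sentence bedtime story about a unicorn.",
        stream=True
    )
    # Consume streamed events as they arrive
    async for event in response:
        print(event)

asyncio.run(main())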

GET a response

Get a response by ID
import litellm

# First, create a response
response = litellm.responses(
    model="openai/o1-pro",
    input="Tell me a three sentence bedtime story about a unicorn.",
    max_output_tokens=100
)

# Get the response ID
response_id = response.id

# Retrieve the response by ID
retrieved_response = litellm.get_responses(
    response_id=response_id
)

print(retrieved_response)

# For async usage
# retrieved_response = await litellm.aget_responses(response_id=response_id)

DELETE a response

Delete a response by ID
import litellm

# First, create a response
response = litellm.responses(
    model="openai/o1-pro",
    input="Tell me a three sentence bedtime story about a unicorn.",
    max_output_tokens=100
)

# Get the response ID
response_id = response.id

# Delete the response by ID
delete_response = litellm.delete_responses(
    response_id=response_id
)

print(delete_response)

# For async usage
# delete_response = await litellm.adelete_responses(response_id=response_id)

LiteLLM Proxy with the OpenAI SDK

First, set up and start your LiteLLM proxy server.

Start the LiteLLM proxy server
litellm --config /path/to/config.yaml

# RUNNING on http://0.0.0.0:4000

Add this to your litellm proxy config.yaml file:

OpenAI proxy configuration
model_list:
  - model_name: openai/o1-pro
    litellm_params:
      model: openai/o1-pro
      api_key: os.environ/OPENAI_API_KEY
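
The same config.yaml can expose non-OpenAI providers through the /responses endpoint. A sketch adding an anthropic deployment alongside the OpenAI one (the model name mirrors the session-management examples below; adjust it and the key variable to your account):

Mixed-provider proxy configuration (sketch)
model_list:
  - model_name: openai/o1-pro
    litellm_params:
      model: openai/o1-pro
      api_key: os.environ/OPENAI_API_KEY
  # Illustrative non-OpenAI entry; other supported providers follow the same pattern
  - model_name: anthropic/claude-3-5-sonnet-latest
    litellm_params:
      model: anthropic/claude-3-5-sonnet-latest
      api_key: os.environ/ANTHROPIC_API_KEY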

Non-streaming

OpenAI proxy non-streaming response
from openai import OpenAI

# Initialize client with your proxy URL
client = OpenAI(
    base_url="http://localhost:4000",  # Your proxy URL
    api_key="your-api-key"             # Your proxy API key
)

# Non-streaming response
response = client.responses.create(
    model="openai/o1-pro",
    input="Tell me a three sentence bedtime story about a unicorn."
)

print(response)

Streaming

OpenAI proxy streaming response
from openai import OpenAI

# Initialize client with your proxy URL
client = OpenAI(
    base_url="http://localhost:4000",  # Your proxy URL
    api_key="your-api-key"             # Your proxy API key
)

# Streaming response
response = client.responses.create(
    model="openai/o1-pro",
    input="Tell me a three sentence bedtime story about a unicorn.",
    stream=True
)

for event in response:
    print(event)

GET a response

Get a response by ID with the OpenAI SDK
from openai import OpenAI

# Initialize client with your proxy URL
client = OpenAI(
    base_url="http://localhost:4000",  # Your proxy URL
    api_key="your-api-key"             # Your proxy API key
)

# First, create a response
response = client.responses.create(
    model="openai/o1-pro",
    input="Tell me a three sentence bedtime story about a unicorn."
)

# Get the response ID
response_id = response.id

# Retrieve the response by ID
retrieved_response = client.responses.retrieve(response_id)

print(retrieved_response)

DELETE a response

Delete a response by ID with the OpenAI SDK
from openai import OpenAI

# Initialize client with your proxy URL
client = OpenAI(
    base_url="http://localhost:4000",  # Your proxy URL
    api_key="your-api-key"             # Your proxy API key
)

# First, create a response
response = client.responses.create(
    model="openai/o1-pro",
    input="Tell me a three sentence bedtime story about a unicorn."
)

# Get the response ID
response_id = response.id

# Delete the response by ID
delete_response = client.responses.delete(response_id)

print(delete_response)
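
Because the proxy exposes OpenAI-compatible REST routes, the same three operations also work from plain curl. A sketch, assuming the standard /v1/responses and /v1/responses/{response_id} routes from the OpenAI spec, with resp_123abc as a placeholder ID:

Create, get, and delete a response with curl (sketch)
# Create a response
curl http://localhost:4000/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{
    "model": "openai/o1-pro",
    "input": "Tell me a three sentence bedtime story about a unicorn."
  }'

# Retrieve a response by ID
curl http://localhost:4000/v1/responses/resp_123abc \
  -H "Authorization: Bearer your-api-key"

# Delete a response by ID
curl -X DELETE http://localhost:4000/v1/responses/resp_123abc \
  -H "Authorization: Bearer your-api-key"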

Supported Responses API parameters

| Provider | Supported parameters |
|----------|----------------------|
| openai | All Responses API parameters are supported |
| azure | All Responses API parameters are supported |
| anthropic | See supported parameters here |
| bedrock | See supported parameters here |
| gemini | See supported parameters here |
| vertex_ai | See supported parameters here |
| azure_ai | See supported parameters here |
| All other LLM API providers | See supported parameters here |

Load balancing with session continuity

When using the Responses API with multiple deployments of the same model (e.g., multiple Azure OpenAI endpoints), LiteLLM provides session continuity: follow-up requests that pass a previous_response_id are routed to the same deployment that generated the original response.

Usage example

Python SDK with session continuity
import litellm

# Set up a router with multiple deployments of the same model
router = litellm.Router(
    model_list=[
        {
            "model_name": "azure-gpt4-turbo",
            "litellm_params": {
                "model": "azure/gpt-4-turbo",
                "api_key": "your-api-key-1",
                "api_version": "2024-06-01",
                "api_base": "https://endpoint1.openai.azure.com",
            },
        },
        {
            "model_name": "azure-gpt4-turbo",
            "litellm_params": {
                "model": "azure/gpt-4-turbo",
                "api_key": "your-api-key-2",
                "api_version": "2024-06-01",
                "api_base": "https://endpoint2.openai.azure.com",
            },
        },
    ],
    optional_pre_call_checks=["responses_api_deployment_check"],
)

# Initial request
response = await router.aresponses(
    model="azure-gpt4-turbo",
    input="Hello, who are you?",
    truncation="auto",
)

# Store the response ID
response_id = response.id

# Follow-up request - automatically routed to the same deployment
follow_up = await router.aresponses(
    model="azure-gpt4-turbo",
    input="Tell me more about yourself",
    truncation="auto",
    previous_response_id=response_id,  # Ensures routing to the same deployment
)
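
When running behind the proxy rather than the Python Router, the equivalent setup would live in config.yaml. A sketch, assuming optional_pre_call_checks is accepted under router_settings:

Proxy configuration with session continuity (sketch)
model_list:
  - model_name: azure-gpt4-turbo
    litellm_params:
      model: azure/gpt-4-turbo
      api_key: os.environ/AZURE_API_KEY_1
      api_version: "2024-06-01"
      api_base: https://endpoint1.openai.azure.com
  - model_name: azure-gpt4-turbo
    litellm_params:
      model: azure/gpt-4-turbo
      api_key: os.environ/AZURE_API_KEY_2
      api_version: "2024-06-01"
      api_base: https://endpoint2.openai.azure.com

router_settings:
  optional_pre_call_checks: ["responses_api_deployment_check"]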

Session management - non-OpenAI models

The LiteLLM proxy supports session management for non-OpenAI models, letting you store and fetch conversation history (state) in the LiteLLM proxy.

Usage

1. Enable storing request/response content in the database

Set store_prompts_in_spend_logs: true in your proxy config.yaml. Once this is enabled, LiteLLM stores request and response content in the database.

general_settings:
  store_prompts_in_spend_logs: true

2. Make request 1 without a previous_response_id (new session)

Start a new conversation by not specifying a previous response ID.

curl http://localhost:4000/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "anthropic/claude-3-5-sonnet-latest",
    "input": "who is Michael Jordan"
  }'

Response

{
  "id": "resp_123abc",
  "model": "claude-3-5-sonnet-20241022",
  "output": [{
    "type": "message",
    "content": [{
      "type": "output_text",
      "text": "Michael Jordan is widely considered one of the greatest basketball players of all time. He played for the Chicago Bulls (1984-1993, 1995-1998) and Washington Wizards (2001-2003), winning 6 NBA Championships with the Bulls."
    }]
  }]
}
3. Make request 2 with the previous_response_id (same session)

Continue the conversation by referencing the previous response ID to maintain conversational context.

curl http://localhost:4000/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "anthropic/claude-3-5-sonnet-latest",
    "input": "can you tell me more about him",
    "previous_response_id": "resp_123abc"
  }'

Response

{
  "id": "resp_456def",
  "model": "claude-3-5-sonnet-20241022",
  "output": [{
    "type": "message",
    "content": [{
      "type": "output_text",
      "text": "Michael Jordan was born February 17, 1963. He attended University of North Carolina before being drafted 3rd overall by the Bulls in 1984. Beyond basketball, he built the Air Jordan brand with Nike and later became owner of the Charlotte Hornets."
    }]
  }]
}
4. Make request 3 without a previous_response_id (new session)

Start a brand-new conversation without referencing any previous context, demonstrating that context does not carry over between sessions.

curl http://localhost:4000/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "anthropic/claude-3-5-sonnet-latest",
    "input": "can you tell me more about him"
  }'

Response

{
  "id": "resp_789ghi",
  "model": "claude-3-5-sonnet-20241022",
  "output": [{
    "type": "message",
    "content": [{
      "type": "output_text",
      "text": "I don't see who you're referring to in our conversation. Could you let me know which person you'd like to learn more about?"
    }]
  }]
}
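
The same session flow can also be driven from the OpenAI SDK pointed at the proxy, since client.responses.create accepts a previous_response_id parameter. A minimal sketch mirroring requests 1 and 2 above:

Session continuation with the OpenAI SDK (sketch)
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",  # Your proxy URL
    api_key="sk-1234"                  # Your proxy API key
)

# Request 1: new session (no previous_response_id)
first = client.responses.create(
    model="anthropic/claude-3-5-sonnet-latest",
    input="who is Michael Jordan"
)

# Request 2: same session, continues from the stored response
follow_up = client.responses.create(
    model="anthropic/claude-3-5-sonnet-latest",
    input="can you tell me more about him",
    previous_response_id=first.id
)

print(follow_up)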