
Clientside LLM Credentials

Pass User LLM API Keys, Fallbacks

Allow your end users to pass in their model list, API base, and OpenAI API key (for any LiteLLM-supported provider) to make requests.

Note: This is unrelated to virtual keys. This is for when you want to pass in your user's actual LLM API key.

Info

You can pass a litellm.RouterConfig as user_config. See all supported params here: https://github.com/BerriAI/litellm/blob/main/litellm/types/router.py
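Since user_config travels as plain JSON in the request body, its shape can be sketched with the stdlib alone. A minimal sketch (the model entry and key value here are illustrative; field names follow the RouterConfig type linked above):

```python
import json

# Minimal user_config: one model entry plus a router-level setting.
user_config = {
    "model_list": [
        {
            "model_name": "user-openai-instance",
            "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "sk-user-key"},
        }
    ],
    "num_retries": 2,
}

# The OpenAI SDK's extra_body merges this dict straight into the JSON payload
# sent to the proxy.
payload = {
    "model": "user-openai-instance",
    "messages": [{"role": "user", "content": "hello"}],
    "user_config": user_config,
}
print(json.dumps(payload, indent=2))
```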

Step 1: Define the user's model list & config

import os

user_config = {
    'model_list': [
        {
            'model_name': 'user-azure-instance',
            'litellm_params': {
                'model': 'azure/chatgpt-v-2',
                'api_key': os.getenv('AZURE_API_KEY'),
                'api_version': os.getenv('AZURE_API_VERSION'),
                'api_base': os.getenv('AZURE_API_BASE'),
                'timeout': 10,
            },
            'tpm': 240000,
            'rpm': 1800,
        },
        {
            'model_name': 'user-openai-instance',
            'litellm_params': {
                'model': 'gpt-3.5-turbo',
                'api_key': os.getenv('OPENAI_API_KEY'),
                'timeout': 10,
            },
            'tpm': 240000,
            'rpm': 1800,
        },
    ],
    'num_retries': 2,
    'allowed_fails': 3,
    'fallbacks': [
        {
            'user-azure-instance': ['user-openai-instance']
        }
    ]
}


Step 2: Send user_config in extra_body

import openai

client = openai.OpenAI(
    api_key="sk-1234",
    base_url="http://0.0.0.0:4000"
)

# send request to `user-azure-instance`
response = client.chat.completions.create(
    model="user-azure-instance",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem"
        }
    ],
    extra_body={
        "user_config": user_config  # 👈 User config
    }
)

print(response)

Step 1: Define the user's model list & config

// process.env is a Node global - no require needed
const userConfig = {
  model_list: [
    {
      model_name: 'user-azure-instance',
      litellm_params: {
        model: 'azure/chatgpt-v-2',
        api_key: process.env.AZURE_API_KEY,
        api_version: process.env.AZURE_API_VERSION,
        api_base: process.env.AZURE_API_BASE,
        timeout: 10,
      },
      tpm: 240000,
      rpm: 1800,
    },
    {
      model_name: 'user-openai-instance',
      litellm_params: {
        model: 'gpt-3.5-turbo',
        api_key: process.env.OPENAI_API_KEY,
        timeout: 10,
      },
      tpm: 240000,
      rpm: 1800,
    },
  ],
  num_retries: 2,
  allowed_fails: 3,
  fallbacks: [
    {
      'user-azure-instance': ['user-openai-instance']
    }
  ]
};

Step 2: Send user_config as a param to openai.chat.completions.create

const { OpenAI } = require('openai');

const openai = new OpenAI({
  apiKey: "sk-1234",
  baseURL: "http://0.0.0.0:4000"
});

async function main() {
  const chatCompletion = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gpt-3.5-turbo',
    user_config: userConfig // 👈 User config
  });
}

main();

Pass User LLM API Keys / API Base

Allow your users to pass in their OpenAI API key / API base (for any LiteLLM-supported provider) to make requests.

Here's how to do it:

1. Enable configurable clientside auth credentials for a provider

model_list:
  - model_name: "fireworks_ai/*"
    litellm_params:
      model: "fireworks_ai/*"
      configurable_clientside_auth_params: ["api_base"]
      # OR
      configurable_clientside_auth_params: [{"api_base": "^https://litellm.*direct\.fireworks\.ai/v1$"}] # 👈 regex
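The regex form above can be sanity-checked locally before deploying. A small sketch with Python's re module, assuming the proxy applies the pattern as an anchored match (the URLs are illustrative):

```python
import re

# Pattern copied from configurable_clientside_auth_params above.
pattern = r"^https://litellm.*direct\.fireworks\.ai/v1$"

allowed = "https://litellm-dev.direct.fireworks.ai/v1"
blocked = "https://evil.example.com/v1"

print(bool(re.match(pattern, allowed)))  # True
print(bool(re.match(pattern, blocked)))  # False
```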

Specify any/all auth params you want users to be able to configure:

  • api_base (✅ regex supported)
  • api_key
  • base_url

(Check the provider docs for provider-specific auth params - e.g. vertex_project)
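For example, a config allowing a provider-specific auth param might look like the following. This is a hypothetical sketch, assuming vertex_project is the relevant auth param for Vertex AI (check the provider docs to confirm):

```yaml
model_list:
  - model_name: "vertex-models/*"
    litellm_params:
      model: "vertex_ai/*"
      configurable_clientside_auth_params: ["api_base", "vertex_project"]
```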

2. Test it!

import openai

client = openai.OpenAI(
    api_key="sk-1234",
    base_url="http://0.0.0.0:4000"
)

# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem"
        }
    ],
    extra_body={
        "api_key": "my-bad-key",
        "api_base": "https://litellm-dev.direct.fireworks.ai/v1"
    }  # 👈 clientside credentials
)

print(response)

More examples

Pass litellm_params (e.g. api_key, api_base, etc.) via the OpenAI client's extra_body parameter.

import openai

client = openai.OpenAI(
    api_key="sk-1234",
    base_url="http://0.0.0.0:4000"
)

# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem"
        }
    ],
    extra_body={
        "api_key": "my-azure-key",
        "api_base": "my-azure-base",
        "api_version": "my-azure-version"
    }  # 👈 User Key
)

print(response)

For JS, the OpenAI client accepts params passed directly in the body of create(..).

const { OpenAI } = require('openai');

const openai = new OpenAI({
  apiKey: "sk-1234",
  baseURL: "http://0.0.0.0:4000"
});

async function main() {
  const chatCompletion = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gpt-3.5-turbo',
    api_key: "my-bad-key" // 👈 User Key
  });
}

main();

Pass provider-specific params (e.g. region, project id, etc.)

Specify the region, project id, etc. to use when making clientside requests to Vertex AI.

Any value passed in the proxy request body is checked by LiteLLM against its mapped openai / litellm auth params.

Unmapped params are treated as provider-specific and passed through to the provider in the LLM API request body.
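The mapped/unmapped split described above can be pictured as a simple partition. A hedged sketch (the param set here is a small illustrative subset, not LiteLLM's actual mapping):

```python
# Illustrative subset of mapped auth params; LiteLLM's real mapping is larger.
MAPPED_AUTH_PARAMS = {"api_key", "api_base", "api_version"}

def split_params(extra_body: dict) -> tuple[dict, dict]:
    """Separate mapped auth params from provider-specific passthrough params."""
    auth = {k: v for k, v in extra_body.items() if k in MAPPED_AUTH_PARAMS}
    provider = {k: v for k, v in extra_body.items() if k not in MAPPED_AUTH_PARAMS}
    return auth, provider

auth, provider = split_params({"api_key": "sk-...", "vertex_ai_location": "us-east1"})
print(auth)      # {'api_key': 'sk-...'}
print(provider)  # {'vertex_ai_location': 'us-east1'}
```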

import openai

client = openai.OpenAI(
    api_key="anything",
    base_url="http://0.0.0.0:4000"
)

# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem"
        }
    ],
    extra_body={  # pass any additional litellm_params here
        "vertex_ai_location": "us-east1"
    }
)

print(response)