
Modify / Reject Incoming Requests

  • Modify data before making the LLM API call on the proxy
  • Reject data before making the LLM API call / before returning the response
  • Enforce the 'user' param on all OpenAI endpoint calls
Tip

Understanding callback hooks? Check out our Callback Management Guide to understand the difference between proxy-specific hooks like async_pre_call_hook and generic logging hooks like async_log_success_event.

Which hook should I use?

| Hook | Use case | When it runs |
|------|----------|--------------|
| async_pre_call_hook | Modify the incoming request before it is sent to the model | Before the LLM API call |
| async_moderation_hook | Run input checks in parallel with the LLM API call | In parallel with the LLM API call |
| async_post_call_success_hook | Modify the outgoing response (non-streaming) | After a successful LLM API call, for non-streaming responses |
| async_post_call_failure_hook | Transform error responses sent to clients | After a failed LLM API call |
| async_post_call_streaming_hook | Modify the outgoing response (streaming) | After a successful LLM API call, for streaming responses |
| async_post_call_response_headers_hook | Inject custom HTTP response headers | After the LLM API call (success and failure) |

See our Parallel Request Rate Limiter for a complete example.
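To make the lifecycle in the table concrete, here is a pure-asyncio sketch of the order in which a proxy invokes these hooks. The driver function and hook names below are illustrative stand-ins, not LiteLLM internals; the key point is that the moderation hook runs concurrently with the model call, while the pre/post hooks bracket it.

```python
import asyncio

calls = []  # record of hook invocations, in order

async def pre_call_hook(data):           # before the LLM API call
    calls.append("async_pre_call_hook")
    return data

async def moderation_hook(data):         # runs in parallel with the LLM call
    calls.append("async_moderation_hook")

async def llm_api_call(data):            # stand-in for the actual model call
    calls.append("llm_api_call")
    return {"choices": []}

async def post_call_success_hook(resp):  # after a successful LLM call
    calls.append("async_post_call_success_hook")

async def handle_request(data):
    data = await pre_call_hook(data)
    # moderation runs concurrently with the actual LLM API call
    resp, _ = await asyncio.gather(llm_api_call(data), moderation_hook(data))
    await post_call_success_hook(resp)
    return resp

asyncio.run(handle_request({"model": "gpt-3.5-turbo"}))
```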

Quickstart

  1. Add a new async_pre_call_hook function to your custom handler

This function is called before the litellm completion call is made, and lets you modify the data passed into the litellm call. See Code

from litellm.integrations.custom_logger import CustomLogger
import litellm
from fastapi import HTTPException
from litellm.proxy.proxy_server import UserAPIKeyAuth, DualCache
from litellm.types.utils import ModelResponseStream
from typing import Any, AsyncGenerator, Dict, Literal, Optional

# This file includes the custom callbacks for LiteLLM Proxy
# Once defined, these can be passed in proxy_config.yaml
class MyCustomHandler(CustomLogger): # https://docs.litellm.com.cn/docs/observability/custom_callback#callback-class
    # Class variables or attributes
    def __init__(self):
        pass

    #### CALL HOOKS - proxy only ####

    async def async_pre_call_hook(self, user_api_key_dict: UserAPIKeyAuth, cache: DualCache, data: dict, call_type: Literal[
            "completion",
            "text_completion",
            "embeddings",
            "image_generation",
            "moderation",
            "audio_transcription",
        ]):
        data["model"] = "my-new-model"
        return data

    async def async_post_call_failure_hook(
        self,
        request_data: dict,
        original_exception: Exception,
        user_api_key_dict: UserAPIKeyAuth,
        traceback_str: Optional[str] = None,
    ) -> Optional[HTTPException]:
        """
        Transform error responses sent to clients.

        Return an HTTPException to replace the original error with a user-friendly message.
        Return None to use the original exception.

        Example:
            if isinstance(original_exception, litellm.ContextWindowExceededError):
                return HTTPException(
                    status_code=400,
                    detail="Your prompt is too long. Please reduce the length and try again."
                )
            return None  # Use original exception
        """
        pass

    async def async_post_call_success_hook(
        self,
        data: dict,
        user_api_key_dict: UserAPIKeyAuth,
        response,
    ):
        pass

    async def async_moderation_hook(  # call made in parallel to llm api call
        self,
        data: dict,
        user_api_key_dict: UserAPIKeyAuth,
        call_type: Literal["completion", "embeddings", "image_generation", "moderation", "audio_transcription"],
    ):
        pass

    async def async_post_call_streaming_hook(
        self,
        user_api_key_dict: UserAPIKeyAuth,
        response: str,
    ):
        pass

    async def async_post_call_streaming_iterator_hook(
        self,
        user_api_key_dict: UserAPIKeyAuth,
        response: Any,
        request_data: dict,
    ) -> AsyncGenerator[ModelResponseStream, None]:
        """
        Passes the entire stream to the guardrail

        This is useful for plugins that need to see the entire stream.
        """
        async for item in response:
            yield item

    async def async_post_call_response_headers_hook(
        self,
        data: dict,
        user_api_key_dict: UserAPIKeyAuth,
        response: Any,
        request_headers: Optional[Dict[str, str]] = None,
    ) -> Optional[Dict[str, str]]:
        """
        Inject custom headers into HTTP response (runs for both success and failure).
        """
        return {"x-custom-header": "custom-value"}

proxy_handler_instance = MyCustomHandler()
  2. Add this file to your proxy config
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo

litellm_settings:
  callbacks: custom_callbacks.proxy_handler_instance # sets litellm.callbacks = [proxy_handler_instance]
  3. Start the server + test the request
$ litellm /path/to/config.yaml

curl --location 'http://0.0.0.0:4000/chat/completions' \
--data '{
    "model": "gpt-3.5-turbo",
    "messages": [
        {
            "role": "user",
            "content": "good morning good sir"
        }
    ],
    "user": "ishaan-app",
    "temperature": 0.2
}'

[BETA] NEW async_moderation_hook

Run moderation checks in parallel with the actual LLM API call.

Add a new async_moderation_hook function to your custom handler

  • Currently only supported for /chat/completion calls.
  • This function runs in parallel with the actual LLM API call.
  • If your async_moderation_hook raises an exception, we return it to the user.
Info

We may need to update the function schema in the future to support multiple endpoints (e.g. accept a call_type). Please keep this in mind if you use this feature.

See our Llama Guard content moderation hook for a complete example.

from litellm.integrations.custom_logger import CustomLogger
import litellm
from fastapi import HTTPException
from litellm.proxy.proxy_server import UserAPIKeyAuth, DualCache
from typing import Literal

# This file includes the custom callbacks for LiteLLM Proxy
# Once defined, these can be passed in proxy_config.yaml
class MyCustomHandler(CustomLogger): # https://docs.litellm.com.cn/docs/observability/custom_callback#callback-class
    # Class variables or attributes
    def __init__(self):
        pass

    #### ASYNC ####

    async def async_log_pre_api_call(self, model, messages, kwargs):
        pass

    async def async_log_success_event(self, kwargs, response_obj, start_time, end_time):
        pass

    async def async_log_failure_event(self, kwargs, response_obj, start_time, end_time):
        pass

    #### CALL HOOKS - proxy only ####

    async def async_pre_call_hook(self, user_api_key_dict: UserAPIKeyAuth, cache: DualCache, data: dict, call_type: Literal["completion", "embeddings"]):
        data["model"] = "my-new-model"
        return data

    async def async_moderation_hook( ### 👈 KEY CHANGE ###
        self,
        data: dict,
    ):
        messages = data["messages"]
        print(messages)
        if messages[0]["content"] == "hello world":
            raise HTTPException(
                status_code=400, detail={"error": "Violated content safety policy"}
            )

proxy_handler_instance = MyCustomHandler()
  2. Add this file to your proxy config
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo

litellm_settings:
  callbacks: custom_callbacks.proxy_handler_instance # sets litellm.callbacks = [proxy_handler_instance]
  3. Start the server + test the request
$ litellm /path/to/config.yaml

curl --location 'http://0.0.0.0:4000/chat/completions' \
--data '{
    "model": "gpt-3.5-turbo",
    "messages": [
        {
            "role": "user",
            "content": "Hello world"
        }
    ]
}'

Advanced - Enforce 'user' param

Set enforce_user_param to true to require all calls to OpenAI endpoints to include the 'user' param.

See Code

general_settings:
  enforce_user_param: True

Result
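Conceptually, with this setting the proxy rejects any request whose body lacks a 'user' field before it reaches the model. A minimal stand-in sketch of that check (the function name and error class below are illustrative, not LiteLLM internals, which return a 400 via the HTTP layer):

```python
class UserParamError(Exception):
    """Illustrative stand-in for the proxy's 400 'user param required' error."""

def enforce_user_param(request_body: dict) -> dict:
    # Reject request bodies that lack a 'user' field
    if "user" not in request_body:
        raise UserParamError("'user' param is required for all LLM API calls")
    return request_body

# A request carrying 'user' passes through unchanged
ok = enforce_user_param({"model": "gpt-3.5-turbo", "user": "ishaan-app"})
```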

Advanced - Return rejected message as response

For chat completions and text completion calls, you can return a rejected message as the user's response.

Do this by returning a string. LiteLLM takes care of returning the response in the correct format, depending on the endpoint and whether it's streaming/non-streaming.

For non-chat/text completion endpoints, this response is returned as a 400 status code exception.

1. Create a custom handler

from litellm.integrations.custom_logger import CustomLogger
import litellm
from litellm.proxy.proxy_server import UserAPIKeyAuth, DualCache
from litellm.utils import get_formatted_prompt
from typing import Literal, Union

# This file includes the custom callbacks for LiteLLM Proxy
# Once defined, these can be passed in proxy_config.yaml
class MyCustomHandler(CustomLogger):
    def __init__(self):
        pass

    #### CALL HOOKS - proxy only ####

    async def async_pre_call_hook(self, user_api_key_dict: UserAPIKeyAuth, cache: DualCache, data: dict, call_type: Literal[
            "completion",
            "text_completion",
            "embeddings",
            "image_generation",
            "moderation",
            "audio_transcription",
        ]) -> Union[dict, str, Exception]:
        formatted_prompt = get_formatted_prompt(data=data, call_type=call_type)

        if "Hello world" in formatted_prompt:
            return "This is an invalid response"

        return data

proxy_handler_instance = MyCustomHandler()

2. Update config.yaml

model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo

litellm_settings:
  callbacks: custom_callbacks.proxy_handler_instance # sets litellm.callbacks = [proxy_handler_instance]

3. Test it!

$ litellm /path/to/config.yaml

curl --location 'http://0.0.0.0:4000/chat/completions' \
--data '{
    "model": "gpt-3.5-turbo",
    "messages": [
        {
            "role": "user",
            "content": "Hello world"
        }
    ]
}'

Expected response

{
    "id": "chatcmpl-d00bbede-2d90-4618-bf7b-11a1c23cf360",
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "message": {
                "content": "This is an invalid response.", # 👈 REJECTED RESPONSE
                "role": "assistant"
            }
        }
    ],
    "created": 1716234198,
    "model": null,
    "object": "chat.completion",
    "system_fingerprint": null,
    "usage": {}
}

Advanced - Transform error responses

Use async_post_call_failure_hook to transform technical API errors into user-friendly messages. Return an HTTPException to replace the original error, or None to use the original exception.

from litellm.integrations.custom_logger import CustomLogger
from litellm.proxy.proxy_server import UserAPIKeyAuth
from fastapi import HTTPException
from typing import Optional
import litellm

class MyErrorTransformer(CustomLogger):
    async def async_post_call_failure_hook(
        self,
        request_data: dict,
        original_exception: Exception,
        user_api_key_dict: UserAPIKeyAuth,
        traceback_str: Optional[str] = None,
    ) -> Optional[HTTPException]:
        if isinstance(original_exception, litellm.ContextWindowExceededError):
            return HTTPException(
                status_code=400,
                detail="Your prompt is too long. Please reduce the length and try again."
            )
        if isinstance(original_exception, litellm.RateLimitError):
            return HTTPException(
                status_code=429,
                detail="Rate limit exceeded. Please try again in a moment."
            )
        return None  # Use original exception

proxy_handler_instance = MyErrorTransformer()

Result: the client receives "Your prompt is too long..." instead of "ContextWindowExceededError: Prompt exceeds context window"
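The same mapping pattern, sketched with stand-in exception classes so it runs without litellm or fastapi (the class names mirror litellm's exceptions but are local definitions, and the (status, detail) tuple stands in for an HTTPException):

```python
from typing import Optional, Tuple

class ContextWindowExceededError(Exception):
    """Local stand-in for litellm.ContextWindowExceededError."""

class RateLimitError(Exception):
    """Local stand-in for litellm.RateLimitError."""

def transform_error(exc: Exception) -> Optional[Tuple[int, str]]:
    # Map known provider errors to (status_code, user-friendly detail)
    if isinstance(exc, ContextWindowExceededError):
        return (400, "Your prompt is too long. Please reduce the length and try again.")
    if isinstance(exc, RateLimitError):
        return (429, "Rate limit exceeded. Please try again in a moment.")
    return None  # keep the original exception

status, detail = transform_error(ContextWindowExceededError())
```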

Advanced - Inject custom HTTP response headers

Use async_post_call_response_headers_hook to inject custom HTTP headers into the response. This hook runs for both successful and failed LLM API calls.

from litellm.integrations.custom_logger import CustomLogger
from litellm.proxy.proxy_server import UserAPIKeyAuth
from typing import Any, Dict, Optional

class CustomHeaderLogger(CustomLogger):
    def __init__(self):
        super().__init__()

    async def async_post_call_response_headers_hook(
        self,
        data: dict,
        user_api_key_dict: UserAPIKeyAuth,
        response: Any,
        request_headers: Optional[Dict[str, str]] = None,
    ) -> Optional[Dict[str, str]]:
        """
        Inject custom headers into all responses (success and failure).
        """
        return {"x-custom-header": "custom-value"}

proxy_handler_instance = CustomHeaderLogger()
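Conceptually, the proxy merges the dict returned by this hook into the outgoing HTTP response headers. A minimal stand-in of that merge step (the function below is illustrative, not LiteLLM's internal implementation):

```python
from typing import Dict, Optional

def apply_hook_headers(
    base_headers: Dict[str, str],
    hook_result: Optional[Dict[str, str]],
) -> Dict[str, str]:
    # Overlay hook-provided headers onto the base response headers;
    # a None result from the hook leaves the headers unchanged
    merged = dict(base_headers)
    if hook_result:
        merged.update(hook_result)
    return merged

headers = apply_hook_headers(
    {"content-type": "application/json"},
    {"x-custom-header": "custom-value"},
)
```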