
Lakera AI

Quick Start

1. Define guardrails in your LiteLLM config.yaml

Define your guardrails under the guardrails section

litellm config.yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: openai/gpt-3.5-turbo
      api_key: os.environ/OPENAI_API_KEY

guardrails:
  - guardrail_name: "lakera-guard"
    litellm_params:
      guardrail: lakera_v2  # supported values: "aporia", "bedrock", "lakera"
      mode: "during_call"
      api_key: os.environ/LAKERA_API_KEY
      api_base: os.environ/LAKERA_API_BASE
  - guardrail_name: "lakera-pre-guard"
    litellm_params:
      guardrail: lakera_v2  # supported values: "aporia", "bedrock", "lakera"
      mode: "pre_call"
      api_key: os.environ/LAKERA_API_KEY
      api_base: os.environ/LAKERA_API_BASE

Supported values for mode

  • pre_call runs before the LLM call, on the input
  • post_call runs after the LLM call, on both input and output
  • during_call runs during the LLM call, on the input. Same as pre_call, but runs in parallel with the LLM call. The response is not returned until the guardrail check completes.
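For instance, a guardrail entry that also screens the model's output would set mode to "post_call". This fragment is a sketch only; the guardrail name "lakera-post-guard" is illustrative, not from the docs above:

```yaml
guardrails:
  - guardrail_name: "lakera-post-guard"   # hypothetical name
    litellm_params:
      guardrail: lakera_v2
      mode: "post_call"                   # checks both the input and the LLM's output
      api_key: os.environ/LAKERA_API_KEY
      api_base: os.environ/LAKERA_API_BASE
```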

2. Start the LiteLLM Gateway

litellm --config config.yaml --detailed_debug

3. Test the request

Langchain, OpenAI SDK usage examples

Expect this call to fail, since ishaan@berri.ai in the request is PII

Curl Request
curl -i http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-npnwjPQciVRok5yNZgKmFQ" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "user", "content": "hi my email is ishaan@berri.ai"}
    ],
    "guardrails": ["lakera-guard"]
  }'

Expected response on failure

{
  "error": {
    "message": {
      "error": "Violated content safety policy",
      "lakera_ai_response": {
        "model": "lakera-guard-1",
        "results": [
          {
            "categories": {
              "prompt_injection": true,
              "jailbreak": false
            },
            "category_scores": {
              "prompt_injection": 0.999,
              "jailbreak": 0.0
            },
            "flagged": true,
            "payload": {}
          }
        ],
        "dev_info": {
          "git_revision": "cb163444",
          "git_timestamp": "2024-08-19T16:00:28+02:00",
          "version": "1.3.53"
        }
      }
    },
    "type": "None",
    "param": "None",
    "code": "400"
  }
}
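On the client side, a rejection like this can be inspected to see which guardrail categories triggered the block. A minimal sketch (the function name is illustrative, and the error body is abbreviated to the fields inspected here):

```python
import json

# Abbreviated form of the error body the proxy returns when a guardrail
# blocks a request (see the expected response above).
error_body = """
{
  "error": {
    "message": {
      "error": "Violated content safety policy",
      "lakera_ai_response": {
        "results": [
          {"categories": {"prompt_injection": true, "jailbreak": false},
           "flagged": true}
        ]
      }
    },
    "code": "400"
  }
}
"""

def flagged_categories(body: str) -> list[str]:
    """Return the guardrail categories that caused the request to be blocked."""
    msg = json.loads(body)["error"]["message"]
    cats = []
    for result in msg["lakera_ai_response"]["results"]:
        if result.get("flagged"):
            cats += [name for name, hit in result["categories"].items() if hit]
    return cats

print(flagged_categories(error_body))  # → ['prompt_injection']
```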