
Caching

Note

For OpenAI/Anthropic prompt caching, go here.

Cache LLM responses. LiteLLM's caching system stores and reuses LLM responses to save costs and reduce latency. When you make the same request twice, the cached response is returned instead of calling the LLM API again.

Supported Caches

  • In-Memory Cache
  • Disk Cache
  • Redis Cache
  • Qdrant Semantic Cache
  • Redis Semantic Cache
  • S3 Bucket Cache

Quick Start

Caching can be enabled by adding the `cache` key to the config.yaml.

Step 1: Add `cache` to the config.yaml

```yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo
  - model_name: text-embedding-ada-002
    litellm_params:
      model: text-embedding-ada-002

litellm_settings:
  set_verbose: True
  cache: True # set cache responses to True, litellm defaults to using a redis cache
```

[OPTIONAL] Step 1.5: Add Redis namespace, default TTL

Namespace

If you want to create a folder for your keys, you can set a namespace, like this:

```yaml
litellm_settings:
  cache: true
  cache_params: # set cache params for redis
    type: redis
    namespace: "litellm.caching.caching"
```

and keys will be stored like:

```
litellm.caching.caching:<hash>
```
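The key layout above can be sketched in Python. This is only an illustration of the `<namespace>:<hash>` shape; the exact hash LiteLLM computes over the request is an implementation detail, and `sha256` here is just a stand-in:

```python
import hashlib

def namespaced_cache_key(namespace: str, request_repr: str) -> str:
    """Illustrative sketch of the <namespace>:<hash> layout above.
    sha256 over a canonical request string stands in for LiteLLM's
    real request hashing, which is an implementation detail."""
    digest = hashlib.sha256(request_repr.encode("utf-8")).hexdigest()
    return f"{namespace}:{digest}"

key = namespaced_cache_key("litellm.caching.caching", "gpt-3.5-turbo|hello")
print(key)  # litellm.caching.caching:<64-char hex digest>
```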

Redis Cluster

```yaml
model_list:
  - model_name: "*"
    litellm_params:
      model: "*"

litellm_settings:
  cache: True
  cache_params:
    type: redis
    redis_startup_nodes: [{"host": "127.0.0.1", "port": "7001"}]
```

Redis Sentinel

```yaml
model_list:
  - model_name: "*"
    litellm_params:
      model: "*"

litellm_settings:
  cache: true
  cache_params:
    type: "redis"
    service_name: "mymaster"
    sentinel_nodes: [["localhost", 26379]]
    sentinel_password: "password" # [OPTIONAL]
```

TTL

```yaml
litellm_settings:
  cache: true
  cache_params: # set cache params for redis
    type: redis
    ttl: 600 # will be cached on redis for 600s
    # default_in_memory_ttl: Optional[float], default is None. time in seconds.
    # default_in_redis_ttl: Optional[float], default is None. time in seconds.
```

SSL

Just set `REDIS_SSL="True"` in your .env, and LiteLLM will pick it up.

```shell
REDIS_SSL="True"
```

For quick testing, you can also use `REDIS_URL`, e.g.:

```shell
REDIS_URL="rediss://.."
```

However, we recommend against using `REDIS_URL` in production. We've noticed a performance difference between connecting via `REDIS_URL` and via `redis_host`, `port`, etc.

Step 2: Add Redis credentials to .env

Set either `REDIS_URL` or `REDIS_HOST` in your OS environment to enable caching.

```shell
REDIS_URL = ""        # REDIS_URL='redis://username:password@hostname:port/database'
## OR ##
REDIS_HOST = "" # REDIS_HOST='redis-18841.c274.us-east-1-3.ec2.cloud.redislabs.com'
REDIS_PORT = "" # REDIS_PORT='18841'
REDIS_PASSWORD = "" # REDIS_PASSWORD='liteLlmIsAmazing'
```

Additional kwargs

You can pass any additional `redis.Redis` argument by storing the variable + value in your OS environment, like this:

```shell
REDIS_<redis-kwarg-name> = ""
```

See how it's read from the environment
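A minimal sketch of this convention (an illustrative helper, not LiteLLM's actual parsing code): collect every `REDIS_<kwarg>` variable and lower-case the suffix to get the `redis.Redis` keyword argument:

```python
import os

def redis_kwargs_from_env(environ=None):
    """Map REDIS_<kwarg> environment variables to redis.Redis keyword
    arguments. Illustrative sketch of the convention above, not
    LiteLLM's actual implementation."""
    environ = os.environ if environ is None else environ
    skip = {"REDIS_URL"}  # the connection URL is handled separately
    kwargs = {}
    for name, value in environ.items():
        if name.startswith("REDIS_") and name not in skip:
            kwargs[name[len("REDIS_"):].lower()] = value
    return kwargs

env = {"REDIS_HOST": "localhost", "REDIS_PORT": "6379", "REDIS_SOCKET_TIMEOUT": "5"}
print(redis_kwargs_from_env(env))
# {'host': 'localhost', 'port': '6379', 'socket_timeout': '5'}
```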

Step 3: Run the proxy with the config

```shell
$ litellm --config /path/to/config.yaml
```

Usage

Basic

Send the same request twice:

```shell
curl http://0.0.0.0:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "write a poem about litellm!"}],
    "temperature": 0.7
  }'
```

```shell
curl http://0.0.0.0:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "write a poem about litellm!"}],
    "temperature": 0.7
  }'
```

Dynamic Cache Controls

| Parameter | Type | Description |
|---|---|---|
| `ttl` | Optional(int) | Will cache the response for the user-defined amount of time (in seconds) |
| `s-maxage` | Optional(int) | Will only accept cached responses that are within the user-defined range (in seconds) |
| `no-cache` | Optional(bool) | Will not return a cached response; the actual endpoint is called instead |
| `no-store` | Optional(bool) | Will not store the response in the cache |
| `namespace` | Optional(str) | Will cache the response under a user-defined namespace |

Each cache parameter can be controlled on a per-request basis. Here are examples for each parameter:

ttl

Set how long (in seconds) the response should be cached.

```python
from openai import OpenAI

client = OpenAI(
    api_key="your-api-key",
    base_url="http://0.0.0.0:4000"
)

chat_completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Hello"}],
    model="gpt-3.5-turbo",
    extra_body={
        "cache": {
            "ttl": 300  # Cache response for 5 minutes
        }
    }
)
```

s-maxage

Only accept cached responses that are within the specified age (in seconds).

```python
from openai import OpenAI

client = OpenAI(
    api_key="your-api-key",
    base_url="http://0.0.0.0:4000"
)

chat_completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Hello"}],
    model="gpt-3.5-turbo",
    extra_body={
        "cache": {
            "s-maxage": 600  # Only use cache if less than 10 minutes old
        }
    }
)
```

no-cache

Force a fresh response, bypassing the cache.

```python
from openai import OpenAI

client = OpenAI(
    api_key="your-api-key",
    base_url="http://0.0.0.0:4000"
)

chat_completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Hello"}],
    model="gpt-3.5-turbo",
    extra_body={
        "cache": {
            "no-cache": True  # Skip cache check, get fresh response
        }
    }
)
```

no-store

Will not store the response in the cache.

```python
from openai import OpenAI

client = OpenAI(
    api_key="your-api-key",
    base_url="http://0.0.0.0:4000"
)

chat_completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Hello"}],
    model="gpt-3.5-turbo",
    extra_body={
        "cache": {
            "no-store": True  # Don't cache this response
        }
    }
)
```

namespace

Store the response under a specific cache namespace.

```python
from openai import OpenAI

client = OpenAI(
    api_key="your-api-key",
    base_url="http://0.0.0.0:4000"
)

chat_completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Hello"}],
    model="gpt-3.5-turbo",
    extra_body={
        "cache": {
            "namespace": "my-custom-namespace"  # Store in custom namespace
        }
    }
)
```
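All of the per-request controls above go in the same `cache` dict inside `extra_body` and can be combined. A small hypothetical helper (not part of LiteLLM) that builds that dict, dropping unset parameters:

```python
def cache_controls(ttl=None, s_maxage=None, no_cache=None, no_store=None, namespace=None):
    """Build the per-request `cache` dict for extra_body.
    Hypothetical convenience wrapper around the parameters documented above."""
    params = {
        "ttl": ttl,
        "s-maxage": s_maxage,
        "no-cache": no_cache,
        "no-store": no_store,
        "namespace": namespace,
    }
    # keep only the parameters the caller actually set
    return {"cache": {k: v for k, v in params.items() if v is not None}}

# e.g. cache for 5 minutes, under a custom namespace:
print(cache_controls(ttl=300, namespace="my-custom-namespace"))
# {'cache': {'ttl': 300, 'namespace': 'my-custom-namespace'}}
```

This can then be passed as `extra_body=cache_controls(ttl=300)` in `client.chat.completions.create(...)`.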

Turn on caching for the proxy, but not for the actual LLM API call

Use this if you just want to enable features like rate limiting and load balancing across multiple instances, etc.

Set `supported_call_types: []` to disable caching on the actual API calls.

```yaml
litellm_settings:
  cache: True
  cache_params:
    type: redis
    supported_call_types: []
```

Debugging Caching - `/cache/ping`

The LiteLLM proxy exposes a `/cache/ping` endpoint to test whether the cache is working as expected.

Usage

```shell
curl --location 'http://0.0.0.0:4000/cache/ping'  -H "Authorization: Bearer sk-1234"
```

Expected Response - when cache is healthy

```json
{
    "status": "healthy",
    "cache_type": "redis",
    "ping_response": true,
    "set_cache_response": "success",
    "litellm_cache_params": {
        "supported_call_types": "['completion', 'acompletion', 'embedding', 'aembedding', 'atranscription', 'transcription']",
        "type": "redis",
        "namespace": "None"
    },
    "redis_cache_params": {
        "redis_client": "Redis<ConnectionPool<Connection<host=redis-16337.c322.us-east-1-2.ec2.cloud.redislabs.com,port=16337,db=0>>>",
        "redis_kwargs": "{'url': 'redis://:******@redis-16337.c322.us-east-1-2.ec2.cloud.redislabs.com:16337'}",
        "async_redis_conn_pool": "BlockingConnectionPool<Connection<host=redis-16337.c322.us-east-1-2.ec2.cloud.redislabs.com,port=16337,db=0>>",
        "redis_version": "7.2.0"
    }
}
```
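If you poll `/cache/ping` from a health check, the fields worth inspecting are `status`, `ping_response`, and `set_cache_response`. A minimal sketch (hypothetical helper, based only on the example response above) that interprets the body:

```python
def cache_is_healthy(ping_body: dict) -> bool:
    """Interpret a /cache/ping response body: healthy status, a
    successful Redis PING, and a successful test cache write.
    Hypothetical helper based on the example response above."""
    return (
        ping_body.get("status") == "healthy"
        and ping_body.get("ping_response") is True
        and ping_body.get("set_cache_response") == "success"
    )
```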

Advanced

Control which call types (`/chat/completion`, `/embeddings`, etc.) caching is on for

By default, caching is on for all call types. You can control which call types caching is enabled for by setting `supported_call_types` in `cache_params`.

Caching will only be enabled for the call types specified in `supported_call_types`.

```yaml
litellm_settings:
  cache: True
  cache_params:
    type: redis
    supported_call_types: ["acompletion", "atext_completion", "aembedding", "atranscription"]
    # /chat/completions, /completions, /embeddings, /audio/transcriptions
```

Set cache params on config.yaml

```yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo
  - model_name: text-embedding-ada-002
    litellm_params:
      model: text-embedding-ada-002

litellm_settings:
  set_verbose: True
  cache: True # set cache responses to True, litellm defaults to using a redis cache
  cache_params: # cache_params are optional
    type: "redis" # The type of cache to initialize. Can be "local" or "redis". Defaults to "local".
    host: "localhost" # The host address for the Redis cache. Required if type is "redis".
    port: 6379 # The port number for the Redis cache. Required if type is "redis".
    password: "your_password" # The password for the Redis cache. Required if type is "redis".

    # Optional configurations
    supported_call_types: ["acompletion", "atext_completion", "aembedding", "atranscription"]
    # /chat/completions, /completions, /embeddings, /audio/transcriptions
```

Deleting Cache Keys - `/cache/delete`

To delete cache keys, send a request to `/cache/delete` with the `keys` you want to delete.

Example:

```shell
curl -X POST "http://0.0.0.0:4000/cache/delete" \
  -H "Authorization: Bearer sk-1234" \
  -d '{"keys": ["586bf3f3c1bf5aecb55bd9996494d3bbc69eb58397163add6d49537762a7548d", "key2"]}'
# {"status":"success"}
```

Viewing Cache Keys from responses

You can view the cache_key in the response headers; on a cache hit, the cache key is sent as the `x-litellm-cache-key` response header.

```shell
curl -i --location 'http://0.0.0.0:4000/chat/completions' \
  --header 'Authorization: Bearer sk-1234' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "gpt-3.5-turbo",
    "user": "ishan",
    "messages": [
        {
            "role": "user",
            "content": "what is litellm"
        }
    ]
  }'
```

Response from the LiteLLM proxy:

```
date: Thu, 04 Apr 2024 17:37:21 GMT
content-type: application/json
x-litellm-cache-key: 586bf3f3c1bf5aecb55bd9996494d3bbc69eb58397163add6d49537762a7548d

{
    "id": "chatcmpl-9ALJTzsBlXR9zTxPvzfFFtFbFtG6T",
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "message": {
                "content": "I'm sorr..",
                "role": "assistant"
            }
        }
    ],
    "created": 1712252235
}
```
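Since header names are case-insensitive, it is worth normalizing before looking up `x-litellm-cache-key`. A hypothetical helper (not part of LiteLLM) that extracts the cache key from a response's headers, returning None on a cache miss when the header is absent; with the OpenAI Python SDK you can reach raw headers via `client.chat.completions.with_raw_response.create(...)`:

```python
def cache_key_from_headers(headers: dict):
    """Return the LiteLLM cache key from response headers, or None if
    the response was not served from cache. Hypothetical helper; the
    header name comes from the docs above."""
    normalized = {name.lower(): value for name, value in headers.items()}
    return normalized.get("x-litellm-cache-key")

headers = {
    "date": "Thu, 04 Apr 2024 17:37:21 GMT",
    "content-type": "application/json",
    "x-litellm-cache-key": "586bf3f3c1bf5aecb55bd9996494d3bbc69eb58397163add6d49537762a7548d",
}
print(cache_key_from_headers(headers))
# 586bf3f3c1bf5aecb55bd9996494d3bbc69eb58397163add6d49537762a7548d
```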

Default Off Caching - Opt In Only

1. Set the cache `mode` to `default_off`:

```yaml
model_list:
  - model_name: fake-openai-endpoint
    litellm_params:
      model: openai/fake
      api_key: fake-key
      api_base: https://exampleopenaiendpoint-production.up.railway.app/

# default off mode
litellm_settings:
  set_verbose: True
  cache: True
  cache_params:
    mode: default_off # 👈 Key change cache is default_off
```

2. Opt in to caching when the cache is default off:
```python
import os
from openai import OpenAI

client = OpenAI(api_key=<litellm-api-key>, base_url="http://0.0.0.0:4000")

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="gpt-3.5-turbo",
    extra_body={  # OpenAI python accepts extra args in extra_body
        "cache": {"use-cache": True}
    }
)
```

Turn on `batch_redis_requests`

What it does? When a request is made:

  • Checks if a key starting with `litellm:<hashed_api_key>:<call_type>:` exists in memory; if not - gets the last 100 cached requests for that key and stores them

  • New requests are stored with this `litellm:..` as the namespace

Why? To reduce the number of Redis GET requests. This improved latency by 46% in production load tests.

Usage

```yaml
litellm_settings:
  cache: true
  cache_params:
    type: redis
    ... # remaining redis args (host, port, etc.)
  callbacks: ["batch_redis_requests"] # 👈 KEY CHANGE!
```

See Code
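The pattern itself is simple: check a local in-memory store first, and on a miss fetch a batch of keys from Redis in one round trip instead of issuing one GET per request. A rough sketch of the idea (not LiteLLM's actual code), with the Redis batch fetch stubbed out as a callable:

```python
class BatchedCacheReader:
    """Sketch of the batch_redis_requests pattern described above:
    local memory first, one batched Redis fetch per namespace on a miss."""

    def __init__(self, fetch_batch):
        # fetch_batch(namespace) -> dict of recent cached entries for that
        # namespace, e.g. implemented with a Redis SCAN + MGET pipeline
        self._fetch_batch = fetch_batch
        self._local = {}
        self._fetched = set()

    def get(self, namespace: str, key: str):
        if namespace not in self._fetched:
            self._local.update(self._fetch_batch(namespace))  # one round trip
            self._fetched.add(namespace)
        return self._local.get(key)

calls = []
def fake_fetch(ns):
    calls.append(ns)
    return {f"{ns}k1": "v1", f"{ns}k2": "v2"}

reader = BatchedCacheReader(fake_fetch)
ns = "litellm:hashedkey:acompletion:"
print(reader.get(ns, f"{ns}k1"), reader.get(ns, f"{ns}k2"), len(calls))
# v1 v2 1  -> both hits served by a single Redis round trip
```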

Supported `cache_params` on proxy config.yaml

```yaml
cache_params:
  # ttl
  ttl: Optional[float]
  default_in_memory_ttl: Optional[float]
  default_in_redis_ttl: Optional[float]

  # Type of cache (options: "local", "redis", "s3")
  type: s3

  # List of litellm call types to cache for
  # Options: "completion", "acompletion", "embedding", "aembedding"
  supported_call_types: ["acompletion", "atext_completion", "aembedding", "atranscription"]
  # /chat/completions, /completions, /embeddings, /audio/transcriptions

  # Redis cache parameters
  host: localhost # Redis server hostname or IP address
  port: "6379" # Redis server port (as a string)
  password: secret_password # Redis server password
  namespace: Optional[str] = None,

  # S3 cache parameters
  s3_bucket_name: your_s3_bucket_name # Name of the S3 bucket
  s3_region_name: us-west-2 # AWS region of the S3 bucket
  s3_api_version: 2006-03-01 # AWS S3 API version
  s3_use_ssl: true # Use SSL for S3 connections (options: true, false)
  s3_verify: true # SSL certificate verification for S3 connections (options: true, false)
  s3_endpoint_url: https://s3.amazonaws.com # S3 endpoint URL
  s3_aws_access_key_id: your_access_key # AWS Access Key ID for S3
  s3_aws_secret_access_key: your_secret_key # AWS Secret Access Key for S3
  s3_aws_session_token: your_session_token # AWS Session Token for temporary credentials
```

Advanced - user api key cache ttl

Configure how long the in-memory cache stores key objects (prevents database requests)

```yaml
general_settings:
  user_api_key_cache_ttl: <your-number> # time in seconds
```

By default, this value is set to 60 seconds.