
Timeouts

The timeout set on the router applies to the entire call, and is also passed down to the completion() call level.

Global Timeouts

from litellm import Router

model_list = [{...}]

router = Router(model_list=model_list,
                timeout=30)  # raise a timeout error if a call takes > 30s

response = router.completion(
    model="gpt-3.5-turbo",  # assumes "gpt-3.5-turbo" is a model_name in model_list
    messages=[{"role": "user", "content": "Hello"}]
)
print(response)
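Conceptually, a router-level timeout behaves like a deadline wrapped around the call: if the call does not finish in time, it is cancelled and a timeout error is raised. A minimal stdlib sketch of that pattern (not LiteLLM's actual implementation) using asyncio.wait_for:

```python
import asyncio

async def slow_call():
    # stands in for a slow LLM API call
    await asyncio.sleep(2)
    return "done"

async def main():
    try:
        # analogous to Router(timeout=...): cancel the call if it exceeds the limit
        result = await asyncio.wait_for(slow_call(), timeout=0.1)
    except asyncio.TimeoutError:
        result = "timed out"
    return result

print(asyncio.run(main()))  # → timed out
```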

Custom Timeouts, Stream Timeouts - Per Model

For each model, you can set timeout and stream_timeout under litellm_params.

from litellm import Router
import asyncio
import os

model_list = [{
    "model_name": "gpt-3.5-turbo",
    "litellm_params": {
        "model": "azure/chatgpt-v-2",
        "api_key": os.getenv("AZURE_API_KEY"),
        "api_version": os.getenv("AZURE_API_VERSION"),
        "api_base": os.getenv("AZURE_API_BASE"),
        "timeout": 300,  # sets a 5 minute timeout
        "stream_timeout": 30,  # sets a 30s timeout for streaming calls
    }
}]

# init router
router = Router(model_list=model_list, routing_strategy="least-busy")

async def router_acompletion():
    response = await router.acompletion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hey, how's it going?"}]
    )
    print(response)
    return response

asyncio.run(router_acompletion())
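The effective timeout for a call follows an override order: a per-model timeout (or stream_timeout, for streaming calls) takes precedence over the router-level default. A hypothetical sketch of that resolution logic, with made-up names (this is not LiteLLM's internal code):

```python
# Hypothetical sketch: per-model settings override a router-level default.
GLOBAL_TIMEOUT = 30  # router-level default, in seconds

model_params = {
    "gpt-3.5-turbo": {"timeout": 300, "stream_timeout": 30},
}

def resolve_timeout(model: str, stream: bool = False) -> int:
    params = model_params.get(model, {})
    if stream and "stream_timeout" in params:
        return params["stream_timeout"]
    return params.get("timeout", GLOBAL_TIMEOUT)

print(resolve_timeout("gpt-3.5-turbo"))               # → 300 (per-model timeout)
print(resolve_timeout("gpt-3.5-turbo", stream=True))  # → 30  (per-model stream_timeout)
print(resolve_timeout("unknown-model"))               # → 30  (router default)
```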

Setting Dynamic Timeouts - Per Request

LiteLLM supports setting a timeout per request.

Example Usage

from litellm import Router

model_list = [{...}]
router = Router(model_list=model_list)

response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "what color is red"}],
    timeout=1
)

Testing timeout handling

To test whether your retry/fallback logic can handle timeouts, you can set mock_timeout=True for testing.

This is currently only supported on the /chat/completions and /completions endpoints. Please let us know if you need this on other endpoints.

curl -L -X POST 'http://0.0.0.0:4000/v1/chat/completions' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer sk-1234' \
--data-raw '{
"model": "gemini/gemini-1.5-flash",
"messages": [
{"role": "user", "content": "hi my email is ishaan@berri.ai"}
],
"mock_timeout": true # 👈 KEY CHANGE
}'
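The retry/fallback logic that a mock timeout exercises typically looks like "catch the timeout, back off, try again, and re-raise once retries are exhausted". A minimal generic sketch of that pattern (hypothetical helper names, not LiteLLM's API):

```python
import time

def call_with_retries(fn, retries=2, backoff=0.01):
    # Hypothetical helper: retry a callable that may raise TimeoutError,
    # with exponential backoff between attempts.
    for attempt in range(retries + 1):
        try:
            return fn()
        except TimeoutError:
            if attempt == retries:
                raise  # retries exhausted, surface the timeout
            time.sleep(backoff * (2 ** attempt))

# Simulated flaky call: times out twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("mock timeout")
    return "ok"

print(call_with_retries(flaky))  # → ok
```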