# Exception Mapping

LiteLLM maps exceptions across all providers to their OpenAI counterparts.

All exceptions can be imported from `litellm` - e.g. `from litellm import BadRequestError`

## LiteLLM Exceptions
| Status Code | Error Type | Inherits from | Description |
|---|---|---|---|
| 400 | BadRequestError | openai.BadRequestError | |
| 400 | UnsupportedParamsError | litellm.BadRequestError | Raised when an unsupported param is passed in |
| 400 | ContextWindowExceededError | litellm.BadRequestError | Special error type for context window exceeded error messages - enables context window fallbacks |
| 400 | ContentPolicyViolationError | litellm.BadRequestError | Special error type for content policy violation error messages - enables content policy fallbacks |
| 400 | ImageFetchError | litellm.BadRequestError | Raised when an error occurs while fetching or processing an image |
| 400 | InvalidRequestError | openai.BadRequestError | Deprecated error, use BadRequestError instead |
| 401 | AuthenticationError | openai.AuthenticationError | |
| 403 | PermissionDeniedError | openai.PermissionDeniedError | |
| 404 | NotFoundError | openai.NotFoundError | Raised when an invalid model is passed, e.g. gpt-8 |
| 408 | Timeout | openai.APITimeoutError | Raised when a request times out |
| 422 | UnprocessableEntityError | openai.UnprocessableEntityError | |
| 429 | RateLimitError | openai.RateLimitError | |
| 500 | APIConnectionError | openai.APIConnectionError | If any unmapped error is returned, we return this error |
| 500 | APIError | openai.APIError | Generic 500-status-code error |
| 503 | ServiceUnavailableError | openai.APIStatusError | Raised if a provider returns a service unavailable error |
| >=500 | InternalServerError | openai.InternalServerError | Raised if any unmapped 500-status-code error is returned |
| N/A | APIResponseValidationError | openai.APIResponseValidationError | Raised if rules are used and the request/response fails a rule |
| N/A | BudgetExceededError | Exception | Raised when the proxy budget is exceeded |
| N/A | JSONSchemaValidationError | litellm.APIResponseValidationError | Raised when the response doesn't match the expected JSON schema - only if enforce_validation=True is set alongside the response_schema param |
| N/A | MockException | Exception | Internal exception raised by the mock_completion class. Do not use directly |
| N/A | OpenAIError | openai.OpenAIError | Deprecated internal exception, inherits from openai.OpenAIError. |
Base case - we return `APIConnectionError`.

All our exceptions inherit from OpenAI's exception types, so any error-handling you have for that should work out of the box with LiteLLM.

In all cases, the raised exception inherits from the original OpenAI exception but contains 3 additional attributes:
- status_code - the http status code of the exception
- message - the error message
- llm_provider - the provider that raised the exception
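Because these three attributes are present on every mapped exception, a provider-agnostic error formatter can rely on them. A minimal sketch - the `format_llm_error` helper and its fallback values are illustrative, not part of litellm:

```python
# Sketch of a provider-agnostic error formatter built on LiteLLM's
# three extra attributes. Illustrative only - not a litellm API.
def format_llm_error(e: Exception) -> str:
    status = getattr(e, "status_code", "n/a")         # http status code
    provider = getattr(e, "llm_provider", "unknown")  # provider that raised it
    message = getattr(e, "message", str(e))           # error message
    return f"[{provider}] {status}: {message}"
```

In practice you would call this inside an `except` block wrapped around `litellm.completion(...)`.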
## Usage

```python
import litellm
import openai

try:
    response = litellm.completion(
        model="gpt-4",
        messages=[
            {
                "role": "user",
                "content": "hello, write a 20 page essay"
            }
        ],
        timeout=0.01,  # this will raise a timeout exception
    )
except openai.APITimeoutError as e:
    print("Passed: Raised correct exception. Got openai.APITimeoutError\nGood Job", e)
    print(type(e))
    pass
```
## Usage - Catching Streaming Exceptions

```python
import litellm
import openai

try:
    response = litellm.completion(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "user",
                "content": "hello, write a 20 pg essay"
            }
        ],
        timeout=0.0001,  # this will raise an exception
        stream=True,
    )
    for chunk in response:
        print(chunk)
except openai.APITimeoutError as e:
    print("Passed: Raised correct exception. Got openai.APITimeoutError\nGood Job", e)
    print(type(e))
    pass
except Exception as e:
    print(f"Did not raise error `openai.APITimeoutError`. Instead raised error type: {type(e)}, Error: {e}")
```
## Usage - Should you retry the exception?

```python
import litellm
import openai

try:
    response = litellm.completion(
        model="gpt-4",
        messages=[
            {
                "role": "user",
                "content": "hello, write a 20 page essay"
            }
        ],
        timeout=0.01,  # this will raise a timeout exception
    )
except openai.APITimeoutError as e:
    should_retry = litellm._should_retry(e.status_code)
    print(f"should_retry: {should_retry}")
```
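`litellm._should_retry` only tells you *whether* a status code is worth retrying; the retry loop itself is up to you. Below is a hand-rolled sketch with exponential backoff - the status-code set and helper names are illustrative conventions, not litellm's actual retry logic:

```python
import time

# Status codes commonly treated as retryable (illustrative set, not
# litellm's internal list).
RETRYABLE_STATUS_CODES = {408, 429, 500, 502, 503, 504}

def call_with_retries(fn, max_retries=3, base_delay=0.5):
    """Call fn(), retrying with exponential backoff on retryable errors."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception as e:
            status = getattr(e, "status_code", None)
            if attempt == max_retries or status not in RETRYABLE_STATUS_CODES:
                raise  # out of retries, or a non-retryable error
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
```

You would pass `lambda: litellm.completion(...)` as `fn`. Note that LiteLLM can also retry for you via the `num_retries` parameter.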
## Advanced

### Accessing Provider-Specific Error Details

LiteLLM exceptions include a `provider_specific_fields` attribute that contains additional error information specific to each provider. This is especially useful for Azure OpenAI, which provides detailed content-filtering information.

#### Azure OpenAI - Content Policy Violation innererror Access

When Azure OpenAI returns a content policy violation, the detailed content-filter results can be accessed via the `innererror` field:
```python
import litellm
from litellm.exceptions import ContentPolicyViolationError

try:
    response = litellm.completion(
        model="azure/gpt-4",
        messages=[
            {
                "role": "user",
                "content": "Some content that might violate policies"
            }
        ]
    )
except ContentPolicyViolationError as e:
    # Access Azure-specific error details
    if e.provider_specific_fields and "innererror" in e.provider_specific_fields:
        innererror = e.provider_specific_fields["innererror"]
        # Access content filter results
        content_filter_result = innererror.get("content_filter_result", {})
        print(f"Content filter code: {innererror.get('code')}")
        print(f"Hate filtered: {content_filter_result.get('hate', {}).get('filtered')}")
        print(f"Violence severity: {content_filter_result.get('violence', {}).get('severity')}")
        print(f"Sexual content filtered: {content_filter_result.get('sexual', {}).get('filtered')}")
```
#### Example Response Structure

When calling the LiteLLM proxy, content policy violations return detailed filtering information:
```json
{
  "error": {
    "message": "litellm.ContentPolicyViolationError: AzureException - The response was filtered due to the prompt triggering Azure OpenAI's content management policy...",
    "type": null,
    "param": null,
    "code": "400",
    "provider_specific_fields": {
      "innererror": {
        "code": "ResponsibleAIPolicyViolation",
        "content_filter_result": {
          "hate": {
            "filtered": true,
            "severity": "high"
          },
          "jailbreak": {
            "filtered": false,
            "detected": false
          },
          "self_harm": {
            "filtered": false,
            "severity": "safe"
          },
          "sexual": {
            "filtered": false,
            "severity": "safe"
          },
          "violence": {
            "filtered": true,
            "severity": "medium"
          }
        }
      }
    }
  }
}
```
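Given a payload like the one above, extracting the flagged categories is plain dictionary traversal. A small helper - hypothetical, not a litellm API; the field names follow the Azure `innererror` shape shown above:

```python
def filtered_categories(innererror: dict) -> list:
    """Return the content-filter categories Azure flagged as filtered."""
    result = innererror.get("content_filter_result", {})
    return sorted(cat for cat, details in result.items() if details.get("filtered"))

# Example payload mirroring the response structure above
innererror = {
    "code": "ResponsibleAIPolicyViolation",
    "content_filter_result": {
        "hate": {"filtered": True, "severity": "high"},
        "jailbreak": {"filtered": False, "detected": False},
        "violence": {"filtered": True, "severity": "medium"},
    },
}
print(filtered_categories(innererror))  # ['hate', 'violence']
```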
## Details
To see how it's implemented - [check out the code](https://github.com/BerriAI/litellm/blob/a42c197e5a6de56ea576c73715e6c7c6b19fa249/litellm/utils.py#L1217)
[Create an issue](https://github.com/BerriAI/litellm/issues/new) **or** [make a PR](https://github.com/BerriAI/litellm/pulls) if you want to improve the exception mapping.
**Note** For OpenAI and Azure we return the original exception (since they're of the OpenAI Error type). But we add the 'llm_provider' attribute to them. [See code](https://github.com/BerriAI/litellm/blob/a42c197e5a6de56ea576c73715e6c7c6b19fa249/litellm/utils.py#L1221)
## Custom mapping list
Base case - we return `litellm.APIConnectionError` exception (inherits from openai's APIConnectionError exception).
| custom_llm_provider | Timeout | ContextWindowExceededError | BadRequestError | NotFoundError | ContentPolicyViolationError | AuthenticationError | APIError | RateLimitError | ServiceUnavailableError | PermissionDeniedError | UnprocessableEntityError |
|----------------------------|---------|----------------------------|------------------|---------------|-----------------------------|---------------------|----------|----------------|-------------------------|-----------------------|-------------------------|
| openai | ✓ | ✓ | ✓ | | ✓ | ✓ | | | | | |
| watsonx | | | | | | | |✓| | | |
| text-completion-openai | ✓ | ✓ | ✓ | | ✓ | ✓ | | | | | |
| custom_openai | ✓ | ✓ | ✓ | | ✓ | ✓ | | | | | |
| openai_compatible_providers| ✓ | ✓ | ✓ | | ✓ | ✓ | | | | | |
| anthropic | ✓ | ✓ | ✓ | ✓ | | ✓ | | | ✓ | ✓ | |
| replicate | ✓ | ✓ | ✓ | ✓ | | ✓ | | ✓ | ✓ | | |
| bedrock | ✓ | ✓ | ✓ | ✓ | | ✓ | | ✓ | ✓ | ✓ | |
| sagemaker | | ✓ | ✓ | | | | | | | | |
| vertex_ai | ✓ | | ✓ | | | | ✓ | | | | ✓ |
| palm | ✓ | ✓ | | | | | ✓ | | | | |
| gemini | ✓ | ✓ | | | | | ✓ | | | | |
| cloudflare | | | ✓ | | | ✓ | | | | | |
| cohere | | ✓ | ✓ | | | ✓ | | | ✓ | | |
| cohere_chat | | ✓ | ✓ | | | ✓ | | | ✓ | | |
| huggingface | ✓ | ✓ | ✓ | | | ✓ | | ✓ | ✓ | | |
| ai21 | ✓ | ✓ | ✓ | ✓ | | ✓ | | ✓ | | | |
| nlp_cloud | ✓ | ✓ | ✓ | | | ✓ | ✓ | ✓ | ✓ | | |
| together_ai | ✓ | ✓ | ✓ | | | ✓ | | | | | |
| aleph_alpha | | | ✓ | | | ✓ | | | | | |
| ollama | ✓ | | ✓ | | | | | | ✓ | | |
| ollama_chat | ✓ | | ✓ | | | | | | ✓ | | |
| vllm | | | | | | ✓ | ✓ | | | | |
| azure | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | | | ✓ | | |
- "✓" indicates that the specified `custom_llm_provider` can raise the corresponding exception.
- Empty cells indicate that the provider does not raise that particular exception type (per the mapping function).
> For a deeper understanding of these exceptions, you can check out [this](https://github.com/BerriAI/litellm/blob/d7e58d13bf9ba9edbab2ab2f096f3de7547f35fa/litellm/utils.py#L1544) implementation for additional insights.
The `ContextWindowExceededError` is a sub-class of `BadRequestError` (the successor to the deprecated `InvalidRequestError`). It was introduced to provide more granularity for exception-handling scenarios. Please refer to [this issue to learn more](https://github.com/BerriAI/litellm/issues/228).
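Because of this inheritance, `except` clauses must be ordered from most to least specific, or the subclass will never be caught on its own. A stdlib-only illustration using stand-in classes that mirror the litellm hierarchy (they are not the real litellm exceptions):

```python
# Stand-in classes mirroring litellm's hierarchy - illustrative only.
class BadRequestError(Exception):
    pass

class ContextWindowExceededError(BadRequestError):
    pass

def classify(exc: Exception) -> str:
    try:
        raise exc
    except ContextWindowExceededError:  # must come before the parent class
        return "context_window_exceeded"
    except BadRequestError:
        return "bad_request"

print(classify(ContextWindowExceededError()))  # context_window_exceeded
print(classify(BadRequestError()))             # bad_request
```

If the `except BadRequestError` clause came first, it would swallow `ContextWindowExceededError` too, defeating context-window fallbacks.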
Contributions to improve exception mapping are [welcome](https://github.com/BerriAI/litellm#contributing)