Literal AI - Log, Evaluate, Monitor
Literal AI is a collaborative observability, evaluation and analytics platform for building production-grade LLM apps.
Pre-Requisites
Ensure you have the literalai package installed:
pip install literalai litellm
Quick Start
import litellm
import os
os.environ["LITERAL_API_KEY"] = ""
os.environ['OPENAI_API_KEY'] = ""
os.environ['LITERAL_BATCH_SIZE'] = "1" # You won't see logs appear until the batch is full and sent
litellm.success_callback = ["literalai"] # Log Input/Output to LiteralAI
litellm.failure_callback = ["literalai"] # Log Errors to LiteralAI
# openai call
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hi 👋 - i'm openai"}
    ]
)
Multi-Step Traces
This integration is compatible with the Literal AI SDK decorators, enabling conversation and agent tracing.
import litellm
from literalai import LiteralClient
import os
os.environ["LITERAL_API_KEY"] = ""
os.environ['OPENAI_API_KEY'] = ""
os.environ['LITERAL_BATCH_SIZE'] = "1" # You won't see logs appear until the batch is full and sent
litellm.input_callback = ["literalai"] # Support other Literal AI decorators and prompt templates
litellm.success_callback = ["literalai"] # Log Input/Output to LiteralAI
litellm.failure_callback = ["literalai"] # Log Errors to LiteralAI
literalai_client = LiteralClient()
@literalai_client.run
def my_agent(question: str):
    # agent logic here
    response = litellm.completion(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": question}
        ],
        metadata={"literalai_parent_id": literalai_client.get_current_step().id}
    )
    return response
my_agent("Hello world")
# Waiting to send all logs before exiting, not needed in a production server
literalai_client.flush()
Learn more about Literal AI logging capabilities.
Bind a Generation to its Prompt Template
This integration works out of the box with prompts managed on Literal AI. This means that a specific LLM generation will be bound to its template.
Learn more about Prompt Management on Literal AI.
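As a minimal sketch of what this can look like (assuming a prompt named "my-prompt" with a question variable exists in your Literal AI project; get_prompt and format_messages are Literal AI SDK methods):
import litellm
from literalai import LiteralClient

litellm.input_callback = ["literalai"]   # Support Literal AI prompt templates
litellm.success_callback = ["literalai"] # Log Input/Output to LiteralAI

literalai_client = LiteralClient()  # reads LITERAL_API_KEY from the environment

# Fetch the prompt template managed on Literal AI ("my-prompt" is a placeholder name)
prompt = literalai_client.api.get_prompt(name="my-prompt")

# Render the template into OpenAI-style chat messages
messages = prompt.format_messages(question="What is Literal AI?")

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=messages,
)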
OpenAI Proxy Usage
If you are using the LiteLLM proxy, you can use the Literal AI OpenAI instrumentation to log your calls.
from literalai import LiteralClient
from openai import OpenAI
client = OpenAI(
    api_key="anything",            # litellm proxy virtual key
    base_url="http://0.0.0.0:4000" # litellm proxy base_url
)
literalai_client = LiteralClient(api_key="")
# Instrument the OpenAI client
literalai_client.instrument_openai()
settings = {
    "model": "gpt-3.5-turbo", # model you want the litellm proxy to call
    "temperature": 0,
    # ... more settings
}
response = client.chat.completions.create(
    messages=[
        {
            "content": "You are a helpful bot, you always reply in Spanish",
            "role": "system"
        },
        {
            "content": "Hello world",  # your user message
            "role": "user"
        }
    ],
    **settings
)
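This snippet assumes a LiteLLM proxy is running locally, for example one started with litellm --model gpt-3.5-turbo, which serves an OpenAI-compatible endpoint on port 4000 by default. Every call made through the instrumented OpenAI client is then logged to Literal AI.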