Langsmith - Logging LLM Input/Output
An all-in-one developer platform for every step of the application lifecycle: https://smith.langchain.com/
Prerequisites
pip install litellm
Quick Start
Use just 2 lines of code to instantly log your responses across all providers with Langsmith:
- SDK
- LiteLLM Proxy
litellm.callbacks = ["langsmith"]
import litellm
import os

os.environ["LANGSMITH_API_KEY"] = ""
os.environ["LANGSMITH_PROJECT"] = ""            # defaults to litellm-completion
os.environ["LANGSMITH_DEFAULT_RUN_NAME"] = ""   # defaults to LLMRun

# LLM API Keys
os.environ['OPENAI_API_KEY'] = ""

# set langsmith as a callback, litellm will send the data to langsmith
litellm.callbacks = ["langsmith"]

# openai call
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hi 👋 - i'm openai"}
    ]
)
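The same callback also covers async usage. A minimal sketch, assuming the environment variables above are already set: litellm.acompletion calls get logged to Langsmith the same way as synchronous ones.

import asyncio
import litellm

# same callback registration as above
litellm.callbacks = ["langsmith"]

async def main():
    # acompletion responses are sent to Langsmith just like completion responses
    response = await litellm.acompletion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}]
    )
    print(response)

asyncio.run(main())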
- Set up config.yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: openai/gpt-3.5-turbo
      api_key: os.environ/OPENAI_API_KEY

litellm_settings:
  callbacks: ["langsmith"]
- Start LiteLLM Proxy
litellm --config /path/to/config.yaml
- Test it!
curl -L -X POST 'http://0.0.0.0:4000/v1/chat/completions' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer sk-eWkpOhYaHiuIZV-29JDeTQ' \
-d '{
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "user", "content": "Hey, how are you?"}
    ],
    "max_completion_tokens": 250
}'
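Because the proxy exposes an OpenAI-compatible endpoint, you can also test it with the OpenAI Python SDK. A minimal sketch, assuming the proxy is running on 0.0.0.0:4000 with the virtual key from the curl example above:

from openai import OpenAI

# point the OpenAI client at the LiteLLM Proxy instead of api.openai.com
client = OpenAI(
    api_key="sk-eWkpOhYaHiuIZV-29JDeTQ",   # proxy virtual key from the curl example
    base_url="http://0.0.0.0:4000"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hey, how are you?"}]
)
print(response.choices[0].message.content)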
Advanced
Local Testing - Control Batch Size
Set the batch size the Langsmith logger sends at a time; the default is 512. When testing locally, set langsmith_batch_size=1 so logs show up immediately.
- SDK
- LiteLLM Proxy
import litellm
import os

os.environ["LANGSMITH_API_KEY"] = ""

# LLM API Keys
os.environ['OPENAI_API_KEY'] = ""

# set langsmith as a callback, litellm will send the data to langsmith
litellm.callbacks = ["langsmith"]
litellm.langsmith_batch_size = 1    # 👈 KEY CHANGE

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hi 👋 - i'm openai"}
    ]
)
print(response)
- Set up config.yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: openai/gpt-3.5-turbo
      api_key: os.environ/OPENAI_API_KEY

litellm_settings:
  langsmith_batch_size: 1
  callbacks: ["langsmith"]
- Start LiteLLM Proxy
litellm --config /path/to/config.yaml
- Test it!
curl -L -X POST 'http://0.0.0.0:4000/v1/chat/completions' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer sk-eWkpOhYaHiuIZV-29JDeTQ' \
-d '{
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "user", "content": "Hey, how are you?"}
    ],
    "max_completion_tokens": 250
}'
Set Langsmith Fields
import litellm
import os

os.environ["LANGSMITH_API_KEY"] = ""

# LLM API Keys
os.environ['OPENAI_API_KEY'] = ""

# set langsmith as a callback, litellm will send the data to langsmith
litellm.success_callback = ["langsmith"]

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hi 👋 - i'm openai"}
    ],
    metadata={
        "run_name": "litellmRUN",                                   # langsmith run name
        "project_name": "litellm-completion",                       # langsmith project name
        "run_id": "497f6eca-6276-4993-bfeb-53cbbbba6f08",           # langsmith run id
        "parent_run_id": "f8faf8c1-9778-49a4-9004-628cdb0047e5",    # langsmith run parent run id
        "trace_id": "df570c03-5a03-4cea-8df0-c162d05127ac",         # langsmith run trace id
        "session_id": "1ffd059c-17ea-40a8-8aef-70fd0307db82",       # langsmith run session id
        "tags": ["model1", "prod-2"],                               # langsmith run tags
        "metadata": {                                               # langsmith run metadata
            "key1": "value1"
        },
        "dotted_order": "20240429T004912090000Z497f6eca-6276-4993-bfeb-53cbbbba6f08"
    }
)
print(response)
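When calling through LiteLLM Proxy, request metadata is forwarded to the logging callbacks as well, so the same Langsmith fields can be passed per request in the JSON body. A sketch under that assumption, showing only run_name and tags:

curl -L -X POST 'http://0.0.0.0:4000/v1/chat/completions' \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer sk-eWkpOhYaHiuIZV-29JDeTQ' \
-d '{
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "user", "content": "Hey, how are you?"}
    ],
    "metadata": {
        "run_name": "litellmRUN",
        "tags": ["model1", "prod-2"]
    }
}'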
Make LiteLLM Proxy use a Custom LANGSMITH_BASE_URL
If you are using a custom LangSmith instance, you can set the LANGSMITH_BASE_URL environment variable to point to your instance. For example, this config makes LiteLLM Proxy log to a local LangSmith instance:
litellm_settings:
  success_callback: ["langsmith"]

environment_variables:
  LANGSMITH_BASE_URL: "http://localhost:1984"
  LANGSMITH_PROJECT: "litellm-proxy"
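The same variables can also be exported in the shell before starting the proxy, since LiteLLM reads them from the process environment. A minimal sketch, assuming a LangSmith instance listening locally on port 1984:

export LANGSMITH_API_KEY=""
export LANGSMITH_BASE_URL="http://localhost:1984"
export LANGSMITH_PROJECT="litellm-proxy"

litellm --config /path/to/config.yaml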
Support & Talk to Founders
- Schedule Demo 👋
- Community Discord 💭
- Our numbers 📞 +1 (770) 8783-106 / +1 (412) 618-6238
- Our emails ✉️ ishaan@berri.ai / krrish@berri.ai