AI/ML API
Getting started with AI/ML API is easy. Follow the steps below to set up your integration.
1. Get your API key
First, you need an API key. You can obtain one here:
🔑 Get your API key
2. Explore available models
Looking for a different model? Browse the full list of supported models:
📚 Full list of models
3. Read the documentation
For detailed setup instructions and usage guides, consult the official documentation:
📖 AI/ML API documentation
4. Need help?
If you have any questions, feel free to reach out. We're happy to help! 🚀 Discord
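Once you have a key from the steps above, a common pattern is to keep it in an environment variable rather than hard-coding it in source. A minimal sketch (the variable name `AIML_API_KEY` is an arbitrary choice for this example, not something the API requires):

```python
import os

# Read the API key from the environment instead of embedding it in code.
# AIML_API_KEY is a hypothetical variable name used for illustration.
api_key = os.environ.get("AIML_API_KEY", "")
if not api_key:
    print("Set AIML_API_KEY before calling the API")
```

You can then pass `api_key=api_key` to the calls shown below.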
Usage
You can choose from LLama, Qwen, Flux, and 200+ other open- and closed-source models at aimlapi.com/models. For example:
```python
import litellm

response = litellm.completion(
    model="openai/meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo",  # the model name must carry the "openai/" prefix followed by the model name from AI/ML API
    api_key="",  # your AI/ML API key
    api_base="https://api.aimlapi.com/v2",
    messages=[
        {
            "role": "user",
            "content": "Hey, how's it going?",
        }
    ],
)
```
Streaming
```python
import litellm

response = litellm.completion(
    model="openai/Qwen/Qwen2-72B-Instruct",  # the model name must carry the "openai/" prefix followed by the model name from AI/ML API
    api_key="",  # your AI/ML API key
    api_base="https://api.aimlapi.com/v2",
    messages=[
        {
            "role": "user",
            "content": "Hey, how's it going?",
        }
    ],
    stream=True,
)

for chunk in response:
    print(chunk)
```
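The loop above prints each raw chunk as it arrives. If you want the full reply as a single string, you can collect the incremental `delta` content from each chunk. A small sketch, using plain dicts that mirror the OpenAI-style chunk shape litellm streams back (with a real response you would iterate the stream itself instead of this hard-coded list):

```python
# Assemble streamed chunks into one reply string.
# Chunks are modeled here as plain dicts for illustration.
def join_stream(chunks):
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            parts.append(delta)
    return "".join(parts)

example_chunks = [
    {"choices": [{"delta": {"content": "Hey"}}]},
    {"choices": [{"delta": {"content": ", doing well!"}}]},
    {"choices": [{"delta": {}}]},  # a final chunk often carries no content
]

print(join_stream(example_chunks))  # Hey, doing well!
```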
Async Completion
```python
import asyncio

import litellm

async def main():
    response = await litellm.acompletion(
        model="openai/anthropic/claude-3-5-haiku",  # the model name must carry the "openai/" prefix followed by the model name from AI/ML API
        api_key="",  # your AI/ML API key
        api_base="https://api.aimlapi.com/v2",
        messages=[
            {
                "role": "user",
                "content": "Hey, how's it going?",
            }
        ],
    )
    print(response)

if __name__ == "__main__":
    asyncio.run(main())
```
Async Streaming
```python
import asyncio
import traceback

import litellm

async def main():
    try:
        print("test acompletion + streaming")
        response = await litellm.acompletion(
            model="openai/nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",  # the model name must carry the "openai/" prefix followed by the model name from AI/ML API
            api_key="",  # your AI/ML API key
            api_base="https://api.aimlapi.com/v2",
            messages=[{"content": "Hey, how's it going?", "role": "user"}],
            stream=True,
        )
        print(f"response: {response}")
        async for chunk in response:
            print(chunk)
    except Exception:
        print(f"error occurred: {traceback.format_exc()}")

if __name__ == "__main__":
    asyncio.run(main())
```
Async Embedding
```python
import asyncio

import litellm

async def main():
    response = await litellm.aembedding(
        model="openai/text-embedding-3-small",  # the model name must carry the "openai/" prefix followed by the model name from AI/ML API
        api_key="",  # your AI/ML API key
        api_base="https://api.aimlapi.com/v1",  # 👈 the URL has changed from v2 to v1
        input="Your text string",
    )
    print(response)

if __name__ == "__main__":
    asyncio.run(main())
```
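A typical use for the returned embedding is comparing two texts by cosine similarity. A small self-contained sketch with tiny hand-written stand-in vectors (in practice you would pull the real vectors out of the embedding response above):

```python
import math

# Cosine similarity between two vectors: dot(a, b) / (|a| * |b|).
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Stand-in vectors for illustration; real embeddings have many more dimensions.
v1 = [0.1, 0.2, 0.3]
v2 = [0.2, 0.4, 0.6]
print(cosine_similarity(v1, v2))  # ≈ 1.0, since v2 points in the same direction as v1
```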
Async Image Generation
```python
import asyncio

import litellm

async def main():
    response = await litellm.aimage_generation(
        model="openai/dall-e-3",  # the model name must carry the "openai/" prefix followed by the model name from AI/ML API
        api_key="",  # your AI/ML API key
        api_base="https://api.aimlapi.com/v1",  # 👈 the URL has changed from v2 to v1
        prompt="A cute baby sea otter",
    )
    print(response)

if __name__ == "__main__":
    asyncio.run(main())
```