LiteLLM allows developers to call all LLM APIs using the OpenAI format. LiteLLM Proxy is a proxy server for calling 100+ LLMs in the OpenAI format. Both are supported by this auto-instrumentation.
Any calls made to the following functions will be automatically captured by this integration:
- completion()
- acompletion()
- completion_with_retries()
- embedding()
- aembedding()
- image_generation()
- aimage_generation()
Install
pip install openinference-instrumentation-litellm "litellm<1.82.7"
Setup
Use the register function to connect your application to Phoenix:
from phoenix.otel import register

# configure the Phoenix tracer
tracer_provider = register(
    project_name="my-llm-app",  # Default is 'default'
    auto_instrument=True,  # Auto-instrument your app based on installed OI dependencies
)
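If you prefer not to rely on auto_instrument, the LiteLLM instrumentor from the openinference-instrumentation-litellm package installed above can be applied explicitly. A minimal sketch (requires a running Phoenix instance to receive traces):

```python
from openinference.instrumentation.litellm import LiteLLMInstrumentor
from phoenix.otel import register

# Register a tracer provider without auto-instrumentation...
tracer_provider = register(project_name="my-llm-app")

# ...then instrument LiteLLM explicitly
LiteLLMInstrumentor().instrument(tracer_provider=tracer_provider)
```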
Add any API keys needed by the models you are using with LiteLLM.
import os
os.environ["OPENAI_API_KEY"] = "PASTE_YOUR_API_KEY_HERE"
Run LiteLLM
You can now use LiteLLM as normal, and calls will be traced in Phoenix.
import litellm
completion_response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"content": "What's the capital of China?", "role": "user"}],
)
print(completion_response)
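The async functions listed above are traced the same way. A minimal sketch using acompletion(), assuming the same OPENAI_API_KEY is set and instrumentation is active:

```python
import asyncio

import litellm


async def main():
    # acompletion() is the async counterpart of completion();
    # the instrumentation emits a span for this call automatically
    response = await litellm.acompletion(
        model="gpt-3.5-turbo",
        messages=[{"content": "What's the capital of China?", "role": "user"}],
    )
    print(response)


asyncio.run(main())
```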
Observe
Traces should now be visible in Phoenix!
Resources