Open Agent Spec (Agent Spec) is a portable language for defining agentic systems.
It defines building blocks for standalone agents and structured agentic workflows as well as common ways of composing them into multi-agent systems.
Agent Spec Tracing is an extension of Agent Spec that standardizes how agent and flow executions emit traces.
Agent Spec Tracing enables:
- Runtime adapters to emit consistent traces across different frameworks.
- Consumers (observability backends, UIs, developer tooling) to ingest one standardized format regardless of the producer.
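As an illustration of what "one standardized format" means in practice, the sketch below shows the kind of span attributes a runtime adapter might emit, loosely following OpenInference semantic conventions. The exact attribute values are hypothetical; real adapters populate them from the live execution.

```python
# Illustrative only: a minimal sketch of standardized span attributes,
# loosely following OpenInference semantic conventions.
span_attributes = {
    "openinference.span.kind": "AGENT",      # kind of operation being traced
    "llm.model_name": "gpt-5-mini",          # model the agent invoked
    "input.value": "What is Agent Spec?",    # user input captured on the span
    "output.value": "Agent Spec is ...",     # agent response captured on the span
}

# A consumer that understands the shared convention can read these
# attributes without knowing which framework produced the trace.
for key in ("openinference.span.kind", "input.value", "output.value"):
    assert key in span_attributes
```

Because producers agree on the attribute names, an observability backend needs one ingestion path rather than one per framework.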
With Agent Spec, tracing instrumentation is implemented as an OpenTelemetry instrumentor, AgentSpecInstrumentor.
This instrumentor creates spans for agent and flow executions and exports them to the Phoenix collector.
Install
pip install openinference-instrumentation-agentspec pyagentspec[langgraph]
This installs the LangGraph adapter for Agent Spec, which allows developers to run Agent Spec workflows using LangGraph as a backend.
You can find the list of available adapters, and how to install them, in the Agent Spec installation instructions.
Setup
Use the register function to connect your application to Phoenix:
from phoenix.otel import register
# configure the Phoenix tracer
tracer_provider = register(
    project_name="my-llm-app",  # Default is 'default'
    auto_instrument=True,  # Auto-instrument your app based on installed OI dependencies
)
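By default, traces are sent to a Phoenix instance running locally. If your Phoenix collector runs elsewhere, a common approach is to point the tracer at it via environment variables before calling register; the sketch below assumes the PHOENIX_COLLECTOR_ENDPOINT and PHOENIX_API_KEY variables are honored by your Phoenix setup, and the values shown are placeholders.

```python
import os

# Assumption: register() picks up the collector endpoint from the
# PHOENIX_COLLECTOR_ENDPOINT environment variable when one is set.
# Set it before calling register() if Phoenix is not running locally.
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "http://localhost:6006"

# If your Phoenix instance requires authentication, an API key is
# commonly supplied the same way (placeholder value shown).
os.environ.setdefault("PHOENIX_API_KEY", "your-api-key")
```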
Create the Agent Spec agent
from pyagentspec.agent import Agent
from pyagentspec.llms import OpenAiConfig
agent = Agent(
    name="assistant",
    description="A general purpose agent without tools",
    llm_config=OpenAiConfig(name="openai-gpt-5-mini", model_id="gpt-5-mini"),
    system_prompt="You are a helpful assistant. Help the user by answering politely.",
)
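Since Agent Spec is a portable language, the agent above is essentially declarative configuration. As a rough illustration, it could be rendered as JSON and handed to any compliant runtime; note that the field names below are hypothetical and the actual schema produced by pyagentspec may differ.

```python
import json

# Illustrative only: a hand-written JSON rendering of the agent defined
# above, to show that an Agent Spec component is declarative configuration.
# "component_type" is a hypothetical field; pyagentspec's schema may differ.
agent_dict = {
    "component_type": "Agent",
    "name": "assistant",
    "description": "A general purpose agent without tools",
    "llm_config": {"name": "openai-gpt-5-mini", "model_id": "gpt-5-mini"},
    "system_prompt": "You are a helpful assistant. Help the user by answering politely.",
}

# Round-trip through JSON: the definition survives serialization intact,
# which is what makes it portable across runtimes.
restored = json.loads(json.dumps(agent_dict, indent=2))
```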
Run the agent using LangGraph
# Transform the Agent Spec agent into a LangGraph's executable component
from pyagentspec.adapters.langgraph import AgentSpecLoader
langgraph_agent = AgentSpecLoader().load_component(agent)
# Instrument the agent's execution
from openinference.instrumentation.agentspec import AgentSpecInstrumentor
from phoenix.otel import register
tracer_provider = register(batch=True, project_name="hello-world-app")
AgentSpecInstrumentor().instrument(tracer_provider=tracer_provider)
# Run the agent's execution loop
while True:
    user_input = input("USER >>> ")
    if user_input.lower() in ["exit", "quit"]:
        break
    response = langgraph_agent.invoke(
        input={"messages": [{"role": "user", "content": user_input}]},
        config={"configurable": {"thread_id": "1"}},
    )
    print("AGENT >>>", response["messages"][-1].content.strip())
Observe
With tracing now configured, all calls to your Agent Spec agent will be streamed to Phoenix for enhanced observability and evaluation.
Resources