OpenTelemetry Quick Start
Monitor custom AI agents and multi-framework agentic applications with Fiddler using OpenTelemetry's native instrumentation.
What You'll Learn
In this guide, you'll learn how to:
Set up OpenTelemetry tracing for custom agent frameworks
Configure Fiddler as your OTLP endpoint with proper authentication
Map agent attributes to Fiddler's semantic conventions
Create instrumented LLM and tool spans with required attributes
Verify traces in the Fiddler dashboard
Time to complete: ~10-15 minutes
Prerequisites
Before you begin, ensure you have:
Fiddler Account: An active account with a GenAI application created
Python 3.10+
OpenTelemetry Packages:
pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
LLM Provider (for examples): OpenAI API key or similar
Fiddler Access Token: Get your token from Settings > Credentials
Create Fiddler Application
Log in to your Fiddler instance and navigate to GenAI Apps
Select "Add Application" to create a new application
Copy your Application ID - This must be a valid UUID4 format (e.g., 550e8400-e29b-41d4-a716-446655440000)
Get your Access Token from Settings > Credentials
Important: Keep your Application ID and Access Token secure. You'll need both for the next steps.
Configure Environment Variables
Set up your environment to connect to Fiddler's OTLP endpoint:
export OTEL_EXPORTER_OTLP_ENDPOINT="https://your-instance.fiddler.ai"
export OTEL_EXPORTER_OTLP_HEADERS="authorization=Bearer <YOUR_ACCESS_TOKEN>,fiddler-application-id=<YOUR_APPLICATION_UUID>"
export OTEL_RESOURCE_ATTRIBUTES="application.id=<YOUR_APPLICATION_UUID>"
Environment Variable Breakdown:
OTEL_EXPORTER_OTLP_ENDPOINT - Your Fiddler instance URL, e.g., https://org.fiddler.ai
OTEL_EXPORTER_OTLP_HEADERS - Authentication and app ID headers, e.g., authorization=Bearer sk-...,fiddler-application-id=550e8400...
OTEL_RESOURCE_ATTRIBUTES - Resource-level application identifier, e.g., application.id=550e8400-e29b-41d4-a716-446655440000
Python Configuration (alternative to environment variables):
import os
os.environ['OTEL_EXPORTER_OTLP_ENDPOINT'] = 'https://your-instance.fiddler.ai'
os.environ['OTEL_EXPORTER_OTLP_HEADERS'] = 'authorization=Bearer <TOKEN>,fiddler-application-id=<UUID>'
os.environ['OTEL_RESOURCE_ATTRIBUTES'] = 'application.id=<UUID>'
Tip: Store credentials in a .env file and use python-dotenv for local development.
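For example, a .env file with the same placeholder values might look like:
OTEL_EXPORTER_OTLP_ENDPOINT="https://your-instance.fiddler.ai"
OTEL_EXPORTER_OTLP_HEADERS="authorization=Bearer <YOUR_ACCESS_TOKEN>,fiddler-application-id=<YOUR_APPLICATION_UUID>"
OTEL_RESOURCE_ATTRIBUTES="application.id=<YOUR_APPLICATION_UUID>"
Then load it before initializing OpenTelemetry: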
from dotenv import load_dotenv
load_dotenv()  # Loads variables from .env file
Initialize OpenTelemetry
Set up OpenTelemetry with Fiddler's OTLP exporter:
import os
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
# Initialize tracer provider
trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer(__name__)
# Configure OTLP exporter for Fiddler
otlp_endpoint = os.getenv('OTEL_EXPORTER_OTLP_ENDPOINT') + '/v1/traces'
otlp_exporter = OTLPSpanExporter(endpoint=otlp_endpoint)
# Add batch span processor
otlp_processor = BatchSpanProcessor(otlp_exporter)
trace.get_tracer_provider().add_span_processor(otlp_processor)
print(f"✅ OpenTelemetry configured with endpoint: {otlp_endpoint}")What This Does:
TracerProvider: Manages trace generation
OTLPSpanExporter: Exports spans to Fiddler via OTLP protocol
BatchSpanProcessor: Batches spans for efficient network transmission
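Before instrumenting a real agent, you can optionally confirm the pipeline end to end by emitting a single test span and flushing it. The span and agent names below are placeholders, not required values:
# Optional sanity check: emit one test span and flush it to Fiddler
with tracer.start_as_current_span("otel_setup_check") as span:
    span.set_attribute("fiddler.span.type", "other")
    span.set_attribute("gen_ai.agent.name", "setup_check")   # placeholder agent name
    span.set_attribute("gen_ai.agent.id", "setup_check_v0")  # placeholder agent ID

# Force the batch processor to export before the script exits
trace.get_tracer_provider().force_flush()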
Instrument Your Agent
Create instrumented spans for your agent's operations. Fiddler requires specific attributes to properly categorize and visualize your agent traces.
Required Fiddler Attributes
Resource Level (set via environment variable):
application.id - UUID4 of your Fiddler application
Trace Level (required in all spans):
gen_ai.agent.name - Name of your AI agent
gen_ai.agent.id - Unique identifier for the agent
Span Level (required for each span):
fiddler.span.type - Type of operation: "chain", "tool", "llm", or "other"
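Putting these together, a minimal span that satisfies the required attributes looks like this (the names are illustrative):
with tracer.start_as_current_span("my_operation") as span:
    span.set_attribute("fiddler.span.type", "other")     # or "chain", "tool", "llm"
    span.set_attribute("gen_ai.agent.name", "my_agent")  # required on every span
    span.set_attribute("gen_ai.agent.id", "my_agent_v1") # required on every span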
Example: Simplified Travel Agent
import json
from openai import OpenAI
client = OpenAI()
AGENT_NAME = "travel_agent"
AGENT_ID = "travel_agent_v1"
# Define tools
def book_hotel_tool(city: str, date: str):
    """Book a hotel in the specified city."""
    with tracer.start_as_current_span("book_hotel") as span:
        # Required attributes
        span.set_attribute("fiddler.span.type", "tool")
        span.set_attribute("gen_ai.agent.name", AGENT_NAME)
        span.set_attribute("gen_ai.agent.id", AGENT_ID)
        # Tool-specific attributes
        span.set_attribute("gen_ai.tool.name", "book_hotel")
        tool_input = {"city": city, "date": date}
        span.set_attribute("gen_ai.tool.input", json.dumps(tool_input))
        # Execute tool
        result = {"status": "confirmed", "hotel": f"Grand Hotel {city}", "confirmation": "HTL123"}
        span.set_attribute("gen_ai.tool.output", json.dumps(result))
        return result
def book_flight_tool(source: str, destination: str, date: str):
    """Book a flight between two cities."""
    with tracer.start_as_current_span("book_flight") as span:
        # Required attributes
        span.set_attribute("fiddler.span.type", "tool")
        span.set_attribute("gen_ai.agent.name", AGENT_NAME)
        span.set_attribute("gen_ai.agent.id", AGENT_ID)
        # Tool-specific attributes
        span.set_attribute("gen_ai.tool.name", "book_flight")
        tool_input = {"source": source, "destination": destination, "date": date}
        span.set_attribute("gen_ai.tool.input", json.dumps(tool_input))
        # Execute tool
        result = {"status": "confirmed", "flight": "FL456", "departure": "10:00 AM"}
        span.set_attribute("gen_ai.tool.output", json.dumps(result))
        return result
# Agent implementation
def travel_agent(user_request: str):
    """Main travel agent function."""
    with tracer.start_as_current_span("travel_agent_chain") as root_span:
        # Root span type
        root_span.set_attribute("fiddler.span.type", "chain")
        root_span.set_attribute("gen_ai.agent.name", AGENT_NAME)
        root_span.set_attribute("gen_ai.agent.id", AGENT_ID)
        # Call LLM to understand request
        with tracer.start_as_current_span("llm_call") as llm_span:
            # Required attributes
            llm_span.set_attribute("fiddler.span.type", "llm")
            llm_span.set_attribute("gen_ai.agent.name", AGENT_NAME)
            llm_span.set_attribute("gen_ai.agent.id", AGENT_ID)
            # LLM-specific attributes
            llm_span.set_attribute("gen_ai.request.model", "gpt-4o-mini")
            llm_span.set_attribute("gen_ai.system", "openai")
            llm_span.set_attribute("gen_ai.llm.input.user", user_request)
            llm_span.set_attribute(
                "gen_ai.llm.input.system",
                "You are a travel agent. Parse user requests and call appropriate tools."
            )
            # Call OpenAI
            response = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[
                    {"role": "system", "content": "You are a travel agent. Parse user requests and call appropriate tools."},
                    {"role": "user", "content": user_request}
                ],
                tools=[
                    {
                        "type": "function",
                        "function": {
                            "name": "book_hotel",
                            "description": "Book a hotel in a city for a specific date",
                            "parameters": {
                                "type": "object",
                                "properties": {
                                    "city": {"type": "string"},
                                    "date": {"type": "string"}
                                },
                                "required": ["city", "date"]
                            }
                        }
                    },
                    {
                        "type": "function",
                        "function": {
                            "name": "book_flight",
                            "description": "Book a flight between two cities",
                            "parameters": {
                                "type": "object",
                                "properties": {
                                    "source": {"type": "string"},
                                    "destination": {"type": "string"},
                                    "date": {"type": "string"}
                                },
                                "required": ["source", "destination", "date"]
                            }
                        }
                    }
                ]
            )
            # Set token usage
            llm_span.set_attribute("gen_ai.usage.input_tokens", response.usage.prompt_tokens)
            llm_span.set_attribute("gen_ai.usage.output_tokens", response.usage.completion_tokens)
            llm_span.set_attribute("gen_ai.usage.total_tokens", response.usage.total_tokens)
            # Process tool calls
            tool_results = []
            if response.choices[0].message.tool_calls:
                for tool_call in response.choices[0].message.tool_calls:
                    tool_name = tool_call.function.name
                    tool_args = json.loads(tool_call.function.arguments)
                    if tool_name == "book_hotel":
                        result = book_hotel_tool(**tool_args)
                        tool_results.append(result)
                    elif tool_name == "book_flight":
                        result = book_flight_tool(**tool_args)
                        tool_results.append(result)
            llm_span.set_attribute("gen_ai.llm.output",
                                   f"Called tools and received: {tool_results}")
        return {"status": "success", "bookings": tool_results}
# Run the agent
result = travel_agent("Book a hotel in Paris for tomorrow and a flight from London to Paris")
print(f"Agent result: {result}")Key Implementation Details:
Chain Spans: Use
fiddler.span.type = "chain"for high-level workflowsLLM Spans: Include model, system prompt, user input, output, and token usage
Tool Spans: Include tool name, input JSON, and output JSON
Nested Spans: Create parent-child relationships to show execution flow
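Because every span repeats the same required attributes, you may want to centralize them in a small helper. The function below is an illustrative pattern, not part of any Fiddler or OpenTelemetry SDK:
def set_fiddler_attributes(span, span_type: str, extra: dict | None = None):
    """Set the attributes Fiddler requires on every span, plus any optional extras."""
    span.set_attribute("fiddler.span.type", span_type)
    span.set_attribute("gen_ai.agent.name", AGENT_NAME)
    span.set_attribute("gen_ai.agent.id", AGENT_ID)
    for key, value in (extra or {}).items():
        span.set_attribute(key, value)

# Example usage inside a tool function
with tracer.start_as_current_span("book_hotel") as span:
    set_fiddler_attributes(span, "tool", {"gen_ai.tool.name": "book_hotel"})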
Verify Monitoring
Run your instrumented code using the example above
Wait 1-2 minutes for traces to appear in Fiddler
Navigate to GenAI Apps in your Fiddler instance
Verify application status changes to Active
View traces to see your agent spans, hierarchy, and attributes
Success Criteria:
✅ Application shows as Active in GenAI Apps
✅ Traces appear with correct agent name
✅ Span hierarchy shows chain → LLM → tools relationship
✅ All required attributes are present (agent name, agent ID, span type)
✅ LLM token usage is tracked
✅ Tool inputs and outputs are captured
Verification Tip: Check the trace timeline view to see the execution flow of your agent, including which tools were called and how long each operation took.
Attribute Reference
Required Attributes
Resource Level:
application.id (string) - UUID4 of your Fiddler application, e.g., "550e8400-e29b-41d4-a716-446655440000"
Trace Level (all spans):
gen_ai.agent.name (string) - Name of the AI agent, e.g., "travel_agent"
gen_ai.agent.id (string) - Unique identifier for the agent, e.g., "travel_agent_v1"
Span Level:
fiddler.span.type (string) - Type of operation: "chain", "tool", "llm", or "other"
Optional Attributes
Conversation Tracking:
gen_ai.conversation.id (string) - Session/conversation identifier, e.g., "conv_123"
LLM Span Attributes:
gen_ai.request.model (string) - Model name, e.g., "gpt-4o-mini", "claude-3-opus"
gen_ai.system (string) - LLM provider, e.g., "openai", "anthropic"
gen_ai.llm.input.system (string) - System prompt, e.g., "You are a helpful assistant"
gen_ai.llm.input.user (string) - User input, e.g., "What's the weather?"
gen_ai.llm.output (string) - LLM response, e.g., "The weather is sunny"
gen_ai.usage.input_tokens (int) - Input tokens used, e.g., 42
gen_ai.usage.output_tokens (int) - Output tokens used, e.g., 28
gen_ai.usage.total_tokens (int) - Total tokens used, e.g., 70
Tool Span Attributes:
gen_ai.tool.name (string) - Tool/function name, e.g., "search_database"
gen_ai.tool.input (string) - Tool input (JSON), e.g., "{\"query\": \"hotels\"}"
gen_ai.tool.output (string) - Tool output (JSON), e.g., "{\"results\": [...]}"
Custom User-Defined Attributes:
fiddler.session.user.{key} - Trace level (all spans), e.g., fiddler.session.user.user_id = "usr_123"
fiddler.span.user.{key} - Span level (individual span), e.g., fiddler.span.user.department = "sales"
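A sketch of how these custom keys can be set, reusing the example values above:
with tracer.start_as_current_span("travel_agent_chain") as span:
    span.set_attribute("fiddler.span.type", "chain")
    span.set_attribute("gen_ai.agent.name", "travel_agent")
    span.set_attribute("gen_ai.agent.id", "travel_agent_v1")
    # Session-level custom attribute (trace scope)
    span.set_attribute("fiddler.session.user.user_id", "usr_123")
    # Span-level custom attribute (this span only)
    span.set_attribute("fiddler.span.user.department", "sales")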
Troubleshooting
Common Issues
Problem: Application not showing as "Active"
Solutions:
Verify environment variables are set correctly
Check that OTEL_EXPORTER_OTLP_ENDPOINT includes your Fiddler instance URL
Ensure OTEL_EXPORTER_OTLP_HEADERS contains a valid authorization token and application ID
Add a console exporter to verify spans are being generated locally
Check network connectivity:
curl -I https://your-instance.fiddler.ai
Problem: ModuleNotFoundError for OpenTelemetry packages
Solutions:
# Install all required packages
pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
# Verify installation
pip list | grep opentelemetry
Problem: Spans not appearing in Fiddler
Solutions:
Verify required attributes are set:
# Every span MUST have these
span.set_attribute("fiddler.span.type", "llm")  # or "tool", "chain", "other"
span.set_attribute("gen_ai.agent.name", "your_agent")
span.set_attribute("gen_ai.agent.id", "agent_id")
Check resource attributes:
# Verify application.id is set
print(os.getenv('OTEL_RESOURCE_ATTRIBUTES'))
Enable console exporter for debugging:
from opentelemetry.sdk.trace.export import ConsoleSpanExporter
console_exporter = ConsoleSpanExporter()
console_processor = BatchSpanProcessor(console_exporter)
trace.get_tracer_provider().add_span_processor(console_processor)
Problem: Authentication errors (401 Unauthorized)
Solutions:
Regenerate your access token from Fiddler Settings > Credentials
Verify header format:
authorization=Bearer <token>,fiddler-application-id=<uuid>
Ensure no extra spaces in header values
Check token hasn't expired
Problem: Invalid Application ID error
Solutions:
Copy Application ID directly from Fiddler UI
Verify UUID4 format:
550e8400-e29b-41d4-a716-446655440000
Ensure no extra quotes or whitespace
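A quick local check with Python's standard uuid module can catch copy/paste mistakes (replace app_id with your own value):
import uuid

app_id = "550e8400-e29b-41d4-a716-446655440000"  # paste your Application ID here
try:
    parsed = uuid.UUID(app_id.strip())
    print(f"Valid UUID (version {parsed.version})")
except ValueError:
    print("Not a valid UUID - re-copy the Application ID from the Fiddler UI")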
Configuration Options
Basic Configuration
import os
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
# Set environment variables
os.environ['OTEL_EXPORTER_OTLP_ENDPOINT'] = 'https://your-instance.fiddler.ai'
os.environ['OTEL_EXPORTER_OTLP_HEADERS'] = 'authorization=Bearer <TOKEN>,fiddler-application-id=<UUID>'
os.environ['OTEL_RESOURCE_ATTRIBUTES'] = 'application.id=<UUID>'
# Initialize tracing
trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer(__name__)
# Configure OTLP exporter
otlp_endpoint = os.getenv('OTEL_EXPORTER_OTLP_ENDPOINT') + '/v1/traces'
otlp_exporter = OTLPSpanExporter(endpoint=otlp_endpoint)
otlp_processor = BatchSpanProcessor(otlp_exporter)
trace.get_tracer_provider().add_span_processor(otlp_processor)
Advanced Configuration
High-Volume Applications (Batch Processing Tuning):
from opentelemetry.sdk.trace.export import BatchSpanProcessor
# Customize batch processor settings
custom_processor = BatchSpanProcessor(
    otlp_exporter,
    max_queue_size=500,          # Default: 2048
    schedule_delay_millis=500,   # Default: 5000
    max_export_batch_size=50,    # Default: 512
    export_timeout_millis=10000  # Default: 30000
)
trace.get_tracer_provider().add_span_processor(custom_processor)
Environment Variable Configuration:
# Batch processor environment variables
export OTEL_BSP_MAX_QUEUE_SIZE=500
export OTEL_BSP_SCHEDULE_DELAY_MILLIS=500
export OTEL_BSP_MAX_EXPORT_BATCH_SIZE=50
export OTEL_BSP_EXPORT_TIMEOUT=10000
Sampling for Production (Reduce Volume):
from opentelemetry.sdk.trace import sampling
# Sample 10% of traces
sampler = sampling.TraceIdRatioBased(0.1)
# Create provider with sampler
provider = TracerProvider(sampler=sampler)
trace.set_tracer_provider(provider)
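If you sample, consider wrapping the ratio sampler in the SDK's ParentBased sampler so child spans always follow their parent's decision and traces are never exported partially:
from opentelemetry.sdk.trace import TracerProvider, sampling

# Keep ~10% of new traces; child spans inherit the parent's sampling decision
sampler = sampling.ParentBased(sampling.TraceIdRatioBased(0.1))
trace.set_tracer_provider(TracerProvider(sampler=sampler))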
Compression (Reduce Network Usage):
from opentelemetry.exporter.otlp.proto.http.trace_exporter import Compression
# Enable gzip compression
otlp_exporter = OTLPSpanExporter(
    endpoint=otlp_endpoint,
    compression=Compression.Gzip
)
Using FiddlerClient Alternative (Simplified Setup):
Next Steps
Now that you have OpenTelemetry integration working:
Advanced Patterns: Download the Advanced OpenTelemetry Notebook for:
Multi-agent configurations
Conversation tracking across sessions
Custom user-defined attributes
Production-ready error handling
Comprehensive debugging techniques
Consider SDKs for Common Frameworks:
Fiddler LangGraph SDK - Auto-instrumentation for LangGraph/LangChain
Fiddler Strands SDK - Native Strands agent integration
Explore Fiddler Capabilities:
Production Deployment:
Review sampling strategies for cost optimization
Implement error handling and retry logic (a starting sketch follows this list)
Set up monitoring alerts in Fiddler dashboard
Configure custom attributes for your business context
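As a starting point for error handling, spans can record exceptions and an error status so failures are visible on the trace; the helper name below is illustrative, not part of any SDK:
from opentelemetry.trace import Status, StatusCode

def safe_tool_call(span, fn, **kwargs):
    """Run a tool and record any failure on the given span (illustrative helper)."""
    try:
        return fn(**kwargs)
    except Exception as exc:
        span.record_exception(exc)
        span.set_status(Status(StatusCode.ERROR, str(exc)))
        raise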