Fiddler LangGraph SDK
The Fiddler LangGraph SDK provides powerful, real-time observability for your GenAI applications built with LangGraph or LangChain. By leveraging the industry-standard OpenTelemetry framework, our SDK offers deep insights into your AI agent workflows with minimal integration effort. Use this SDK to automatically instrument your LangGraph applications, enabling you to monitor, analyze, and debug complex agentic behaviors directly within the Fiddler platform.
This technical reference provides comprehensive details on the SDK's components, including the `FiddlerClient`, the `LangGraphInstrumentor`, configuration options, and utility functions, to help you get the most out of Fiddler's agentic monitoring capabilities.
Installation
```bash
pip install fiddler-langgraph
```
Version Compatibility:
Python: 3.10, 3.11, 3.12, or 3.13
LangGraph: >= 0.3.28 and < 0.5.2
LangChain: >= 0.3.26 (automatically installed)
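If you want to fail fast on an incompatible environment, a minimal startup check along these lines works. This is a sketch, not part of the SDK; the version bounds mirror the compatibility list above, and `importlib.metadata` is the standard-library way to read an installed package's version:

```python
from importlib import metadata


def parse_version(v: str) -> tuple:
    """Turn a release string like '0.3.28' into (0, 3, 28) for tuple comparison."""
    return tuple(int(part) for part in v.split(".") if part.isdigit())


def check_langgraph_version() -> bool:
    """Return True if the installed langgraph satisfies >= 0.3.28 and < 0.5.2."""
    try:
        installed = parse_version(metadata.version("langgraph"))
    except metadata.PackageNotFoundError:
        return False
    return parse_version("0.3.28") <= installed < parse_version("0.5.2")
```

Tuple comparison sidesteps the classic string-comparison trap where `"0.10" < "0.9"`.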
Optional Dependencies
```bash
# Include example dependencies
pip install fiddler-langgraph[examples]

# Include development dependencies
pip install fiddler-langgraph[dev]
```
Core Components
FiddlerClient
The main client for instrumenting Generative AI applications with Fiddler observability.
Overview
`FiddlerClient` is the entry point for instrumenting Generative AI applications built with LangGraph. It configures and manages the OpenTelemetry tracer that sends telemetry data to the Fiddler platform for monitoring, analysis, and debugging of your AI agents and workflows.
Constructor
```python
from fiddler_langgraph import FiddlerClient
from opentelemetry.exporter.otlp.proto.http.trace_exporter import Compression

client = FiddlerClient(
    api_key="your-api-key",
    application_id="your-app-id",
    url="https://your-instance.fiddler.ai",
    console_tracer=False,
    span_limits=None,
    sampler=None,
    compression=Compression.Gzip,
)
```
Parameters
| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| `api_key` | `str` | ✓ | - | API key for authenticating with Fiddler (Bearer token) |
| `application_id` | `str` | ✓ | - | UUID4 identifier for your application |
| `url` | `str` | ✓ | - | Fiddler backend URL (OTLP HTTP endpoint) |
| `console_tracer` | `bool` | ✗ | `False` | Enable console output for debugging |
| `span_limits` | `SpanLimits` | ✗ | Restrictive defaults | OpenTelemetry span limits configuration |
| `sampler` | `Sampler` | ✗ | `None` | OpenTelemetry sampling configuration |
| `compression` | `Compression` | ✗ | `Compression.Gzip` | OTLP export compression type (`Gzip`, `Deflate`, `NoCompression`) |
Methods
get_tracer()
Returns an OpenTelemetry tracer instance for creating spans. Initializes the tracer on the first call.
Parameters: None

Returns: `Tracer`, an OpenTelemetry tracer instance

Raises: `RuntimeError` if tracer initialization fails
Basic Example
```python
from fiddler_langgraph import FiddlerClient

# Initialize the FiddlerClient
fdl_client = FiddlerClient(
    api_key="your-fiddler-access-token",
    application_id="your-uuid4-application-id",  # Must be a valid UUID4
    url="https://your-instance.fiddler.ai",
)
```
LangGraphInstrumentor
Automatically instruments LangGraph applications to capture execution traces.
Overview
`LangGraphInstrumentor` provides automatic instrumentation for LangGraph applications, capturing detailed execution traces without requiring manual span creation. It hooks into LangGraph's execution flow to provide comprehensive observability.
Constructor
```python
from fiddler_langgraph.tracing.instrumentation import LangGraphInstrumentor

instrumentor = LangGraphInstrumentor(client)
```
Parameters
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `client` | `FiddlerClient` | ✓ | Configured FiddlerClient instance |
Methods
instrument()
Enables automatic instrumentation for LangGraph applications.
Parameters: None

Returns: None
Effects:
Instruments all LangGraph execution components
Captures spans for chains, tools, LLMs, and retrievers
Automatically tracks agent workflows and decision flows
uninstrument()
Disables automatic instrumentation.
Parameters: None

Returns: None
Complete Example
```python
from fiddler_langgraph import FiddlerClient
from fiddler_langgraph.tracing.instrumentation import LangGraphInstrumentor

# Initialize client
fdl_client = FiddlerClient(
    api_key="your-api-key",
    application_id="your-app-id",
    url="https://your-instance.fiddler.ai",
)

# Instrument application
instrumentor = LangGraphInstrumentor(fdl_client)
instrumentor.instrument()

# Your LangGraph code runs normally with automatic tracing
```
LangChain Application Support
The SDK supports both LangGraph and LangChain applications. The SDK extracts agent names from LangGraph applications automatically, but LangChain applications must set the agent name explicitly through the `configurable` runtime parameter:
```python
from langchain_core.output_parsers import StrOutputParser

# Define your LangChain runnable using LangChain Expression Language (LCEL)
chat_app_chain = prompt | llm | StrOutputParser()

# Run with agent name configuration
response = chat_app_chain.invoke(
    {
        "input": user_input,
        "history": messages,
    },
    config={"configurable": {"agent_name": "service_chatbot"}},
)
```
Important: If you don't provide an agent name for LangChain applications, it will appear as "UNKNOWN_AGENT" in the Fiddler UI. All other features including conversation ID, LLM context, and attribute structure work the same as with LangGraph.
Utility Functions
set_llm_context
Enriches LLM-specific spans with additional context information.
Overview
When you instrument your application, Fiddler creates traces and spans to track the execution flow. This function allows you to add a custom context string to the specific spans associated with a particular language model. This context does not alter the model's behavior or change the prompts it receives. Instead, it serves as a label or note that will appear in your Fiddler dashboard, helping you better understand or categorize the model's operations during later analysis.
Function Signature
```python
from fiddler_langgraph.tracing.instrumentation import set_llm_context

set_llm_context(model, context)
```
Parameters
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `model` | `object` | ✓ | LangChain LLM instance (e.g., `ChatOpenAI`, `ChatAnthropic`) |
| `context` | `str` | ✓ | Descriptive context string for the model |
Returns: None
Examples
```python
from langchain_openai import ChatOpenAI
from fiddler_langgraph.tracing.instrumentation import set_llm_context

# Basic usage
model = ChatOpenAI(model='gpt-4o-mini')
set_llm_context(model, "Customer Sentiment Analyzer")

# Use cases for different model roles
intent_model = ChatOpenAI(model='gpt-4o-mini')
set_llm_context(intent_model, "Intent Detection Model")

summary_model = ChatOpenAI(model='gpt-4o')
set_llm_context(summary_model, "Document Summarization - Legal Contracts")

classification_model = ChatOpenAI(model='gpt-4o-mini')
set_llm_context(classification_model, "Content Classification - Safety Filter")
```
set_conversation_id
Enables end-to-end tracing of multi-step workflows and conversations.
Overview
The primary purpose of `set_conversation_id` is to enable end-to-end tracing of a multi-step workflow. Modern agentic applications often involve a complex sequence of events to fulfill a single user request. Tagging each step with the same conversation ID means that, in your Fiddler dashboard, you can instantly filter for and view the entire ordered sequence of operations that constituted a single conversation or task. This is crucial for debugging complex failures, analyzing latency across an entire workflow, and understanding the agent's behavior from start to finish.
Function Signature
```python
from fiddler_langgraph.tracing.instrumentation import set_conversation_id

set_conversation_id(conversation_id)
```
Parameters
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `conversation_id` | `str` | ✓ | Unique identifier for the conversation session |
Returns: None
Examples
```python
import uuid

from langgraph.prebuilt import create_react_agent
from fiddler_langgraph.tracing.instrumentation import set_conversation_id

# Basic usage
agent = create_react_agent(model, tools=[])
conversation_id = str(uuid.uuid4())
set_conversation_id(conversation_id)
agent.invoke({"messages": [{"role": "user", "content": "Write me a novel"}]})

# Multi-turn conversation tracking
def handle_conversation(user_id, session_id):
    # Create a unique conversation ID combining user and session
    conversation_id = f"{user_id}_{session_id}_{uuid.uuid4()}"
    set_conversation_id(conversation_id)
    # All subsequent interactions will be grouped under this ID
    return conversation_id

# Different conversation types
business_conversation_id = f"business_{uuid.uuid4()}"
support_conversation_id = f"support_{uuid.uuid4()}"
```
Configuration Options
Basic Configuration
```python
from fiddler_langgraph import FiddlerClient

fdl_client = FiddlerClient(
    api_key="your-api-key",
    application_id="your-app-id",  # Must be a valid UUID4
    url="https://your-instance.fiddler.ai",
)
```
Advanced Configuration
Custom Span Limits
Configure span limits for high-volume applications to control resource usage and data volume.
```python
from fiddler_langgraph import FiddlerClient
from opentelemetry.sdk.trace import SpanLimits

# Custom span limits for high-volume applications
custom_limits = SpanLimits(
    max_events=64,                   # Default: 32
    max_links=64,                    # Default: 32
    max_span_attributes=64,          # Default: 32
    max_event_attributes=64,         # Default: 32
    max_link_attributes=64,          # Default: 32
    max_span_attribute_length=4096,  # Default: 2048
)

client = FiddlerClient(
    api_key="your-api-key",
    application_id="your-app-id",
    url="https://your-instance.fiddler.ai",
    span_limits=custom_limits,
)
```
Sampling Configuration
Control what percentage of traces are captured and sent to Fiddler.
```python
from fiddler_langgraph import FiddlerClient
from opentelemetry.sdk.trace.sampling import TraceIdRatioBased

# Sample 10% of traces for production environments
sampler = TraceIdRatioBased(0.1)

client = FiddlerClient(
    api_key="your-api-key",
    application_id="your-app-id",
    url="https://your-instance.fiddler.ai",
    sampler=sampler,
)
```
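`TraceIdRatioBased` makes a deterministic keep/drop decision from the trace ID itself, so every span in a trace receives the same verdict and the decision is reproducible across services without coordination. The core idea can be sketched as follows (a simplified illustration of the technique, not the OpenTelemetry implementation):

```python
MAX_TRACE_ID = 2 ** 64  # decisions are made on the lower 8 bytes of the trace ID


def should_sample(trace_id: int, ratio: float) -> bool:
    """Keep the trace iff its lower 64 bits fall below ratio * 2**64."""
    bound = int(ratio * MAX_TRACE_ID)
    return (trace_id & (MAX_TRACE_ID - 1)) < bound
```

Because trace IDs are random, roughly `ratio` of all traces fall under the bound, and a given trace ID always yields the same answer.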
Compression Options
Configure data compression to optimize network usage.
```python
from fiddler_langgraph import FiddlerClient
from opentelemetry.exporter.otlp.proto.http.trace_exporter import Compression

# Enable gzip compression (default, recommended for production)
client = FiddlerClient(
    api_key="your-api-key",
    application_id="your-app-id",
    url="https://your-instance.fiddler.ai",
    compression=Compression.Gzip,
)

# Disable compression (useful for debugging)
client = FiddlerClient(
    api_key="your-api-key",
    application_id="your-app-id",
    url="https://your-instance.fiddler.ai",
    compression=Compression.NoCompression,
)

# Use deflate compression (alternative to gzip)
client = FiddlerClient(
    api_key="your-api-key",
    application_id="your-app-id",
    url="https://your-instance.fiddler.ai",
    compression=Compression.Deflate,
)
```
Environment Variables for Batch Processing
Configure OpenTelemetry batch processor behavior through environment variables.
```python
import os

from fiddler_langgraph import FiddlerClient

# Configure batch processing parameters before initializing the client
os.environ['OTEL_BSP_MAX_QUEUE_SIZE'] = '500'         # Default: 100
os.environ['OTEL_BSP_SCHEDULE_DELAY_MILLIS'] = '500'  # Default: 1000
os.environ['OTEL_BSP_MAX_EXPORT_BATCH_SIZE'] = '50'   # Default: 10
os.environ['OTEL_BSP_EXPORT_TIMEOUT'] = '10000'       # Default: 5000

client = FiddlerClient(
    api_key="your-api-key",
    application_id="your-app-id",
    url="https://your-instance.fiddler.ai",
)
```
Production Configuration Example
```python
import os

from opentelemetry.sdk.trace import SpanLimits, sampling
from opentelemetry.exporter.otlp.proto.http.trace_exporter import Compression
from fiddler_langgraph import FiddlerClient

# Configure batch processing before initializing FiddlerClient
os.environ['OTEL_BSP_MAX_QUEUE_SIZE'] = '500'         # Default: 100
os.environ['OTEL_BSP_SCHEDULE_DELAY_MILLIS'] = '500'  # Default: 1000
os.environ['OTEL_BSP_MAX_EXPORT_BATCH_SIZE'] = '50'   # Default: 10
os.environ['OTEL_BSP_EXPORT_TIMEOUT'] = '10000'       # Default: 5000

# Production-optimized configuration
production_limits = SpanLimits(
    max_events=128,
    max_links=64,
    max_span_attributes=128,
    max_event_attributes=64,
    max_link_attributes=32,
    max_span_attribute_length=8192,
)

# Sample 5% of traces in production
production_sampler = sampling.TraceIdRatioBased(0.05)

client = FiddlerClient(
    api_key=os.getenv("FIDDLER_API_KEY"),
    application_id=os.getenv("FIDDLER_APPLICATION_ID"),
    url=os.getenv("FIDDLER_URL"),
    console_tracer=False,
    span_limits=production_limits,
    sampler=production_sampler,
    compression=Compression.Gzip,
)
```
Span Attributes Reference
The Fiddler LangGraph SDK captures standardized OpenTelemetry attributes for comprehensive observability.
Common Attributes
| Attribute | Type | Description |
| --- | --- | --- |
| `gen_ai.agent.name` | `str` | Name of the AI agent |
| `gen_ai.agent.id` | `str` | Unique identifier for the agent (format: `trace_id:agent_name`) |
| `gen_ai.conversation.id` | `str` | Session/conversation identifier |
| `fiddler.span.type` | `str` | Type of span (`chain`, `tool`, `llm`, `other`) |
LLM-Specific Attributes
| Attribute | Type | Description |
| --- | --- | --- |
| `gen_ai.llm.input.system` | `str` | System prompt content |
| `gen_ai.llm.input.user` | `str` | User input/prompt |
| `gen_ai.llm.output` | `str` | Model response |
| `gen_ai.llm.context` | `str` | Additional context provided via `set_llm_context()` |
| `gen_ai.llm.model` | `str` | Model name (e.g., "gpt-4o-mini") |
| `gen_ai.llm.token_count` | `int` | Token usage information |
Tool-Specific Attributes
| Attribute | Type | Description |
| --- | --- | --- |
| `gen_ai.tool.name` | `str` | Name of the tool being invoked |
| `gen_ai.tool.input` | `str` | Tool input parameters (JSON) |
| `gen_ai.tool.output` | `str` | Tool execution results (JSON) |
Performance Attributes
| Attribute | Type | Description |
| --- | --- | --- |
| `duration_ms` | `float` | Span duration in milliseconds |
| `fiddler.error.message` | `str` | Error message if span failed |
| `fiddler.error.type` | `str` | Error type classification |
Environment Variables Reference
| Variable | Default | Description |
| --- | --- | --- |
| `OTEL_BSP_MAX_QUEUE_SIZE` | 100 | Maximum spans in queue |
| `OTEL_BSP_SCHEDULE_DELAY_MILLIS` | 1000 | Delay between batch exports (ms) |
| `OTEL_BSP_MAX_EXPORT_BATCH_SIZE` | 10 | Maximum spans per batch |
| `OTEL_BSP_EXPORT_TIMEOUT` | 5000 | Export timeout (ms) |
| `FIDDLER_API_KEY` | - | Fiddler API key (recommended) |
| `FIDDLER_APPLICATION_ID` | - | Application UUID4 (recommended) |
| `FIDDLER_URL` | - | Fiddler instance URL (recommended) |
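A convenient deployment pattern is to let explicitly set environment variables win while supplying fallbacks in code; the stdlib `os.environ.setdefault` does exactly that. A sketch using the batch-processor variables from the table (the fallback values here are the documented defaults, shown for illustration):

```python
import os

# Apply fallbacks only where the deployment has not already set a value.
defaults = {
    "OTEL_BSP_MAX_QUEUE_SIZE": "100",
    "OTEL_BSP_SCHEDULE_DELAY_MILLIS": "1000",
    "OTEL_BSP_MAX_EXPORT_BATCH_SIZE": "10",
    "OTEL_BSP_EXPORT_TIMEOUT": "5000",
}
for name, value in defaults.items():
    os.environ.setdefault(name, value)
```

Run this before constructing the `FiddlerClient`, since the batch processor reads these variables at initialization.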
Error Handling
Common Error Scenarios
Invalid Application ID
```python
# ❌ Invalid UUID4 format
client = FiddlerClient(
    api_key="valid-key",
    application_id="invalid-id",  # Not UUID4 format
    url="https://instance.fiddler.ai",
)
# Raises: ValueError: application_id must be a valid UUID4 string
```
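You can catch this before constructing the client with a small validator built on the stdlib `uuid` module. This helper is a sketch, not an SDK function; the SDK performs its own validation regardless:

```python
import uuid


def is_valid_uuid4(value: str) -> bool:
    """Return True only for strings that parse as a version-4 UUID."""
    try:
        return uuid.UUID(value).version == 4
    except (ValueError, AttributeError, TypeError):
        # Not parseable as any UUID, or not a string at all
        return False
```

This rejects both malformed strings and well-formed UUIDs of other versions (e.g., UUID1).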
Network Connectivity Issues
```python
try:
    client = FiddlerClient(
        api_key="your-key",
        application_id="550e8400-e29b-41d4-a716-446655440000",
        url="https://unreachable-instance.fiddler.ai",
    )
    instrumentor = LangGraphInstrumentor(client)
    instrumentor.instrument()
except Exception as e:
    print(f"Connection error: {e}")
    # Handle gracefully - your application continues without tracing
```
Debugging with Console Tracer
```python
# Enable console output to debug trace generation
client = FiddlerClient(
    api_key="your-key",
    application_id="your-app-id",
    url="https://your-instance.fiddler.ai",
    console_tracer=True,  # Prints spans to the console instead of sending them to Fiddler
)
```
Best Practices
Development Environment
```python
import os

from fiddler_langgraph import FiddlerClient

# Development configuration with verbose logging
client = FiddlerClient(
    api_key=os.getenv("FIDDLER_API_KEY"),
    application_id=os.getenv("FIDDLER_APPLICATION_ID"),
    url=os.getenv("FIDDLER_URL"),
    console_tracer=True,  # Enable for debugging
    sampler=None,         # Capture all traces
)
```
Production Environment
```python
import os

from opentelemetry.sdk.trace import sampling
from opentelemetry.exporter.otlp.proto.http.trace_exporter import Compression
from fiddler_langgraph import FiddlerClient

# Production configuration optimized for performance
client = FiddlerClient(
    api_key=os.getenv("FIDDLER_API_KEY"),
    application_id=os.getenv("FIDDLER_APPLICATION_ID"),
    url=os.getenv("FIDDLER_URL"),
    console_tracer=False,                      # Disable console output
    sampler=sampling.TraceIdRatioBased(0.05),  # Sample 5% of traces
    compression=Compression.Gzip,              # Enable compression
    span_limits=production_limits,             # Conservative limits (see Production Configuration Example)
)
```
Context and Conversation Management
```python
# Set meaningful context labels
set_llm_context(model, "Customer Support - Tier 1")
set_llm_context(model, "Content Generation - Marketing Copy")
set_llm_context(model, "Data Analysis - Financial Reports")

# Use structured conversation IDs
conversation_id = f"{user_id}_{session_type}_{timestamp}_{uuid.uuid4()}"
set_conversation_id(conversation_id)
```
SDK Limitations
Current Limitations
Framework Support: Supports LangGraph and LangChain applications only; other frameworks require direct Client API usage
Protocol Support: Uses HTTP-based OTLP; gRPC support planned for future releases
Attribute Limits: Default limits prevent oversized spans; configurable for high-volume use cases
Breaking Changes: As an alpha release, future versions may include breaking changes
Performance Considerations
High-volume applications: Increase span limits and batch processing parameters
Low-latency requirements: Decrease batch schedule delay
Memory constraints: Use restrictive span limits and smaller batch sizes
Production environments: Use appropriate sampling strategies to control data volume
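As a rough sizing aid for the tradeoffs above: the batch parameters bound steady-state export throughput at roughly one batch of `max_export_batch_size` spans per `schedule_delay` interval. A back-of-the-envelope calculation, assuming one export per delay interval and ignoring network time:

```python
def max_spans_per_second(batch_size: int, schedule_delay_millis: int) -> float:
    """Upper bound on the sustained span export rate of one batch processor."""
    return batch_size * 1000 / schedule_delay_millis


# Defaults: 10 spans per batch every 1000 ms -> 10 spans/sec
# Tuned:    50 spans per batch every 500 ms  -> 100 spans/sec
```

If your application generates spans faster than this bound for sustained periods, the queue fills and spans are dropped, which is the signal to raise the batch size or lower the delay.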
Compatibility
Supported Python Versions
Python 3.10, 3.11, 3.12, 3.13
Required Dependencies
- `opentelemetry-api` (1.34.1)
- `opentelemetry-sdk` (1.34.1)
- `opentelemetry-instrumentation` (0.55b1)
- `opentelemetry-exporter-otlp-proto-http` (1.34.1)
- `langgraph` (>= 0.3.28, < 0.5.2)
- `langchain` (>= 0.3.26)
See Also
Quick Start Guide: Get started in under 10 minutes
Advanced Tutorial: Complex multi-agent scenarios
OpenTelemetry Python Documentation: Underlying instrumentation framework
Fiddler Platform Documentation: Complete platform capabilities