LangGraph SDK Quick Start
Instrument your LangGraph or LangChain application with the Fiddler LangGraph SDK in under 10 minutes.
What You'll Learn
By completing this quick start, you'll:
Set up monitoring for a LangGraph or LangChain application
Send your first traces to Fiddler
Verify data collection in the Fiddler dashboard
Understand basic conversation tracking
Set Up Your Fiddler Application
Create your application in Fiddler
Log in to your Fiddler instance and navigate to GenAI Apps, then select Add Application.

Copy your Application ID
After creating your application, copy the Application ID from the application details page. This must be a valid UUID4 format (for example,
550e8400-e29b-41d4-a716-446655440000). You'll need this for Step 3.
Get Your Access Token
Go to Settings > Credentials and copy your access token. You'll need this for Step 3. Refer to the documentation for more details.

Instrument Your Application
Add the Fiddler LangGraph SDK to your LangGraph or LangChain application with just a few lines of code:
from fiddler_langgraph import FiddlerClient
from fiddler_langgraph.tracing.instrumentation import LangGraphInstrumentor

# Initialize the FiddlerClient. Replace the placeholder values below.
fdl_client = FiddlerClient(
    api_key='<FIDDLER_API_TOKEN>',  # Your access token
    application_id='<FIDDLER_APPLICATION_ID>',  # Application ID copied from UI in Step 1
    url='<FIDDLER_URL>',  # e.g., https://your-instance.fiddler.ai
)

# Instrument your application. You MUST instrument BEFORE invoking your agent.
instrumentor = LangGraphInstrumentor(fdl_client)
instrumentor.instrument()

# Invoke your agent here. Your existing LangGraph code runs normally,
# and traces are automatically sent to Fiddler.
Add Context and Conversation Tracking
Setting context and a conversation ID enriches the telemetry sent to Fiddler and lets you group multi-turn interactions:
from fiddler_langgraph.tracing.instrumentation import set_llm_context, set_conversation_id
import uuid

# Set descriptive context for LLM processing ('model' is your chat model instance)
set_llm_context(model, 'Customer support conversation')

# Set a conversation ID for tracking multi-turn conversations
conversation_id = str(uuid.uuid4())
set_conversation_id(conversation_id)
Run a Complete Example
Here's a complete working example to verify your setup:
import os
import uuid
from fiddler_langgraph import FiddlerClient
from fiddler_langgraph.tracing.instrumentation import (
    LangGraphInstrumentor,
    set_llm_context,
    set_conversation_id,
)
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
# Initialize the FiddlerClient. Replace the placeholder values below.
fdl_client = FiddlerClient(
    api_key='<FIDDLER_API_TOKEN>',  # Your access token
    application_id='<FIDDLER_APPLICATION_ID>',  # Application ID copied from UI in Step 1
    url='<FIDDLER_URL>',  # e.g., https://your-instance.fiddler.ai
)
# Instrument the application
instrumentor = LangGraphInstrumentor(fdl_client)
instrumentor.instrument()
# Create your agent
model = ChatOpenAI(model="gpt-4o-mini")
agent = create_react_agent(model, tools=[])
# Set descriptive context for this interaction
set_llm_context(model, "Quick start example conversation")
# Generate and set a conversation ID
conversation_id = str(uuid.uuid4())
set_conversation_id(conversation_id)
# Run your agent - automatically instrumented
result = agent.invoke({
    "messages": [{"role": "user", "content": "Hello! How are you?"}]
})
print("Response received:", result)
print("Conversation ID:", conversation_id)
Verify Monitoring is Working
Run your application using the example above or your own instrumented code
Check the Fiddler dashboard: Navigate to GenAI Apps in your Fiddler instance
Confirm active status: If Fiddler successfully receives telemetry, your application will show as Active

Success Criteria
You should see:
Application status changed to Active in the Fiddler dashboard
Trace data appearing within 1-2 minutes of running your example
Context labels matching what you set in your code
The conversation ID visible in the trace details

Grant Team Access (optional)
Provide access to other users by assigning teams and users to the project that contains your applications. Managing permissions through teams is recommended as a best practice. For more access control details, refer to the Teams and Users Guide and the Role-based Access Guide.
Open the Settings page and select the Access tab
For both Users and Teams, select the "Edit" option to the right of the name
Add appropriate team members with the required permission levels
Troubleshooting
Common Issues and Solutions
Problem: Application shows as "Inactive"
Ensure your application executes instrumented code
Verify that your Fiddler access token and application ID are correct
Check network connectivity to your Fiddler instance
Enable console tracer to see if traces are being generated locally:
fdl_client = FiddlerClient(..., console_tracer=True)
Problem: Import errors
Verify Python version is 3.10 or higher
Ensure LangGraph is installed and compatible (versions >= 0.3.28 and < 0.7.0)
Try reinstalling the SDK:
pip uninstall fiddler-langgraph && pip install fiddler-langgraph
Problem: Invalid Application ID
Ensure your application ID is in proper UUID4 format
Copy the ID directly from the Fiddler dashboard
Verify the application exists in your Fiddler instance
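If you want to sanity-check an ID locally before starting your application, here is a minimal sketch using Python's standard uuid module (the helper name is ours, not part of the SDK):

```python
import uuid

def is_valid_uuid4(value: str) -> bool:
    """Return True if 'value' parses as a version-4 UUID."""
    try:
        return uuid.UUID(value).version == 4
    except (ValueError, AttributeError, TypeError):
        return False

print(is_valid_uuid4("550e8400-e29b-41d4-a716-446655440000"))  # True
print(is_valid_uuid4("invalid-id"))                            # False
```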
Problem: Network connectivity issues
Test connectivity:
curl -I https://your-instance.fiddler.ai
Check firewall settings for HTTPS traffic on port 443
Verify your Fiddler instance URL is correct
Problem: Agent appears as "UNKNOWN_AGENT" in Fiddler UI
For LangChain applications, ensure you're setting the agent name in the config parameter
Example:
config={"configurable": {"agent_name": "your_agent_name"}}
Next Steps
Now that your application is instrumented:
Explore the data: Check your Fiddler dashboard for traces, metrics, and performance insights
Learn advanced features: See our Advanced Usage Tutorial for complex multi-agent scenarios
Review the SDK reference: Check the Fiddler LangGraph SDK Reference for complete documentation
Optimize for production: Review configuration options for high-volume applications
Configuration Options
Basic Configuration
from fiddler_langgraph import FiddlerClient

fdl_client = FiddlerClient(
    api_key="your-api-key",
    application_id="your-app-id",  # Must be valid UUID4
    url="https://your-instance.fiddler.ai",
)
Advanced Configuration
Customize Limits for High-Volume Applications
Set limits for your events, spans, and associated attributes. This is helpful for tuning reporting data to manageable numbers for highly attributed and/or high-volume applications.
from opentelemetry.sdk.trace import SpanLimits
from fiddler_langgraph import FiddlerClient
# Custom span limits for high-volume applications
custom_limits = SpanLimits(
    max_events=64,                   # Default: 32
    max_links=64,                    # Default: 32
    max_span_attributes=64,          # Default: 32
    max_event_attributes=64,         # Default: 32
    max_link_attributes=64,          # Default: 32
    max_span_attribute_length=4096,  # Default: 2048
)

client = FiddlerClient(
    api_key="your-api-key",
    application_id="your-app-id",  # Must be valid UUID4
    url="https://your-instance.fiddler.ai",
    span_limits=custom_limits,
)
Sampling Traffic
Set a specific percentage for sampling the incoming data.
from opentelemetry.sdk.trace import sampling
from fiddler_langgraph import FiddlerClient
# Sampling strategy for production
sampler = sampling.TraceIdRatioBased(0.1) # Sample 10% of traces
client = FiddlerClient(
    api_key="your-api-key",
    application_id="your-app-id",  # Must be valid UUID4
    url="https://your-instance.fiddler.ai",
    sampler=sampler,
)
Environment Variables for Batch Processing
The FiddlerClient honors the standard OpenTelemetry batch span processor environment variables. Set them before creating the client to tune how traces are queued and exported:
import os
from fiddler_langgraph import FiddlerClient
# Configure batch processing
os.environ['OTEL_BSP_MAX_QUEUE_SIZE'] = '500' # Default: 100
os.environ['OTEL_BSP_SCHEDULE_DELAY_MILLIS'] = '500' # Default: 1000
os.environ['OTEL_BSP_MAX_EXPORT_BATCH_SIZE'] = '50' # Default: 10
os.environ['OTEL_BSP_EXPORT_TIMEOUT'] = '10000' # Default: 5000
client = FiddlerClient(
    api_key="your-api-key",
    application_id="your-app-id",  # Must be valid UUID4
    url="https://your-instance.fiddler.ai",
)
Compression Options
The SDK supports data compression to reduce the volume of telemetry transmitted over the network, which can improve latency on constrained connections.
from opentelemetry.exporter.otlp.proto.http.trace_exporter import Compression
from fiddler_langgraph import FiddlerClient
# Enable gzip compression (default, recommended for production)
client = FiddlerClient(
    api_key="your-api-key",
    application_id="your-app-id",  # Must be valid UUID4
    url="https://your-instance.fiddler.ai",
    compression=Compression.Gzip,
)

# Disable compression (useful for debugging or local development)
client = FiddlerClient(
    api_key="your-api-key",
    application_id="your-app-id",  # Must be valid UUID4
    url="https://your-instance.fiddler.ai",
    compression=Compression.NoCompression,
)

# Use deflate compression (alternative to gzip)
client = FiddlerClient(
    api_key="your-api-key",
    application_id="your-app-id",  # Must be valid UUID4
    url="https://your-instance.fiddler.ai",
    compression=Compression.Deflate,
)
Environment Variables Reference
Configure OpenTelemetry batch processor behavior through environment variables:
Variable | Default | Description
OTEL_BSP_MAX_QUEUE_SIZE | 100 | Maximum spans in queue before export
OTEL_BSP_SCHEDULE_DELAY_MILLIS | 1000 | Delay between batch exports (milliseconds)
OTEL_BSP_MAX_EXPORT_BATCH_SIZE | 10 | Maximum spans exported per batch
OTEL_BSP_EXPORT_TIMEOUT | 5000 | Export timeout (milliseconds)
FIDDLER_API_KEY | - | Your Fiddler API key (recommended for production)
FIDDLER_APPLICATION_ID | - | Your application UUID4 (recommended for production)
FIDDLER_URL | - | Your Fiddler instance URL (recommended for production)
Example: Tuning for high-volume applications
import os

os.environ['OTEL_BSP_MAX_QUEUE_SIZE'] = '500'
os.environ['OTEL_BSP_SCHEDULE_DELAY_MILLIS'] = '500'
os.environ['OTEL_BSP_MAX_EXPORT_BATCH_SIZE'] = '50'
os.environ['OTEL_BSP_EXPORT_TIMEOUT'] = '10000'
LangChain Application Support
The SDK supports both LangGraph and LangChain applications. While agent names are automatically extracted from LangGraph applications by the SDK, LangChain applications need the agent name to be explicitly set using the configuration parameter:
from langchain_core.output_parsers import StrOutputParser

# Define your LangChain runnable using LangChain Expression Language (LCEL).
# 'prompt' and 'llm' are your existing prompt template and chat model.
chat_app_chain = prompt | llm | StrOutputParser()

# Run with the agent name set in the config parameter
response = chat_app_chain.invoke(
    {
        "input": user_input,
        "history": messages,
    },
    config={"configurable": {"agent_name": "service_chatbot"}},
)
Troubleshooting
Common Installation Issues
Problem: ModuleNotFoundError: No module named 'fiddler_langgraph'
Solution: Ensure you've installed the correct package:
pip install fiddler-langgraph
Problem: Version conflicts with existing packages
Solution: Use a virtual environment or update conflicting packages
Common Configuration Issues
Problem: ValueError: application_id must be a valid UUID4
Solution: Ensure your Application ID is a valid UUID4 format (e.g.,
550e8400-e29b-41d4-a716-446655440000)
Example:
# ❌ This will fail
client = FiddlerClient(
    api_key="your-access-token",
    application_id="invalid-id",  # Not a valid UUID4
    url="https://instance.fiddler.ai"
)
# Raises: ValueError: application_id must be a valid UUID4 string

# ✅ Correct format
client = FiddlerClient(
    api_key="your-access-token",
    application_id="550e8400-e29b-41d4-a716-446655440000",  # Valid UUID4
    url="https://instance.fiddler.ai"
)
Problem: ValueError: URL must have a valid scheme and netloc
Solution: Ensure your URL includes the protocol (e.g.,
https://your-instance.fiddler.ai)
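To check a URL locally before passing it to the client, here is a minimal sketch using Python's standard urllib.parse module (the helper name is ours, not part of the SDK):

```python
from urllib.parse import urlparse

def has_valid_scheme_and_netloc(url: str) -> bool:
    """Return True if the URL includes both a scheme (e.g., https) and a host."""
    parsed = urlparse(url)
    return bool(parsed.scheme) and bool(parsed.netloc)

print(has_valid_scheme_and_netloc("https://your-instance.fiddler.ai"))  # True
print(has_valid_scheme_and_netloc("your-instance.fiddler.ai"))          # False (no scheme)
```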
Problem: Connection errors or timeouts
Solution: Check your network connectivity and Fiddler instance URL
Example: Handling connection failures gracefully
try:
    client = FiddlerClient(
        api_key=os.getenv("FIDDLER_API_KEY"),
        application_id=os.getenv("FIDDLER_APPLICATION_ID"),
        url="https://your-instance.fiddler.ai"
    )
    instrumentor = LangGraphInstrumentor(client)
    instrumentor.instrument()
except Exception as e:
    print(f"Fiddler instrumentation failed: {e}")
    # Your application continues running without tracing
Problem: Debugging trace generation
Solution: Enable console tracer for local debugging:
fdl_client = FiddlerClient(
    api_key="your-access-token",
    application_id="your-app-id",
    console_tracer=True,  # Prints spans to console (does NOT send to Fiddler)
)
When console_tracer=True, traces are printed locally and NOT sent to Fiddler. Use only for debugging.
Problem: ImportError: cannot import name 'LangGraphInstrumentor'
Solution: Ensure you have the correct import path:
from fiddler_langgraph.tracing.instrumentation import LangGraphInstrumentor
Verification Issues
Problem: Application not showing as "Active" in Fiddler
Solution: Check the following:
Ensure your application executes instrumented code
Verify that your Fiddler access token and application ID are correct
Check network connectivity to your Fiddler instance
Enable console tracer to see if traces are being generated locally
Next Steps
Now that your application is instrumented:
Explore the data: Check your Fiddler dashboard for traces and metrics
Learn advanced features: See our Tutorial: Advanced Usage for more complex scenarios
Review the SDK: Check the Fiddler LangGraph SDK Reference for complete documentation
Optimize for production: Review configuration options for high-volume applications
❓ Questions? Talk to a product expert or request a demo.
💡 Need help? Contact us at [email protected].