Fiddler LangGraph SDK

Auto-instrument LangGraph agents with Fiddler's native SDK

Auto-instrument your LangGraph agent applications with OpenTelemetry-based tracing for comprehensive agentic observability. The Fiddler LangGraph SDK provides automatic monitoring of complex multi-agent workflows, capturing every step from thought to action to execution.

What You'll Need

  • Fiddler account (cloud or on-premises)

  • Python 3.10, 3.11, 3.12, or 3.13

  • LangGraph or LangChain application

  • Fiddler API key and application ID

Quick Start

Get monitoring in 3 steps:

# Step 1: Install
pip install fiddler-langgraph
# Step 2: Initialize the Fiddler client
from fiddler_langgraph import FiddlerClient
from fiddler_langgraph.tracing.instrumentation import LangGraphInstrumentor

fdl_client = FiddlerClient(
    api_key='your-api-key',
    application_id='your-app-id',  # Must be valid UUID4
    url='https://your-instance.fiddler.ai'
)

# Step 3: Instrument your application
instrumentor = LangGraphInstrumentor(fdl_client)
instrumentor.instrument()

# Your existing LangGraph code runs normally
# Traces will automatically be sent to Fiddler

That's it! Your agent traces are now flowing to Fiddler.

What Gets Monitored

The LangGraph SDK automatically captures:

Hierarchical Tracing

  • Application Level - Overall system performance and health

  • Session Level - User interaction and conversation flows

  • Agent Level - Individual agent behavior and decisions

  • Span Level - Tool calls, LLM requests, state transitions

Agent Lifecycle Stages

Every agent operation is tracked through five observable stages:

  1. Thought - Data ingestion, context retrieval, information interpretation

  2. Action - Planning processes, tool selection, decision-making

  3. Execution - Task performance, API calls, external integrations

  4. Reflection - Self-evaluation, learning signals, adaptation

  5. Alignment - Trust validation, safety checks, policy enforcement

Captured Data

  • Agent state transitions and decision points

  • Tool invocations with inputs and outputs

  • LLM API calls with prompts and responses

  • Execution times and latency metrics

  • Error traces and exception handling

  • Custom metadata and tags

Application Setup

Before instrumenting your application, you must create an application in Fiddler and obtain your Application ID:

1. Create Your Application in Fiddler

Log in to your Fiddler instance and navigate to GenAI Apps, then select Add Application.

[Screenshot: GenAI Apps list page with the Add Application modal]

2. Copy Your Application ID

After creating your application, copy the Application ID from the application details page. This must be a valid UUID4 format (for example, 550e8400-e29b-41d4-a716-446655440000). You'll need this for initialization.

[Screenshot: GenAI Apps list page showing the copy Application ID control]

3. Get Your Access Token

Go to Settings > Credentials and copy your access token. You'll need this for initialization.

[Screenshot: Settings > Credentials tab showing an access token]

Detailed Setup

Installation

Framework Compatibility:

  • LangGraph >= 0.3.28 and <= 1.0.2, or LangChain >= 0.3.28 and <= 1.0.2

  • Python: 3.10, 3.11, 3.12, or 3.13
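Assuming the package name shown in the Quick Start, an install that also pins LangGraph to the supported range looks like:

```shell
# Install the SDK and constrain LangGraph to the compatibility range above
pip install "fiddler-langgraph" "langgraph>=0.3.28,<=1.0.2"
```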

Configuration

Using Environment Variables

You can use environment variables instead of hardcoding credentials:

Environment Variables Reference:

  • FIDDLER_API_KEY - Your Fiddler API key (example: fid_...)

  • FIDDLER_APPLICATION_ID - Your application ID in UUID4 format (example: 550e8400-e29b-41d4-a716-446655440000)

  • FIDDLER_URL - Your Fiddler instance URL (example: https://your-instance.fiddler.ai)
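With those variables exported, initialization can read them with `os.environ` instead of hardcoded strings. A sketch using the client constructor from the Quick Start:

```python
import os

from fiddler_langgraph import FiddlerClient

# Credentials come from the environment rather than source code
fdl_client = FiddlerClient(
    api_key=os.environ["FIDDLER_API_KEY"],
    application_id=os.environ["FIDDLER_APPLICATION_ID"],  # must be a valid UUID4
    url=os.environ["FIDDLER_URL"],
)
```

Using `os.environ[...]` (rather than `os.getenv`) fails fast with a `KeyError` when a variable is missing, which surfaces configuration mistakes at startup instead of at export time.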

Advanced Usage

Adding Context and Metadata

Enrich traces with custom context and conversation tracking:

Custom Span and Session Attributes

Add custom attributes to individual spans or entire sessions:
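The SDK's own attribute helpers are documented in the SDK reference; because the SDK is built on OpenTelemetry (see the OpenTelemetry Compatibility section), one portable approach is standard OTel span attributes. The span and attribute names below are illustrative:

```python
from opentelemetry import trace

tracer = trace.get_tracer("my-agent-app")

# Attributes set on the current span are exported alongside the
# spans the instrumentor creates automatically.
with tracer.start_as_current_span("plan-trip") as span:
    span.set_attribute("user.id", "user-123")
    span.set_attribute("conversation.id", "conv-456")
```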

Sampling Configuration

Control trace sampling for high-volume applications:

For production deployments, consider these sampling strategies:

  • High-volume applications: Sample 5-10% (TraceIdRatioBased(0.05))

  • Development/testing: Sample 100% (default - no sampler specified)

  • Cost optimization: Sample 1-5% (TraceIdRatioBased(0.01))
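The strategies above can be sketched with the standard OpenTelemetry SDK sampler, assuming you configure the tracer provider yourself (the Fiddler client may also expose its own sampling option; check the SDK reference):

```python
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.sampling import TraceIdRatioBased

# Keep roughly 5% of traces. Sampling is decided once per trace ID,
# so all spans within a kept trace are exported together.
provider = TracerProvider(sampler=TraceIdRatioBased(0.05))
```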

Production Configuration

For high-volume production applications, configure span limits and batch processing:
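A configuration sketch using standard OpenTelemetry SDK components (`my_otlp_exporter` is a placeholder for your configured OTLP exporter):

```python
from opentelemetry.sdk.trace import SpanLimits, TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Cap per-span data so one oversized prompt cannot exhaust memory
limits = SpanLimits(max_attributes=128, max_events=128)
provider = TracerProvider(span_limits=limits)

# Batch exports: larger queue and batches, flushed every 5 seconds
processor = BatchSpanProcessor(
    my_otlp_exporter,  # hypothetical: your configured OTLP span exporter
    max_queue_size=4096,
    max_export_batch_size=512,
    schedule_delay_millis=5000,
)
provider.add_span_processor(processor)
```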

Example Applications

Multi-Agent Travel Planner

View complete example notebook β†’

Customer Support Agent with Tools

Viewing Your Data

After running your instrumented application:

  1. Navigate to Fiddler UI - https://your-instance.fiddler.ai

  2. Select "GenAI Apps" - View your application

  3. Inspect traces - Drill down from application β†’ session β†’ agent β†’ span

  4. Analyze patterns - Use analytics to identify bottlenecks and errors

Key Metrics Tracked

  • Latency: P50, P95, P99 response times across agents

  • Error Rate: Percentage of failed agent executions

  • Token Usage: LLM token consumption per agent/session

  • Tool Calls: Frequency and success rate of tool invocations

  • State Transitions: Agent decision path analysis

Troubleshooting

Application Not Showing as "Active"

Check your configuration:

  • Ensure your application executes instrumented code

  • Verify your Fiddler access token and application ID are correct

  • Check network connectivity to your Fiddler instance

Enable console tracer for debugging:
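The Fiddler client may expose its own debug setting; a generic OpenTelemetry console exporter, sketched below, prints every span to stdout so you can confirm spans are being created at all:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Print each span to stdout as soon as it ends
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
```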

Network Connectivity Issues

Verify connectivity to your Fiddler instance:
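A quick reachability check from the machine running your agent (substitute your instance URL):

```shell
# Prints an HTTP status code if the host is reachable over TLS;
# a timeout or connection error points to network/firewall issues.
curl -sS -o /dev/null -w "%{http_code}\n" https://your-instance.fiddler.ai
```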

Check firewall settings:

  • Ensure HTTPS traffic on port 443 is allowed

  • Verify your Fiddler instance URL is correct

Import Errors

Problem: ModuleNotFoundError: No module named 'fiddler_langgraph'

Solution: Ensure you've installed the correct package:
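Note that the PyPI package name uses a hyphen while the import name uses an underscore:

```shell
# Install the package (PyPI name: fiddler-langgraph)
pip install fiddler-langgraph

# Confirm the interpreter you are running can import it
python -c "import fiddler_langgraph; print(fiddler_langgraph.__name__)"
```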

Problem: ImportError: cannot import name 'LangGraphInstrumentor'

Solution: Ensure you have the correct import path:
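The instrumentor lives in a submodule, as shown in the Quick Start:

```python
# Correct import path:
from fiddler_langgraph.tracing.instrumentation import LangGraphInstrumentor

# A bare `from fiddler_langgraph import LangGraphInstrumentor` may raise
# ImportError; the class is in the tracing.instrumentation submodule.
```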

Version Compatibility Issues

Verify your versions match requirements:
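A stdlib-only check of installed versions, without importing the packages themselves:

```python
from importlib.metadata import PackageNotFoundError, version


def installed_versions(packages):
    """Return {package_name: version string, or None if not installed}."""
    found = {}
    for name in packages:
        try:
            found[name] = version(name)
        except PackageNotFoundError:
            found[name] = None
    return found


# Compare against the compatibility range in the Installation section
print(installed_versions(["fiddler-langgraph", "langgraph", "langchain"]))
```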

If you have version conflicts:
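One way to resolve them is to reinstall within the supported range:

```shell
# Force pip to resolve both packages against the supported range
pip install --upgrade "langgraph>=0.3.28,<=1.0.2" fiddler-langgraph
```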

Invalid Application ID

Problem: ValueError: application_id must be a valid UUID4

Solution: Ensure your Application ID is in proper UUID4 format:
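You can validate the format locally with the standard library before initializing the client:

```python
import uuid


def is_valid_uuid4(value: str) -> bool:
    """True if value parses as a version-4 UUID."""
    try:
        return uuid.UUID(value).version == 4
    except ValueError:
        return False


print(is_valid_uuid4("550e8400-e29b-41d4-a716-446655440000"))  # True
print(is_valid_uuid4("not-a-uuid"))                            # False
```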

Copy the Application ID directly from the Fiddler dashboard to avoid formatting issues.

Agent Shows as "UNKNOWN_AGENT"

For LangChain applications, ensure you're setting the agent name in the config parameter:
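A hypothetical sketch of passing an agent name through LangChain's standard RunnableConfig; `chain` is a placeholder for your runnable, and the exact key the Fiddler SDK reads should be confirmed in the SDK reference (`run_name` and `metadata` are standard RunnableConfig fields):

```python
# Hypothetical: supply an agent name via the config parameter so traces
# are labeled instead of showing "UNKNOWN_AGENT".
result = chain.invoke(
    {"question": "Where is my order?"},
    config={
        "run_name": "support-agent",
        "metadata": {"agent_name": "support-agent"},  # key name is an assumption
    },
)
```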

Note: LangGraph applications automatically extract agent names. This manual configuration is only needed for LangChain applications.

OpenTelemetry Compatibility

The LangGraph SDK is built on OpenTelemetry and exports trace data over the OpenTelemetry Protocol (OTLP). Because it uses standard OpenTelemetry components, you can:

  • Integrate with existing observability infrastructure

  • Export traces to multiple backends (with custom configuration)

  • Use custom OTEL collectors and processors

All telemetry data follows OpenTelemetry semantic conventions for AI/ML workloads.

Migration Guides

From LangSmith

From Manual Tracing

If you've built custom tracing, migration is straightforward:
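A before/after sketch, using the imports from the Quick Start (placeholder credential strings):

```python
# Before: manual spans wrapped around every agent step, e.g.
#   with tracer.start_as_current_span("agent-step"):
#       result = agent.invoke(state)

# After: instrument once at startup; spans are created automatically
from fiddler_langgraph import FiddlerClient
from fiddler_langgraph.tracing.instrumentation import LangGraphInstrumentor

fdl_client = FiddlerClient(api_key="...", application_id="...", url="...")
LangGraphInstrumentor(fdl_client).instrument()

# Existing graph code runs unchanged; manual span code can be removed.
```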

API Reference

Full SDK documentation:

Next Steps

Now that your application is instrumented:

  1. Explore the data: Check your Fiddler dashboard for traces, metrics, and performance insights

  2. Learn advanced features: See our Advanced Usage Guide for complex multi-agent scenarios

  3. Review the SDK reference: Check the Fiddler LangGraph SDK Reference for complete documentation

  4. Optimize for production: Review configuration options for high-volume applications
