Fiddler LangGraph SDK

Instrument LangGraph agents and custom AI applications with Fiddler's native SDK

PyPI

Instrument your LangGraph agent applications and custom AI workflows with OpenTelemetry-based tracing for comprehensive agentic observability. The Fiddler LangGraph SDK provides three instrumentation approaches — auto-instrumentation for LangGraph workflows, decorator-based tracing for custom functions, and manual span creation for fine-grained control — capturing every step from thought to action to execution.

What you'll need

  • Fiddler account (cloud or on-premises)

  • Python 3.10, 3.11, 3.12, or 3.13

  • LangGraph or LangChain application

  • Fiddler API key and application ID

Quick start

Get monitoring in 3 steps:

```shell
# Step 1: Install
pip install fiddler-langgraph
```

```python
# Step 2: Initialize the Fiddler client
from fiddler_langgraph import FiddlerClient
from fiddler_langgraph.tracing.instrumentation import LangGraphInstrumentor

fdl_client = FiddlerClient(
    api_key='your-api-key',
    application_id='your-app-id',  # Must be valid UUID4
    url='https://your-instance.fiddler.ai'
)

# Step 3: Instrument your application
instrumentor = LangGraphInstrumentor(fdl_client)
instrumentor.instrument()

# Your existing LangGraph code runs normally
# Traces will automatically be sent to Fiddler
```

That's it! Your agent traces are now flowing to Fiddler.


This Quick Start uses auto-instrumentation for LangGraph applications. For custom functions or fine-grained control, see Instrumentation Methods below.

What gets monitored

The LangGraph SDK automatically captures:

Hierarchical tracing

  • Application Level - Overall system performance and health

  • Session Level - User interaction and conversation flows

  • Agent Level - Individual agent behavior and decisions

  • Span Level - Tool calls, LLM requests, state transitions

Agent lifecycle stages

Every agent operation is tracked through five observable stages:

  1. Thought - Data ingestion, context retrieval, information interpretation

  2. Action - Planning processes, tool selection, decision-making

  3. Execution - Task performance, API calls, external integrations

  4. Reflection - Self-evaluation, learning signals, adaptation

  5. Alignment - Trust validation, safety checks, policy enforcement

Captured data

  • Agent state transitions and decision points

  • Tool invocations with inputs and outputs

  • LLM API calls with prompts and responses

  • Execution times and latency metrics

  • Error traces and exception handling

  • Custom metadata and tags

Application setup

Before instrumenting your application, you must create an application in Fiddler and obtain your Application ID:

1. Create your application in Fiddler

Log in to your Fiddler instance and navigate to GenAI Apps, then select Add Application.

GenAI applications list page with add application modal

2. Copy your Application ID

After creating your application, copy the Application ID from the application details page. This must be a valid UUID4 format (for example, 550e8400-e29b-41d4-a716-446655440000). You'll need this for initialization.

GenAI applications list page showing copy app ID

3. Get your access token

Go to Settings > Credentials and copy your access token. You'll need this for initialization.

Fiddler Settings- Credentials tab showing admin's access token

Detailed setup

Installation

Framework Compatibility:

  • LangGraph: >= 0.3.28 and <= 1.1.0 OR LangChain: >= 0.3.28 and <= 1.1.0

  • Python: 3.10, 3.11, 3.12, or 3.13

  • OpenTelemetry: API and SDK >= 1.19.0 and <= 1.39.1 (installed automatically)

Configuration

Using environment variables

You can use environment variables instead of hardcoding credentials:

Environment Variables Reference:

| Variable | Description | Example |
| --- | --- | --- |
| FIDDLER_API_KEY | Your Fiddler API key | fid_... |
| FIDDLER_APPLICATION_ID | Your application UUID4 | 550e8400-e29b-41d4-a716-446655440000 |
| FIDDLER_URL | Your Fiddler instance URL | https://your-instance.fiddler.ai |
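One way to wire these up is to read the variables explicitly when constructing the client (a sketch; your SDK version may also pick them up automatically):

```python
import os

from fiddler_langgraph import FiddlerClient

# Read credentials from the environment instead of hardcoding them.
fdl_client = FiddlerClient(
    api_key=os.environ["FIDDLER_API_KEY"],
    application_id=os.environ["FIDDLER_APPLICATION_ID"],
    url=os.environ["FIDDLER_URL"],
)
```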

Instrumentation methods

The Fiddler LangGraph SDK provides three instrumentation approaches. Choose the one that fits your application:

| Approach | Best For | Key API |
| --- | --- | --- |
| Auto-instrumentation | LangGraph and LangChain applications | LangGraphInstrumentor |
| Decorator-based | Custom Python functions, mixed workflows | @trace(), get_current_span() |
| Manual | Fine-grained span lifecycle control | start_as_current_span(), start_span() |


You can combine all three approaches in the same application. For example, use auto-instrumentation for your LangGraph graph and decorators for custom helper functions that the graph calls.

Auto-Instrumentation

Auto-instrumentation captures LangGraph and LangChain workflows automatically. Initialize the instrumentor once, and all graph invocations produce traces with no additional code changes.

When to use: Your application uses LangGraph StateGraph or LangChain runnables and you want comprehensive tracing with zero instrumentation code.

See the Quick Start section above for a complete walkthrough, or the Advanced Usage section for context enrichment and production configuration.

Decorator-based instrumentation

Use the @trace() decorator to instrument individual Python functions. This is the recommended approach for custom functions that are not part of a LangGraph graph, such as standalone LLM calls, tool implementations, or orchestration logic.

When to use: You have custom Python functions — LLM wrappers, tool implementations, or orchestration logic — that you want to trace with full control over span metadata.

@trace() Arguments

Argument
Type
Default
Description

name

str

Function name

Custom span name

as_type

str

"span"

Span type: "span", "generation", "chain", or "tool"

capture_input

bool

True

Automatically capture function arguments as span input

capture_output

bool

True

Automatically capture return value as span output

model

str

None

LLM model name (sets gen_ai.request.model)

system

str

None

LLM provider such as "openai" or "anthropic" (sets gen_ai.system)

user_id

str

None

User identifier

version

str

None

Service version string

client

FiddlerClient

None

Explicit client instance (defaults to the global singleton)
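A sketch of a decorated LLM wrapper combining several of these arguments (the import path for trace and the function body are assumptions; check the API reference for your SDK version):

```python
from fiddler_langgraph.tracing import trace  # import path may vary by SDK version

@trace(name="summarize", as_type="generation", model="gpt-4o", system="openai")
def summarize(text: str) -> str:
    # Arguments are captured as span input and the return value as span
    # output, because capture_input and capture_output default to True.
    return f"summary of: {text}"  # stand-in for a real LLM call
```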

Accessing the current span

Inside a decorated function, call get_current_span() to access the active span and add metadata:

Pass as_type to get a type-specific wrapper with semantic helper methods. See Span Types and Helper Methods for the full list.
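For example, a generation-typed function might record its prompt and completion like this (a sketch; the import path is an assumption):

```python
from fiddler_langgraph.tracing import get_current_span, trace  # import path may vary

@trace(as_type="generation")
def call_llm(prompt: str) -> str:
    span = get_current_span(as_type="generation")
    if span:  # None when no Fiddler span is active (e.g. in unit tests)
        span.set_user_prompt(prompt)
    completion = "..."  # your LLM call here
    if span:
        span.set_completion(completion)
    return completion
```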


Always check if span: before calling helper methods. get_current_span() returns None if no Fiddler span is active — for example, during unit tests or when the client is not initialized.

Async support

The @trace() decorator works with both sync and async functions. No additional configuration is needed:
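A minimal async sketch (import path is an assumption):

```python
from fiddler_langgraph.tracing import trace  # import path may vary by SDK version

@trace(name="fetch-context")
async def fetch_context(query: str) -> str:
    # The span opens when the coroutine starts and closes when it returns,
    # covering any awaits in between.
    return f"context for {query}"  # stand-in for an async retrieval call
```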

Automatic parent-child relationships

Nested decorated functions create proper span hierarchies automatically. The outer function becomes the parent span, and inner calls become child spans:
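A sketch of the nesting behavior (import path is an assumption; the bodies are stand-ins):

```python
from fiddler_langgraph.tracing import trace  # import path may vary by SDK version

@trace(name="retrieve")
def retrieve(question: str) -> str:
    return f"context for: {question}"  # stand-in for a retriever call

@trace(name="answer")
def answer(context: str) -> str:
    return f"answer based on: {context}"  # stand-in for an LLM call

@trace(name="pipeline")
def pipeline(question: str) -> str:
    # pipeline becomes the parent span; retrieve and answer become children
    return answer(retrieve(question))
```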

Manual instrumentation

Create spans manually using context managers or explicit start/end calls. This gives you full control over span lifecycle — useful for dynamic span creation, conditional instrumentation, or code where decorator syntax does not apply.

Context manager (automatic lifecycle)

Use start_as_current_span() to create a span that ends automatically when the block exits:
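A sketch, assuming start_as_current_span is importable from the tracing module and accepts a span name plus as_type (confirm the exact signature in the API reference):

```python
from fiddler_langgraph.tracing import start_as_current_span  # path may vary

with start_as_current_span("lookup-orders", as_type="tool") as span:
    span.set_tool_name("lookup_orders")
    span.set_tool_input({"customer_id": "c-123"})
    result = {"orders": []}  # your tool logic here
    span.set_tool_output(result)
# The span ends automatically when the block exits, even on exceptions.
```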

Explicit span control

Use start_span() when you need to manage span lifecycle manually — for example, in callback-driven or event-based code:
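A callback-driven sketch, assuming start_span returns a span you must end yourself (the import path and end() call follow OpenTelemetry conventions but are assumptions here):

```python
from fiddler_langgraph.tracing import start_span  # path may vary by SDK version

def on_request(event: dict):
    # The span stays open across callbacks until you end it explicitly.
    span = start_span("handle-event")
    span.set_input(event)
    return span

def on_response(span, result: dict):
    span.set_output(result)
    span.end()  # you are responsible for ending manually created spans
```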


When to use: You need explicit control over when spans start and end — for example, in callback-driven code, conditional spans, or complex control flow where decorators do not fit.

Span types and helper methods

Both decorator and manual instrumentation support four span types. Set the as_type parameter to select a type, which determines which semantic helper methods are available on the span wrapper.

| Type | Wrapper Class | Use For |
| --- | --- | --- |
| "span" | FiddlerSpan | Generic operations, orchestration |
| "generation" | FiddlerGeneration | LLM calls (prompts, completions, token usage) |
| "chain" | FiddlerChain | Multi-step workflows, processing chains |
| "tool" | FiddlerTool | Tool or function calls (name, input, output) |

Common methods (all types)

| Method | Description |
| --- | --- |
| set_input(data) | Set input data (auto-serializes dicts and lists to JSON) |
| set_output(data) | Set output data (auto-serializes dicts and lists to JSON) |
| set_attribute(key, value) | Set a custom span attribute |
| set_agent_name(name) | Set the agent name (gen_ai.agent.name) |
| set_agent_id(id) | Set the agent ID (gen_ai.agent.id) |
| set_conversation_id(id) | Set the conversation ID (gen_ai.conversation.id) |
| record_exception(exception) | Record an error on the span |

Generation methods (FiddlerGeneration)

| Method | Sets Attribute |
| --- | --- |
| set_model(name) | gen_ai.request.model |
| set_system(provider) | gen_ai.system |
| set_system_prompt(text) | gen_ai.llm.input.system |
| set_user_prompt(text) | gen_ai.llm.input.user |
| set_completion(text) | gen_ai.llm.output |
| set_usage(input_tokens, output_tokens, total_tokens) | gen_ai.usage.* |
| set_context(text) | gen_ai.llm.context |
| set_messages(messages) | gen_ai.input.messages |
| set_output_messages(messages) | gen_ai.output.messages |
| set_tool_definitions(definitions) | gen_ai.tool.definitions |

Tool methods (FiddlerTool)

| Method | Sets Attribute |
| --- | --- |
| set_tool_name(name) | gen_ai.tool.name |
| set_tool_input(data) | gen_ai.tool.input |
| set_tool_output(data) | gen_ai.tool.output |
| set_tool_definitions(definitions) | gen_ai.tool.definitions |

For complete API documentation, see the LangGraph SDK API Reference.

Context isolation

The Fiddler LangGraph SDK maintains its own isolated OpenTelemetry context. Fiddler traces do not interfere with other OpenTelemetry tracers that may be active in your application, and vice versa.

Each FiddlerClient creates a private Context instance. All span creation, parent-child linking, and context propagation happen within this isolated context. When you use @trace(), start_as_current_span(), or start_span(), the SDK manages context attachment and detachment automatically.

You can verify whether a span belongs to Fiddler using is_fiddler_span():
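A sketch of the check (import paths are assumptions; is_fiddler_span is named in this guide, but confirm where it is exported in your SDK version):

```python
from fiddler_langgraph.tracing import get_current_span, is_fiddler_span  # paths may vary

span = get_current_span()
if span is not None and is_fiddler_span(span):
    # Safe to use Fiddler helper methods on this span.
    span.set_attribute("verified", "true")
```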

This isolation matters if your application uses other OpenTelemetry-based observability tools (such as Datadog, Honeycomb, or custom OTel exporters). Fiddler traces remain completely separate, so you can run multiple tracing systems side by side without conflicts.

Global client pattern

The Fiddler SDK uses a singleton pattern for FiddlerClient. The first client created in your process is automatically registered as the global default. Retrieve it anywhere using get_client():
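A sketch of the pattern (the export location of get_client is an assumption):

```python
from fiddler_langgraph import FiddlerClient, get_client  # get_client path may vary

# Creating the first client registers it as the global default.
FiddlerClient(
    api_key='your-api-key',
    application_id='your-app-id',
    url='https://your-instance.fiddler.ai',
)

# Anywhere else in the process, retrieve the same instance.
client = get_client()
```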

The @trace() decorator uses get_client() internally, so you do not need to pass a client to each decorated function. As long as a FiddlerClient has been created somewhere in your application, all @trace() decorators and get_current_span() calls work automatically.


There is no set_current_client() function. The singleton is set automatically during FiddlerClient initialization. If you create multiple clients, only the first one becomes the global default. Pass an explicit client argument to @trace() to use a different client.

Advanced usage

Adding context and metadata

Enrich traces with custom context and conversation tracking:
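For example, tagging the active span with conversation and agent identity using the documented helper methods (the import path is an assumption):

```python
from fiddler_langgraph.tracing import get_current_span  # path may vary by SDK version

span = get_current_span()
if span:  # None when no Fiddler span is active
    span.set_conversation_id("conv-2024-0042")  # group spans into one conversation
    span.set_agent_name("travel-planner")
    span.set_agent_id("agent-7")
```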

Custom span and session attributes

Add custom attributes to individual spans or entire sessions:
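A span-level sketch using set_attribute (see the API reference for the session-level equivalent; the import path is an assumption):

```python
from fiddler_langgraph.tracing import get_current_span  # path may vary by SDK version

span = get_current_span()
if span:
    span.set_attribute("customer.tier", "enterprise")
    span.set_attribute("experiment.variant", "b")
```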

Sampling configuration

Control trace sampling for high-volume applications:
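A sketch using OpenTelemetry's ratio-based sampler; passing it as a sampler argument to FiddlerClient is an assumption, so confirm the parameter name in the SDK reference:

```python
from opentelemetry.sdk.trace.sampling import TraceIdRatioBased

from fiddler_langgraph import FiddlerClient

fdl_client = FiddlerClient(
    api_key='your-api-key',
    application_id='your-app-id',
    url='https://your-instance.fiddler.ai',
    sampler=TraceIdRatioBased(0.05),  # keep roughly 5% of traces
)
```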

For production deployments, consider these sampling strategies:

  • High-volume applications: Sample 5-10% (TraceIdRatioBased(0.05))

  • Development/testing: Sample 100% (default - no sampler specified)

  • Cost optimization: Sample 1-5% (TraceIdRatioBased(0.01))

Production configuration

For high-volume production applications, configure span limits and batch processing:

Flush and shutdown handling

The SDK uses OpenTelemetry's batch span processor, which buffers spans in memory and exports them on a schedule. To avoid losing buffered spans when your process exits, use explicit flush and shutdown:

  • Process exit: The SDK registers an atexit handler that flushes and shuts down the tracer when the process exits. For short scripts or environments where atexit may not run (e.g. SIGKILL, forked processes), call force_flush() and shutdown() explicitly—for example in a try/finally or signal handler.

  • Long-running servers (e.g. FastAPI, uvicorn): On graceful shutdown (SIGTERM), call the Fiddler client's shutdown so pending spans are exported before the process exits. From async code use ashutdown() (or aflush() then ashutdown()) so the event loop is not blocked; the sync force_flush() and shutdown() can block for up to the flush timeout (default 30 seconds).

Sync (scripts or signal handler):
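A minimal sketch for a script, assuming fdl_client is the client created at startup and run_agent is a placeholder for your entry point:

```python
try:
    run_agent()  # placeholder for your application code
finally:
    # force_flush() can block for up to the flush timeout (default 30 seconds)
    fdl_client.force_flush()
    fdl_client.shutdown()
```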

Async (e.g. FastAPI/uvicorn lifespan):
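A sketch of a FastAPI lifespan hook, assuming fdl_client was created at startup:

```python
from contextlib import asynccontextmanager

from fastapi import FastAPI

@asynccontextmanager
async def lifespan(app: FastAPI):
    yield  # the application serves requests here
    # On graceful shutdown, export buffered spans without blocking the loop.
    await fdl_client.aflush()
    await fdl_client.ashutdown()

app = FastAPI(lifespan=lifespan)
```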

Context manager (scripts): Use with FiddlerClient(...) as client: so shutdown() is called automatically when the block exits.

Example applications

Multi-agent travel planner

View the Advanced Observability Notebook | Custom Instrumentation Notebook

Customer support agent with tools

Viewing your data

After running your instrumented application:

  1. Navigate to Fiddler UI - https://your-instance.fiddler.ai

  2. Select "GenAI Apps" - View your application

  3. Inspect traces - Drill down from application → session → agent → span

  4. Analyze patterns - Use analytics to identify bottlenecks and errors

Key metrics tracked

  • Latency: P50, P95, P99 response times across agents

  • Error Rate: Percentage of failed agent executions

  • Token Usage: LLM token consumption per agent/session

  • Tool Calls: Frequency and success rate of tool invocations

  • State Transitions: Agent decision path analysis

Troubleshooting

Application not showing as "Active"

Check your configuration:

  • Ensure your application executes instrumented code

  • Verify your Fiddler access token and application ID are correct

  • Check network connectivity to your Fiddler instance

Enable console tracer for debugging:


Network connectivity issues

Verify connectivity to your Fiddler instance:
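A quick reachability check from the machine running your application (any HTTP status code, even 401 or 404, confirms the host is reachable):

```shell
curl -sS -o /dev/null -w "%{http_code}\n" https://your-instance.fiddler.ai
```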

Check firewall settings:

  • Ensure HTTPS traffic on port 443 is allowed

  • Verify your Fiddler instance URL is correct

Import errors

Problem: ModuleNotFoundError: No module named 'fiddler_langgraph'

Solution: Ensure you've installed the correct package:
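```shell
# The package name uses a hyphen; the import name uses an underscore.
pip install fiddler-langgraph
```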

Problem: ImportError: cannot import name 'LangGraphInstrumentor'

Solution: Ensure you have the correct import path:
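The instrumentor lives in the tracing.instrumentation submodule, as shown in the Quick Start:

```python
from fiddler_langgraph.tracing.instrumentation import LangGraphInstrumentor
```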

Version compatibility issues

Verify your versions match requirements:
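```shell
# Inspect installed versions of the SDK and its companions
pip show fiddler-langgraph langgraph langchain opentelemetry-sdk
python --version
```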

If you have version conflicts:
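Reinstall within the supported ranges listed under Framework Compatibility, for example:

```shell
pip install --upgrade "langgraph>=0.3.28,<=1.1.0" fiddler-langgraph
```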

Invalid application ID

Problem: ValueError: application_id must be a valid UUID4

Solution: Ensure your Application ID is in proper UUID4 format:
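A stdlib check you can run on the value before initializing the client:

```python
import uuid

def is_valid_uuid4(value: str) -> bool:
    """Return True if value parses as a version-4 UUID."""
    try:
        return uuid.UUID(value).version == 4
    except ValueError:
        return False

print(is_valid_uuid4("550e8400-e29b-41d4-a716-446655440000"))  # True
print(is_valid_uuid4("not-a-uuid"))                            # False
```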

Copy the Application ID directly from the Fiddler dashboard to avoid formatting issues.

Agent shows as "UNKNOWN_AGENT"

For LangChain applications, ensure you're setting the agent name in the config parameter:
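A hedged sketch of passing an agent name through LangChain's invoke config; the exact metadata key the Fiddler SDK reads is an assumption, so confirm it against the SDK reference:

```python
# chain is your LangChain runnable; the "agent_name" key is hypothetical.
result = chain.invoke(
    {"input": "Plan a trip to Kyoto"},
    config={"metadata": {"agent_name": "travel-planner"}},
)
```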


Note: LangGraph applications automatically extract agent names. This manual configuration is only needed for LangChain applications.

OpenTelemetry compatibility

The LangGraph SDK is built on OpenTelemetry Protocol (OTLP). The SDK uses standard OpenTelemetry components, allowing you to:

  • Integrate with existing observability infrastructure

  • Export traces to multiple backends (with custom configuration)

  • Use custom OTEL collectors and processors

All telemetry data follows OpenTelemetry semantic conventions for AI/ML workloads.

Migration guides

From LangSmith

From manual tracing

If you've built custom tracing, migration is straightforward:

API reference

Full SDK documentation:

Next steps

Now that your application is instrumented:

  1. Explore the data: Check your Fiddler dashboard for traces, metrics, and performance insights

  2. Learn advanced features: See our Advanced Usage Guide for complex multi-agent scenarios

  3. Review the SDK reference: Check the Fiddler LangGraph SDK Reference for complete documentation

  4. Optimize for production: Review configuration options for high-volume applications
