LangGraph SDK Advanced

What You'll Learn

This interactive notebook demonstrates advanced monitoring patterns for production LangGraph applications through a realistic travel planning system with multiple specialized agents.

Key Topics Covered:

  • Multi-agent workflow monitoring and orchestration

  • Conversation tracking across complex interactions

  • Production configuration for high-volume scenarios

  • Advanced error handling and recovery patterns

  • Business intelligence integration and analytics

Interactive Tutorial

The notebook walks through building a comprehensive travel planning application featuring hotel search, weather analysis, itinerary planning, and supervisor agents working together.

Open the Advanced Observability Notebook in Google Colab →

Or download the notebook directly from GitHub →

Production Configuration Best Practices

Before deploying LangGraph applications to production, configure the SDK for your specific workload characteristics.

High-Volume Applications

Optimize for applications processing thousands of traces per minute:
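The SDK exports through the standard OpenTelemetry batch span processor, so the usual OTEL_BSP_* environment variables apply (OTEL_BSP_MAX_QUEUE_SIZE is referenced again under Performance Considerations below). A minimal sketch, set before the SDK initializes; the specific values are starting points to tune, not recommendations:

```python
import os

# Larger queue and batches absorb traffic bursts without dropping spans.
os.environ["OTEL_BSP_MAX_QUEUE_SIZE"] = "4096"
os.environ["OTEL_BSP_MAX_EXPORT_BATCH_SIZE"] = "1024"
os.environ["OTEL_BSP_SCHEDULE_DELAY"] = "10000"  # export every 10 s

# Head-based sampling caps export volume at the source.
os.environ["OTEL_TRACES_SAMPLER"] = "traceidratio"
os.environ["OTEL_TRACES_SAMPLER_ARG"] = "0.25"  # keep ~25% of traces
```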

Low-Latency Requirements

Optimize for applications requiring sub-second trace export:
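Under the same assumption that the standard OTEL_BSP_* variables are honored, shrink the schedule delay and batch size so spans leave the process quickly:

```python
import os

# Flush small batches frequently so spans reach Fiddler in under a second.
os.environ["OTEL_BSP_SCHEDULE_DELAY"] = "500"        # ms between exports
os.environ["OTEL_BSP_MAX_EXPORT_BATCH_SIZE"] = "64"  # smaller batches, faster turnaround
os.environ["OTEL_BSP_EXPORT_TIMEOUT"] = "2000"       # give up on slow exports after 2 s
```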

Memory-Constrained Environments

Configure conservative limits for edge deployments or containerized environments:
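A sketch pairing a small span queue with the span_limits parameter mentioned under Limitations below; SpanLimits is the stock OpenTelemetry class, but exactly how it is passed to the Fiddler SDK's initializer depends on your SDK version:

```python
import os
from opentelemetry.sdk.trace import SpanLimits

# A small queue bounds the memory footprint (this page cites ~1-2MB
# for the default 100-span queue).
os.environ["OTEL_BSP_MAX_QUEUE_SIZE"] = "64"
os.environ["OTEL_BSP_MAX_EXPORT_BATCH_SIZE"] = "32"

# Cap per-span data; hand this to the SDK's span_limits parameter at init.
limits = SpanLimits(
    max_attributes=64,
    max_events=32,
    max_attribute_length=2048,  # truncates very large prompt/response values
)
```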

Development vs Production Configurations

Development Configuration:
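The tuning table at the end of this page suggests disabling sampling and enabling a console tracer during development. One way to do that with the stock OpenTelemetry console exporter; the Fiddler SDK may expose its own flag for this, so treat the processor wiring as an assumption:

```python
import os
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Record every trace while developing.
os.environ["OTEL_TRACES_SAMPLER"] = "always_on"

# Mirror spans to stdout alongside the exporter the SDK registers.
provider = trace.get_tracer_provider()
if isinstance(provider, TracerProvider):
    provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
```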

Production Configuration:
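A production baseline, by contrast, typically pairs sampling with gzip compression; a sketch using standard OTLP environment variables:

```python
import os

# Keep a representative fraction of traces and compress what is exported.
os.environ["OTEL_TRACES_SAMPLER"] = "traceidratio"
os.environ["OTEL_TRACES_SAMPLER_ARG"] = "0.1"          # keep ~10% of traces
os.environ["OTEL_EXPORTER_OTLP_COMPRESSION"] = "gzip"  # ~70-80% bandwidth savings
```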

Best Practices for Context and Conversation IDs

Structure your identifiers for maximum analytical value:
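The filter examples under Querying and Filtering below assume IDs of the form user_channel_timestamp. A sketch of that scheme; set_conversation_id() and set_llm_context() are the calls named in the attribute table below, but the import path and exact signatures here are assumptions to verify against your SDK version:

```python
from datetime import datetime, timezone

# Hypothetical import path - confirm against your installed SDK.
from fiddler_langgraph import set_conversation_id, set_llm_context

user_id = "user-123"
channel = "support"
started = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S")

# who_what_when: filterable and sortable in the Fiddler UI,
# e.g. "user-123_support_2025-10-17T09:30:00".
set_conversation_id(f"{user_id}_{channel}_{started}")

# Business context attached to LLM spans (gen_ai.llm.context).
set_llm_context("travel-planning, premium-tier")
```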

Prerequisites

  • Fiddler account with API credentials

  • OpenAI API key for example interactions

  • Basic familiarity with LangGraph concepts

Time Required

  • Complete tutorial: 45-60 minutes

  • Quick overview: 15-20 minutes

Telemetry Data Reference

Understanding the data captured by the Fiddler LangGraph SDK.

Span Attributes

The SDK automatically captures these OpenTelemetry attributes:

| Attribute | Type | Description |
| --- | --- | --- |
| gen_ai.agent.name | str | Name of the AI agent (auto-extracted from LangGraph, configurable for LangChain) |
| gen_ai.agent.id | str | Unique identifier (format: trace_id:agent_name) |
| gen_ai.conversation.id | str | Session identifier set via set_conversation_id() |
| fiddler.span.type | str | Span classification: chain, tool, llm, or other |
| gen_ai.llm.input.system | str | System prompt content |
| gen_ai.llm.input.user | str | User input/prompt |
| gen_ai.llm.output | str | Model response text |
| gen_ai.llm.context | str | Custom context set via set_llm_context() |
| gen_ai.llm.model | str | Model identifier (e.g., "gpt-4o-mini") |
| gen_ai.llm.token_count | int | Token usage metrics |
| gen_ai.tool.name | str | Tool function name |
| gen_ai.tool.input | str | Tool input parameters (JSON) |
| gen_ai.tool.output | str | Tool execution results (JSON) |
| duration_ms | float | Span duration in milliseconds |
| fiddler.error.message | str | Error message (if span failed) |
| fiddler.error.type | str | Error type classification |
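For orientation, this is roughly what a captured tool span's attributes look like once exported; the keys come from the table above, while the values are purely illustrative:

```python
# Illustrative only - attribute keys from the table above, made-up values.
example_tool_span_attributes = {
    "gen_ai.agent.name": "hotel_search_agent",
    "gen_ai.agent.id": "4bf92f35...:hotel_search_agent",  # trace_id:agent_name
    "gen_ai.conversation.id": "user-123_support_2025-10-17T09:30:00",
    "fiddler.span.type": "tool",
    "gen_ai.tool.name": "search_hotels",
    "gen_ai.tool.input": '{"city": "Lisbon", "nights": 3}',
    "gen_ai.tool.output": '{"results": 12}',
    "duration_ms": 184.2,
}
```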

Querying and Filtering in Fiddler

Use these attributes in the Fiddler UI to:

  • Filter by agent: gen_ai.agent.name = "hotel_search_agent"

  • Find conversations: gen_ai.conversation.id = "user-123_support_2025-10-17..."

  • Analyze by model: gen_ai.llm.model = "gpt-4o"

  • Track errors: fiddler.error.type EXISTS

Who Should Use This

  • AI engineers building production LangGraph applications

  • DevOps teams monitoring agentic systems

  • Technical leaders evaluating observability strategies

Limitations and Considerations

Current Limitations

  • Framework Support: Only LangGraph is fully supported with automatic agent name extraction

    • LangChain applications require manual agent name configuration

    • Other frameworks must use the Client API directly

  • Protocol Support: Currently uses HTTP-based OTLP

    • gRPC support planned for future releases

  • Attribute Limits: Default OpenTelemetry limits apply

    • Configurable via span_limits parameter

    • Very large attribute values may be truncated

Performance Considerations

Overhead: Typical performance impact is < 5% with default settings

  • Use sampling to reduce overhead in high-volume scenarios

  • Adjust batch processing delays based on latency requirements

Memory: Span queue size affects the memory footprint

  • Default queue (100 spans) uses ~1-2MB

  • Increase OTEL_BSP_MAX_QUEUE_SIZE for high throughput

  • Decrease for memory-constrained environments

Network: Compression significantly reduces bandwidth usage

  • Gzip compression: ~70-80% reduction

  • Use Compression.NoCompression only for debugging (see the sketch below)
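The Compression enum above comes from the OpenTelemetry OTLP HTTP exporter package; a sketch of setting it explicitly (how the exporter instance is handed to the Fiddler SDK depends on your setup):

```python
from opentelemetry.exporter.otlp.proto.http import Compression
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Gzip is the sensible default; NoCompression only while debugging payloads.
exporter = OTLPSpanExporter(
    endpoint="https://your-fiddler-endpoint/v1/traces",  # placeholder endpoint
    compression=Compression.Gzip,
)
```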

Production Deployment Checklist

Before deploying to production:
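  • Size the span queue and batch settings for your peak traffic (see High-Volume Applications above)

  • Choose a sampling rate that balances visibility against overhead and cost

  • Enable gzip compression on the OTLP exporter

  • Set span_limits to match your prompt and response sizes

  • Adopt a structured scheme for conversation and context IDs

  • Confirm error spans (fiddler.error.type) surface in the Fiddler UI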

When to Tune Each Setting

| Scenario | Configuration |
| --- | --- |
| High-volume production | Increase queue size and batch size; enable sampling |
| Low-latency requirements | Decrease schedule delay; use smaller batches |
| Memory constraints | Decrease span limits, queue size, and batch size |
| Development/debugging | Disable sampling; enable the console tracer |
| Cost optimization | Lower the sampling rate (keep fewer traces); enable compression |

Next Steps

After completing the tutorial: