Getting Started with Agentic Monitoring
Modern GenAI applications built with LangGraph create complex, multi-step workflows that can be difficult to understand and debug. Fiddler, the pioneer in AI Observability and Security, provides the Fiddler LangGraph SDK to give you complete visibility into these AI agent behaviors. With enterprise-grade safeguards and real-time monitoring, you get the insights needed to confidently deploy reliable, high-performance GenAI applications in production.
Private Preview Notice: The Fiddler LangGraph SDK is currently in private preview. This means:
API interfaces may change before general availability
Some features are still under active development
We welcome your feedback to help shape the final product
Please refer to our product maturity definitions for more details.
What Is Agentic Monitoring?
Agentic monitoring observes and analyzes AI agent behavior in real time. Unlike traditional application monitoring that focuses on system metrics, agentic monitoring captures the unique characteristics of AI workflows:
Agent decision-making processes: How agents choose between different tools and actions
Multi-step reasoning chains: Complex workflows from initial prompt to final response
LLM interactions: Model inputs, outputs, and performance across different calls
Tool usage patterns: How agents utilize external functions and APIs
Error propagation: How failures cascade through agent workflows

Why Agentic Monitoring Matters
GenAI applications present unique observability challenges that traditional monitoring approaches can't address:
Complexity and Opacity
AI agents make autonomous decisions that are difficult to predict or understand. Without proper monitoring, you can't debug agent behavior in production or understand why an agent made specific choices.
Dynamic Workflows
Unlike traditional applications with fixed execution paths, AI agents create dynamic workflows based on context and available tools. You need to trace the actual execution path for each interaction.
Performance Variability
LLM response times and quality vary significantly based on model load, prompt complexity, and external factors. Monitoring helps you identify performance patterns and optimize accordingly.
Cost Management
GenAI applications consume tokens and compute resources with each LLM call. Understanding usage patterns helps you optimize costs and prevent unexpected billing spikes.
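As a concrete illustration of how token-level telemetry supports cost management, here is a minimal pure-Python sketch that aggregates per-call token counts into a cost estimate. The model name and per-1K-token prices are hypothetical placeholders, not actual provider pricing:

```python
# Hypothetical per-1K-token prices; real prices depend on your model and provider.
PRICES = {"example-model": {"input": 0.005, "output": 0.015}}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of a single LLM call from its token counts."""
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

# Aggregate cost across the LLM calls captured for one agent workflow.
calls = [("example-model", 1200, 300), ("example-model", 800, 150)]
total = sum(estimate_cost(m, i, o) for m, i, o in calls)
print(round(total, 5))  # 0.01675
```

Summing estimates like this over the calls in a trace is how usage patterns translate into per-workflow cost figures.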
Quality Assurance
AI outputs vary in quality and accuracy. Monitoring helps you identify when agents produce suboptimal results and understand the conditions that lead to better performance.
How Fiddler LangGraph SDK Enables Agentic Monitoring
The Fiddler LangGraph SDK transforms your existing LangGraph and LangChain applications into fully observable systems with minimal code changes:
```python
import os

from fiddler_langgraph import FiddlerClient
from fiddler_langgraph.tracing.instrumentation import LangGraphInstrumentor

# Initialize the FiddlerClient with environment variables (recommended)
fdl_client = FiddlerClient(
    api_key=os.getenv("FIDDLER_API_KEY"),                # Your access token
    application_id=os.getenv("FIDDLER_APPLICATION_ID"),  # UUID4 from Step 1
    url=os.getenv("FIDDLER_URL"),                        # https://your-instance.fiddler.ai
)

# Instrument your LangGraph application so traces are collected automatically
instrumentor = LangGraphInstrumentor(client=fdl_client)
instrumentor.instrument()
```
Built on OpenTelemetry Standards
The SDK leverages OpenTelemetry (OTel), the industry standard for observability, ensuring compatibility with existing monitoring infrastructure and future-proofing your investment.
Automatic Instrumentation
Once configured, the SDK automatically collects telemetry from your agent workflows without requiring changes to your existing code:
Distributed traces: Complete execution flow
Span attributes: Inputs, outputs, and metadata
Performance metrics: Timing and resource usage
Error tracking: Detailed context and stack traces
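To make the idea of automatic instrumentation concrete, here is a simplified pure-Python sketch of the pattern: a wrapper records a span-like entry (name, duration, status) for every call, without changing the code at call sites. This is an illustration of the technique only; the SDK's actual mechanism hooks LangGraph callbacks and emits OpenTelemetry spans:

```python
import functools
import time

spans = []  # stand-in for the SDK's trace pipeline

def instrument(fn):
    """Wrap a function so each call emits a span-like record, without changing callers."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        status = "ok"
        try:
            return fn(*args, **kwargs)
        except Exception:
            status = "error"
            raise
        finally:
            spans.append({
                "name": fn.__name__,
                "duration_s": time.perf_counter() - start,
                "status": status,
            })
    return wrapper

@instrument
def call_tool(query: str) -> str:
    # Hypothetical tool function, standing in for any agent step.
    return f"result for {query}"

call_tool("weather in Paris")
print(spans[0]["name"], spans[0]["status"])  # call_tool ok
```

Because the wrapping happens once at setup time, every subsequent call is captured "behind the scenes", which is the same property the SDK provides for LangGraph workflows.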
Near Real-Time Streaming
Telemetry data streams in near real time to your Fiddler instance, enabling immediate visibility into agent behavior and rapid response to issues.
Architecture Overview
The Fiddler LangGraph SDK integrates seamlessly into your application architecture.
Key Components
Callback Handler: Intercepts LangGraph callbacks to capture execution events
Trace Exporter: Sends telemetry data to Fiddler using OTLP protocol
FiddlerClient: Manages configuration, authentication, and connection to Fiddler
OpenTelemetry Integration: Provides industry-standard distributed tracing
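The way these components fit together can be sketched in a few lines of pure Python: a callback handler turns execution events into spans and hands them to an exporter in small batches. All class and attribute names here are illustrative stand-ins, not the SDK's actual internals:

```python
from dataclasses import dataclass

@dataclass
class Span:
    name: str
    attributes: dict

class InMemoryExporter:
    """Stand-in for the trace exporter that ships spans to Fiddler over OTLP."""
    def __init__(self):
        self.exported = []

    def export(self, batch):
        self.exported.extend(batch)

class CallbackHandler:
    """Stand-in for the SDK's callback handler: converts execution events
    into spans and forwards them to the exporter in small batches."""
    def __init__(self, exporter, batch_size=2):
        self.exporter = exporter
        self.batch_size = batch_size
        self.buffer = []

    def on_event(self, name, **attributes):
        self.buffer.append(Span(name, attributes))
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.exporter.export(self.buffer)
            self.buffer = []

exporter = InMemoryExporter()
handler = CallbackHandler(exporter)
handler.on_event("llm.call", model="example-model")
handler.on_event("tool.call", tool="search")
print(len(exporter.exported))  # 2
```

Batching is also what makes near-real-time streaming efficient: spans are buffered briefly and shipped together rather than one network call per event.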
What Data Is Captured
The SDK automatically captures comprehensive data about your agent workflows:
Agent Execution Traces
Workflow structure: Complete hierarchy of agent steps and decisions
Timing information: Duration of each step and overall execution time
Agent identification: Unique identifiers for different agents in your system
LLM Interactions
Model configuration: Model name, temperature, and other parameters
Input prompts: System messages, user input, and conversation history
Model outputs: Generated responses, token usage, and completion metadata
Performance metrics: Response time, tokens consumed, and success rates
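A captured LLM interaction can be pictured as a structured record combining these fields, from which derived metrics (such as generation throughput) follow directly. The field names below are illustrative only; the actual attribute names on Fiddler/OTel spans may differ:

```python
# Illustrative shape of one captured LLM interaction (field names are hypothetical).
llm_call = {
    "model": "example-model",
    "temperature": 0.2,
    "input": [
        {"role": "system", "content": "You are a helpful agent."},
        {"role": "user", "content": "Summarize today's alerts."},
    ],
    "output": "3 alerts: two latency spikes and one failed tool call.",
    "tokens": {"prompt": 412, "completion": 57},
    "latency_s": 1.9,
}

# A simple derived performance metric: completion tokens generated per second.
tokens_per_second = llm_call["tokens"]["completion"] / llm_call["latency_s"]
print(round(tokens_per_second, 1))  # 30.0
```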
Tool and Function Calls
Tool identification: Names and types of tools used by agents
Input parameters: Arguments passed to functions and tools
Output results: Return values and success/failure status
Execution context: When and why tools were invoked
Error and Exception Handling
Exception details: Full error messages and stack traces
Context information: State of the agent when errors occurred
Recovery attempts: How agents handled and recovered from failures
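The error-capture pattern can be sketched in plain Python: wrap each agent step so that on failure the exception type, message, stack trace, and a snapshot of the agent's state are recorded before the error propagates. The record keys and helper names here are illustrative, not the SDK's actual schema:

```python
import traceback

error_spans = []  # stand-in for spans carrying error attributes

def run_step(step, agent_state):
    """Run one agent step; on failure, record the exception and the agent's state."""
    try:
        return step()
    except Exception as exc:
        error_spans.append({
            "error.type": type(exc).__name__,
            "error.message": str(exc),
            "error.stacktrace": traceback.format_exc(),
            "agent.state": dict(agent_state),  # snapshot at failure time
        })
        raise  # re-raise so the workflow's own error handling still runs

state = {"step": "fetch_weather", "retries": 0}
try:
    run_step(lambda: 1 / 0, state)
except ZeroDivisionError:
    pass

print(error_spans[0]["error.type"])  # ZeroDivisionError
```

Because the state snapshot is taken at the moment of failure, the resulting record answers both "what broke" and "what the agent was doing when it broke".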
Key Benefits
Agentic monitoring with the Fiddler LangGraph SDK provides immediate value across your development lifecycle, helping you move beyond experimentation to deploy production AI confidently:
Development and Learning
Quick setup: Start monitoring with just a few lines of code
Immediate insights: See agent behavior without complex configuration
Deep visibility: Understand decision-making processes beyond just inputs and outputs
Performance and Optimization
Hierarchical root cause analysis: Drill down from application-level issues to specific agent spans to reduce MTTI (Mean Time to Identify) and MTTR (Mean Time to Resolve)
Application-critical metrics: Monitor performance, costs, and safety through a unified dashboard
Quality improvement: Understand which patterns lead to better results
Production and Troubleshooting
Enterprise-grade monitoring: Track agent performance and success rates at Fortune 500 scale
End-to-end visibility: Complete observability into multi-agent interactions and coordination patterns
Actionable alerts: Get early warnings on performance issues and cross-agent problems
Data-driven decisions: Make informed optimizations based on comprehensive telemetry data
Security and Privacy Considerations
The Fiddler LangGraph SDK is designed with enterprise-grade security and privacy:
Enterprise compliance: SOC 2 Type 2 security and HIPAA compliance standards
Data encryption: All telemetry data is encrypted in transit using HTTPS/TLS
Access control: Role-based access control (RBAC) and SSO for enterprise user management
Personal access token: Access token-based authentication ensures only authorized access
Data control: You control what data is captured and sent to Fiddler
Deployment flexibility: Deploy in Fiddler cloud or your own cloud
Compliance: Built on industry-standard OpenTelemetry for compliance requirements
Ready to Get Started?
You're now ready to add enterprise-grade observability to your LangGraph applications:
Follow the Quick Start Guide: Get monitoring in under 10 minutes
Explore Advanced Features: Master production configurations
Get support: Contact us at [email protected] for assistance and feedback
Frequently Asked Questions
Q: How is this different from traditional APM tools?
A: Traditional APM focuses on system metrics. Agentic monitoring captures AI-specific behaviors, such as reasoning chains, tool selection, and LLM interactions.
Q: What's the performance overhead?
A: With default settings, expect less than 5% overhead. This can be reduced further with sampling.
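To illustrate how sampling reduces overhead, here is a pure-Python sketch of deterministic head sampling in the spirit of OpenTelemetry's TraceIdRatioBased sampler: each trace ID is compared against a threshold, so a fixed fraction of traces is kept and the decision is reproducible for a given ID. This is a conceptual sketch, not the SDK's sampling configuration:

```python
import random

def should_sample(trace_id: int, ratio: float) -> bool:
    """Keep a fixed fraction of traces, decided deterministically from the trace ID."""
    bound = int(ratio * (1 << 64))
    return (trace_id & ((1 << 64) - 1)) < bound

# With a 25% ratio, roughly a quarter of random 64-bit trace IDs are kept.
rng = random.Random(0)  # seeded for reproducibility
trace_ids = [rng.getrandbits(64) for _ in range(10_000)]
sampled = sum(should_sample(tid, 0.25) for tid in trace_ids)
print(sampled)
```

Deterministic sampling has the useful property that all spans of one trace share the same decision, so sampled traces remain complete.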
Q: Can I use this with LangChain or other frameworks?
A: Currently, only LangGraph is supported. Other frameworks can use the Client API directly.
Q: Is my data secure when using the SDK?
A: Yes, all data is encrypted in transit using HTTPS/TLS, and you retain full control over what data is captured and sent to Fiddler. The SDK supports deployment in your own cloud environment for maximum security.
Q: How quickly will I see data in my Fiddler dashboard?
A: Telemetry data streams in near real-time, typically appearing in your dashboard within 1-2 minutes of agent execution.
Q: What happens if my agents fail? Will I still get monitoring data?
A: Yes, the SDK captures comprehensive error information, including exception details, agent state at failure, and recovery attempts, helping you debug and improve agent reliability.
Limitations and Considerations
As a private preview release, the Fiddler LangGraph SDK has some current limitations:
Framework support: Currently supports LangGraph; other frameworks require the Client API
Protocol support: Uses HTTP-based OTLP; gRPC support planned for future releases
Attribute limits: Default limits prevent oversized spans; configurable for high-volume use cases
Breaking changes: As a private preview release, future versions may include breaking changes
These limitations don't affect the core monitoring capabilities, but they are important to consider when planning for production.
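The attribute-limit behavior mentioned above can be pictured with a small pure-Python sketch: attributes beyond a maximum count are dropped and oversized string values are truncated, mirroring OpenTelemetry-style span limits. The function, default values, and attribute keys here are illustrative assumptions, not the SDK's actual defaults:

```python
def apply_limits(attributes: dict, max_count: int = 128, max_value_len: int = 4096) -> dict:
    """Enforce span attribute limits: cap the attribute count and truncate long values."""
    limited = {}
    for i, (key, value) in enumerate(attributes.items()):
        if i >= max_count:
            break  # attributes beyond the count limit are dropped
        if isinstance(value, str) and len(value) > max_value_len:
            value = value[:max_value_len]  # oversized values are truncated
        limited[key] = value
    return limited

attrs = {"llm.output": "x" * 10_000, "model": "example-model"}
limited = apply_limits(attrs)
print(len(limited["llm.output"]))  # 4096
```

Limits like these keep individual spans at a manageable size; high-volume use cases can raise them at the cost of larger telemetry payloads.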
Next Steps
Quick Start Guide: Get your first traces in under 10 minutes
Advanced Tutorial: Explore complex multi-agent scenarios and production configurations
Fiddler LangGraph SDK Reference: Complete technical documentation for all SDK components
Getting Started with LLM Monitoring: Broader context on LLM observability with Fiddler
❓ Questions? Talk to a product expert or request a demo.
💡 Need help? Contact us at [email protected].