Agentic AI Overview
Native SDKs and framework integrations for agentic AI and LLM applications
Monitor and evaluate your agentic AI applications with Fiddler's native SDKs and framework integrations. From auto-instrumented LangGraph agents to Strands agent applications, Fiddler provides comprehensive observability for the next generation of AI systems.
Why Agentic Observability Matters
Agentic AI systems (autonomous agents that reason, plan, and coordinate) are dramatically more complex than traditional AI applications:
26x more monitoring resources required compared to single-agent systems
Non-deterministic behavior makes traditional debugging approaches inadequate
Multi-step workflows require hierarchical tracing across agents, tools, and LLM calls
Cascading failures demand root cause analysis across distributed agent architectures
Fiddler's agentic observability provides visibility into every stage of the agent lifecycle: Thought → Action → Execution → Reflection → Alignment.
Native SDKs
Fiddler-built and maintained instrumentation libraries for production-grade agentic monitoring.
Fiddler LangGraph SDK
Auto-instrument LangGraph applications with OpenTelemetry-based tracing.
Best for: LangChain LangGraph agent applications with complex multi-agent workflows
Key Features:
Automatic span creation for agent steps, tool calls, and LLM requests
Hierarchical tracing across Application → Session → Agent → Span levels
Zero-configuration setup with one environment variable
Full context preservation for debugging non-deterministic behavior
Status: ✅ GA - Production-ready
Get Started with LangGraph SDK →
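As a minimal illustration, the sketch below builds a plain LangGraph graph using only standard LangGraph APIs. The assumption, taken from the quick start later on this page, is that installing fiddler-langgraph and setting the FIDDLER_API_KEY environment variable is what activates auto-instrumentation; treat this as a sketch of the shape of an instrumented app, not a verbatim recipe.

```python
# Minimal LangGraph graph using standard LangGraph APIs only.
# Assumption (per the quick start below): installing fiddler-langgraph and
# setting FIDDLER_API_KEY enables Fiddler's automatic span creation.
import os
from typing import TypedDict

from langgraph.graph import StateGraph, START, END

os.environ.setdefault("FIDDLER_API_KEY", "your_key")  # placeholder key

class AgentState(TypedDict):
    question: str
    answer: str

def respond(state: AgentState) -> dict:
    # Swap in a real LLM call here; it would show up as a child span.
    return {"answer": f"(stub) You asked: {state['question']}"}

builder = StateGraph(AgentState)
builder.add_node("respond", respond)
builder.add_edge(START, "respond")
builder.add_edge("respond", END)
graph = builder.compile()

print(graph.invoke({"question": "Which orders shipped late this week?"}))
```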
Fiddler Strands SDK
Native integration for Strands Agent applications.
Best for: Teams building agents with the Strands framework
Key Features:
Purpose-built for Strands agent architecture
Seamless integration with Strands agent runtime
Multi-agent coordination tracking
Platform-agnostic deployment (works on AWS, custom infrastructure, etc.)
Status: ✅ GA - Production-ready
Get Started with Strands SDK →
Fiddler Evals SDK
LLM evaluation framework with pre-built evaluators and custom eval support.
Best for: Offline evaluation of LLM applications and agentic workflows
Key Features:
14+ pre-built evaluators (faithfulness, toxicity, PII, coherence, etc.)
Custom evaluator framework for domain-specific metrics
Batch evaluation for datasets
Integration with Fiddler platform for tracking and comparison
Status: ✅ GA - Production-ready
Get Started with Evals SDK →
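To make "custom evaluator" and "batch evaluation" concrete, here is a framework-agnostic sketch in plain Python. It deliberately does not use the fiddler-evals API; the EvalResult type and contains_required_disclaimer function are hypothetical and only illustrate the general shape of a domain-specific metric applied over a dataset.

```python
# Plain-Python illustration of a custom, domain-specific evaluator run in
# batch. This is NOT the fiddler-evals API; it only sketches the concept.
from dataclasses import dataclass

@dataclass
class EvalResult:
    score: float   # 0.0 - 1.0
    passed: bool
    reason: str

def contains_required_disclaimer(response: str) -> EvalResult:
    """Example domain metric: financial answers must include a disclaimer."""
    required = "not financial advice"
    found = required in response.lower()
    return EvalResult(
        score=1.0 if found else 0.0,
        passed=found,
        reason="disclaimer present" if found else "disclaimer missing",
    )

# Batch evaluation over a small dataset of (prompt, response) pairs.
dataset = [
    ("Should I buy this stock?", "This is not financial advice, but..."),
    ("Is this fund safe?", "Yes, it is completely safe."),
]

results = [contains_required_disclaimer(resp) for _, resp in dataset]
pass_rate = sum(r.passed for r in results) / len(results)
print(f"pass rate: {pass_rate:.0%}")  # -> pass rate: 50%
```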
Platform SDKs
Core API access for building custom integrations and monitoring workflows.
Python Client SDK
Comprehensive Python client for all Fiddler platform capabilities.
Best for: Custom integrations, ML model monitoring, programmatic access to Fiddler features
Key Features:
Full API coverage for ML and LLM monitoring
Dataset uploads, model publishing, event ingestion
Alert configuration, dashboard management
Custom metrics and enrichments
Status: ✅ GA - Production-ready
Python Client Documentation →
REST API
Complete HTTP API for language-agnostic platform access.
Best for: Non-Python environments, webhook integrations, custom tooling
Status: ✅ GA - Production-ready
Advanced Integrations
OpenTelemetry Integration
Direct OTLP integration for custom agent frameworks and multi-framework environments.
Best for: Multi-framework environments, custom agentic frameworks, advanced users requiring full instrumentation control
Key Features:
Vendor-neutral telemetry using OpenTelemetry standards
Manual span creation for complete control over instrumentation
Multi-framework support for custom and emerging agent frameworks
Compatible with existing OpenTelemetry infrastructure
Attribute mapping to Fiddler semantic conventions
Status: ✅ GA - Production-ready
Get Started with OpenTelemetry →
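For a sense of what manual instrumentation looks like, the sketch below uses standard OpenTelemetry Python APIs to export spans over OTLP/HTTP and to nest spans hierarchically for an agent step, a tool call, and an LLM request. The endpoint URL, authorization header, and span attribute names are placeholders (assumptions) to be replaced with the values and semantic conventions from your Fiddler deployment.

```python
# Standard OpenTelemetry setup with an OTLP/HTTP exporter. The endpoint,
# headers, and attribute names are placeholders, not Fiddler-confirmed values.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

provider = TracerProvider(
    resource=Resource.create({"service.name": "my-custom-agent"})
)
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="https://<your-fiddler-host>/<otlp-traces-path>",  # placeholder
            headers={"Authorization": "Bearer <your-api-key>"},         # placeholder
        )
    )
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("custom-agent")

# Hierarchical spans: an agent step wrapping a tool call and an LLM call.
with tracer.start_as_current_span("agent.step") as step:
    step.set_attribute("agent.name", "planner")  # illustrative attribute
    with tracer.start_as_current_span("tool.search"):
        pass  # ... call your tool here ...
    with tracer.start_as_current_span("llm.request"):
        pass  # ... call your LLM provider here ...

provider.shutdown()  # flush spans before exit
```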
Framework Support
While Fiddler provides native SDKs for LangGraph and Strands, agentic applications can be monitored regardless of framework:
Supported Frameworks & Tools
AI Agent Frameworks:
LangGraph - Native SDK with auto-instrumentation ✅
Strands Agents - Native Strands SDK ✅
LangChain - Compatible via LangGraph SDK or Python Client
Other agentic frameworks - Monitorable via OpenTelemetry integration
LLM Provider SDKs:
OpenAI SDK - Track via Python Client or custom instrumentation
Anthropic SDK - Monitor Claude API calls via Python Client
Observability Standards:
OpenTelemetry - Full OTLP support for custom instrumentation
Custom Tracing - Python Client API for framework-agnostic monitoring
Integration Selector
Not sure which SDK to use? Here's a quick decision guide:
| Your use case | Recommended integration | Why |
| --- | --- | --- |
| LangGraph agent application | LangGraph SDK | Auto-instrumentation, zero config, hierarchical tracing |
| Strands Agents | Strands SDK | Purpose-built for the Strands framework |
| LLM evaluation workflows | Evals SDK | Pre-built evaluators, batch processing, tracking |
| Custom agentic framework | OpenTelemetry Integration | Standards-based tracing, manual control, multi-framework |
| Traditional ML monitoring | Python Client | ML-specific features, drift detection, explainability |
Getting Started
Quick Start Paths
LangGraph Applications
```bash
pip install fiddler-langgraph
# Set environment variable
export FIDDLER_API_KEY=your_key
# Your LangGraph app is now instrumented
```

Strands Agents

```bash
pip install fiddler-strands
# Configure for your Strands Agent
```

Custom Agent Frameworks (OpenTelemetry)

```bash
pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
# Configure OTLP endpoint and instrument your agent
```
What's Next?
Agentic Observability Concepts - Understand the agent lifecycle and monitoring approach
Agentic Monitoring Quick Start - Complete setup guide
Trust Service Overview - Learn about the evaluation platform powering Fiddler
Need Help?
Integration issues? See our troubleshooting guide or contact support
Feature request? Request an integration for a framework we don't yet support
Questions? Join our community Slack or check our FAQ