ML Platforms Overview

Integrate Fiddler with MLOps platforms, experiment tracking tools, and ML frameworks

Integrate Fiddler into your MLOps workflow to monitor models across the entire machine learning lifecycle. From experiment tracking to production deployment, Fiddler works with the ML platforms you already use.

Why ML Platform Integrations Matter

Modern ML teams use sophisticated platforms for experimentation, training, and deployment. Fiddler's integrations give you:

  • Unified Model Governance - Track models from experiment to production in one platform

  • Automated Monitoring Setup - Auto-configure monitoring when models are registered

  • Seamless Workflow Integration - Add observability without changing existing processes

  • Bi-Directional Sync - Share metrics between Fiddler and your ML platform

  • Experiment Comparison - Compare production performance against training experiments

MLOps Platform Integrations

Databricks

Integrate Fiddler with Databricks for unified ML development and monitoring.

Why Databricks + Fiddler:

  • Lakehouse Architecture - Monitor models trained on Delta Lake data

  • MLflow Integration - Automatic sync of registered models to Fiddler

  • Notebook Integration - Use Fiddler SDK directly in Databricks notebooks

  • Production Monitoring - Monitor models served via Databricks Model Serving

Key Features:

  • Automatic Model Registration - Models registered in Databricks MLflow automatically appear in Fiddler

  • Feature Store Integration - Monitor drift using Databricks Feature Store definitions

  • Collaborative Debugging - Share Fiddler insights in Databricks notebooks

  • Unified Data Access - Use Delta Lake as a data source for baselines and production data

Get Started with Databricks →

Quick Start:
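
A minimal quick-start sketch for a Databricks notebook, assuming the `fiddler-client` package is installed on the cluster. The URL, secret scope, table, and project names are placeholders, and the Fiddler calls follow the 3.x Python client, so check them against your client version.

```python
# Databricks notebook sketch: onboard a model to Fiddler using Delta Lake data.
import fiddler as fdl

# Connect to your Fiddler deployment (URL and secret scope are placeholders).
fdl.init(
    url="https://your_org.fiddler.ai",
    token=dbutils.secrets.get(scope="fiddler", key="access-token"),
)

# Read baseline data from a Delta table; convert to pandas for upload.
baseline_df = spark.read.table("ml.churn.baseline_features").toPandas()

# Create a project and infer the model schema from the baseline sample.
project = fdl.Project(name="churn_prediction")
project.create()

model = fdl.Model.from_data(
    source=baseline_df,
    name="churn_classifier",
    project_id=project.id,
    task=fdl.ModelTask.BINARY_CLASSIFICATION,
)
model.create()

# Publish the baseline so production traffic can be compared against it.
model.publish(
    source=baseline_df,
    environment=fdl.EnvType.PRE_PRODUCTION,
    dataset_name="baseline",
)
```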

MLflow

Connect Fiddler to MLflow for experiment tracking and model registry integration.

Why MLflow + Fiddler:

  • Open-Source Standard - Works with any MLflow deployment (Databricks, AWS, GCP, self-hosted)

  • Model Registry Sync - Automatically monitor models when they transition to "Production"

  • Experiment Tracking - Compare production metrics with training experiment metrics

  • Model Versioning - Track performance across model versions

Key Features:

  • Automatic Model Onboarding - Models in MLflow registry auto-configure in Fiddler

  • Metric Synchronization - Export Fiddler metrics back to MLflow for unified view

  • Artifact Integration - Link model artifacts between MLflow and Fiddler

  • Stage-Based Monitoring - Different monitoring configs for Staging vs Production

Get Started with MLflow →

Quick Start:
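
A minimal sketch of the MLflow side of the integration: look up the model version currently in the Production stage and onboard it into Fiddler from a baseline sample saved at training time. The tracking URI, model name, and baseline path are placeholders; the Fiddler calls follow the 3.x client.

```python
# Sketch: mirror the Production-stage MLflow model into Fiddler.
import pandas as pd
import fiddler as fdl
from mlflow.tracking import MlflowClient

mlflow_client = MlflowClient(tracking_uri="http://mlflow.internal:5000")

# Find the version currently serving in Production.
prod = mlflow_client.get_latest_versions("churn_classifier", stages=["Production"])[0]
print(f"Syncing {prod.name} v{prod.version} (run {prod.run_id})")

fdl.init(url="https://your_org.fiddler.ai", token="YOUR_TOKEN")

# Infer the Fiddler model schema from a baseline sample saved during training.
baseline_df = pd.read_parquet("s3://ml-artifacts/churn/baseline.parquet")
project = fdl.Project(name="churn_prediction")
project.create()

model = fdl.Model.from_data(
    source=baseline_df,
    name=f"{prod.name}_v{prod.version}",
    project_id=project.id,
)
model.create()
```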

Experiment Tracking & Model Registry

Unified Model Lifecycle

Track models from experimentation through production.

Integration Benefits:

  • Single Source of Truth - MLflow registry as canonical model inventory

  • Automated Workflows - Monitoring setup triggered by model registration

  • Version Comparison - Compare production metrics across model versions

  • Rollback Readiness - Quick rollback with historical performance data

Experiment-to-Production Comparison

Compare production model performance against training experiments:
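
As a sketch of what this comparison can look like in code: pull the training metric logged on the MLflow run behind the Production model version and set it against the accuracy observed in production. `fetch_production_accuracy` is a hypothetical stand-in for a query against Fiddler's metrics API, stubbed here for illustration.

```python
# Compare training-time accuracy with observed production accuracy.
from mlflow.tracking import MlflowClient

def fetch_production_accuracy(model_name: str) -> float:
    """Hypothetical helper: query Fiddler for recent production accuracy."""
    return 0.87  # stubbed value for illustration

client = MlflowClient()
version = client.get_latest_versions("churn_classifier", stages=["Production"])[0]
run = client.get_run(version.run_id)

train_acc = run.data.metrics["accuracy"]  # assumes "accuracy" was logged in training
prod_acc = fetch_production_accuracy("churn_classifier")

delta = prod_acc - train_acc
print(f"train={train_acc:.3f} prod={prod_acc:.3f} delta={delta:+.3f}")
if delta < -0.05:  # the threshold is a project-specific choice
    print("Production accuracy has degraded; investigate drift or retrain.")
```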

ML Framework Support

While Fiddler is framework-agnostic, we provide enhanced support for popular ML frameworks:

Supported ML Frameworks

Classical ML:

  • Scikit-Learn - Full support for all estimators

  • XGBoost - Native explainability for tree models

  • LightGBM - Fast SHAP explanations

  • CatBoost - Categorical feature support

Deep Learning:

  • TensorFlow/Keras - Model analysis and monitoring

  • PyTorch - Dynamic graph model support

  • JAX - High-performance model monitoring

  • ONNX - Framework-agnostic model format

AutoML:

  • H2O.ai - AutoML model monitoring

  • AutoGluon - Tabular model support

  • TPOT - Pipeline optimization monitoring

Framework-Specific Features

Tree-Based Models (XGBoost, LightGBM, CatBoost):

  • Fast SHAP explanations using native implementations

  • Feature importance tracking over time

  • Tree structure analysis for debugging

Deep Learning (TensorFlow, PyTorch):

  • Layer-wise activation monitoring

  • Embedding drift detection

  • Custom metric support for complex architectures

Example - XGBoost Monitoring:
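
The example below is an illustrative sketch: train a small XGBoost classifier, then publish its scored events to Fiddler so drift and performance are tracked. Project and model names are placeholders, and the Fiddler calls follow the 3.x client; it assumes the model was already onboarded (see the quick starts above).

```python
# Train an XGBoost model and publish scored events to Fiddler.
import pandas as pd
import xgboost as xgb
import fiddler as fdl
from sklearn.datasets import make_classification

# Toy data standing in for your real feature pipeline.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
events = pd.DataFrame(X, columns=[f"f{i}" for i in range(5)])

clf = xgb.XGBClassifier(n_estimators=50, random_state=0)
clf.fit(events, y)

# Attach predictions and labels so Fiddler can compute performance metrics.
events["prediction"] = clf.predict_proba(events)[:, 1]
events["target"] = y

fdl.init(url="https://your_org.fiddler.ai", token="YOUR_TOKEN")

# project_id is a placeholder; look it up from your onboarded project.
model = fdl.Model.from_name(name="churn_classifier", project_id="churn_prediction")
model.publish(source=events)  # events stream into the production environment
```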

Integration Architecture Patterns

Pattern 1: MLflow-Centric Workflow

Use MLflow as the central hub for all ML operations.

Configuration:
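
A sketch of what this configuration might hold, expressed as a Python settings dict. The exact configuration surface depends on how you deploy the sync (scheduled job, webhook service, etc.), so treat every key as illustrative.

```python
# Illustrative settings for an MLflow-centric sync job (all keys are placeholders).
MLFLOW_SYNC_CONFIG = {
    "mlflow_tracking_uri": "http://mlflow.internal:5000",
    "registry_stages": ["Production"],   # only Production versions get monitored
    "sync_interval_minutes": 15,         # polling cadence for new versions
    "fiddler_url": "https://your_org.fiddler.ai",
    "fiddler_project": "churn_prediction",
    "export_metrics_to_mlflow": True,    # see Bi-Directional Metric Sync below
}
```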

Pattern 2: Databricks Unity Catalog Integration

Leverage Databricks Unity Catalog for governance and Fiddler for monitoring.

Configuration:
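
Again as an illustrative Python settings dict, with every value a placeholder:

```python
# Illustrative settings for a Unity Catalog-governed workflow (placeholders).
DATABRICKS_SYNC_CONFIG = {
    "databricks_host": "https://adb-1234567890123456.7.azuredatabricks.net",
    "auth_method": "service_principal_oauth",        # recommended for production
    "uc_models": ["ml.churn.churn_classifier"],      # catalog.schema.model
    "feature_store_tables": ["ml.churn.features"],   # drift baselines
    "fiddler_project": "churn_prediction",
}
```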

Pattern 3: Multi-Platform Model Tracking

Monitor models across multiple ML platforms:
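
In practice, a multi-platform setup usually boils down to iterating over per-platform registries and onboarding each model into a shared Fiddler project. In the sketch below, `sync_model` is a hypothetical helper wrapping the quick-start steps above, and the registry URIs are placeholders.

```python
# Sketch: track models from several registries under one Fiddler project.
REGISTRIES = {
    "databricks": "databricks://prod-workspace",         # placeholder URIs
    "self_hosted_mlflow": "http://mlflow.internal:5000",
}

def sync_model(platform: str, tracking_uri: str, model_name: str) -> None:
    """Hypothetical helper: onboard one registry model into Fiddler."""
    print(f"[{platform}] syncing {model_name} from {tracking_uri}")

for platform, uri in REGISTRIES.items():
    sync_model(platform, uri, "churn_classifier")
```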

Getting Started

Prerequisites

  • Fiddler Account - Cloud or on-premises deployment

  • ML Platform Access - Databricks workspace or MLflow server

  • Credentials - Fiddler access token + ML platform credentials

  • Network Connectivity - Firewall rules for integration

General Setup Steps

1. Configure ML Platform Connection

2. Sync Existing Models (Optional)

3. Enable Auto-Monitoring
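
The three steps above map onto code roughly as follows. This is a hedged outline: `enable_auto_monitoring` stands in for whatever mechanism (registry webhook or scheduled polling job) drives your sync, and all names are placeholders.

```python
# Outline of the general setup steps (names are illustrative).
import fiddler as fdl
from mlflow.tracking import MlflowClient

# 1. Configure the ML platform connection.
mlflow_client = MlflowClient(tracking_uri="http://mlflow.internal:5000")
fdl.init(url="https://your_org.fiddler.ai", token="YOUR_TOKEN")

# 2. Sync existing models (optional): walk the registry once.
for mv in mlflow_client.search_model_versions("name='churn_classifier'"):
    if mv.current_stage == "Production":
        print(f"Would onboard {mv.name} v{mv.version} into Fiddler")

# 3. Enable auto-monitoring: hypothetical hook for future registrations.
def enable_auto_monitoring(model_name: str) -> None:
    print(f"Auto-monitoring enabled for {model_name}")

enable_auto_monitoring("churn_classifier")
```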

Advanced Integration Features

Feature Store Integration

Monitor models using features from Databricks Feature Store:
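
A sketch assuming the `databricks-feature-store` client: read the feature table that feeds the model and publish it to Fiddler as a drift baseline. Table, project, and model names are placeholders, and the Fiddler calls follow the 3.x client.

```python
# Databricks notebook sketch: use a Feature Store table as a Fiddler baseline.
import fiddler as fdl
from databricks.feature_store import FeatureStoreClient

fs = FeatureStoreClient()

# Read the features the model was trained on.
features_df = fs.read_table(name="ml.churn.features").toPandas()

fdl.init(url="https://your_org.fiddler.ai", token="YOUR_TOKEN")
model = fdl.Model.from_name(name="churn_classifier", project_id="churn_prediction")

# Publish the feature snapshot as a pre-production baseline for drift checks.
model.publish(
    source=features_df,
    environment=fdl.EnvType.PRE_PRODUCTION,
    dataset_name="feature_store_baseline",
)
```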

Automated Retraining Triggers

Trigger retraining workflows when drift is detected:
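
One common shape for this, sketched below: point a Fiddler drift alert at a small webhook service that kicks off a Databricks retraining job. The alert payload fields are illustrative (match them to your Fiddler webhook format), and the host and job ID are placeholders; the Jobs API endpoint itself (`POST /api/2.1/jobs/run-now`) is standard Databricks.

```python
# Sketch: webhook receiver that launches a Databricks retraining job on drift.
import os
import requests
from flask import Flask, request

app = Flask(__name__)
DATABRICKS_HOST = "https://adb-1234567890123456.7.azuredatabricks.net"
RETRAIN_JOB_ID = 12345  # placeholder retraining job ID
TOKEN = os.environ["DATABRICKS_TOKEN"]

@app.route("/fiddler-alert", methods=["POST"])
def on_alert():
    payload = request.get_json(force=True)
    # Field names below are illustrative; adapt to your alert payload schema.
    if payload.get("alert_type") == "drift" and payload.get("severity") == "critical":
        requests.post(
            f"{DATABRICKS_HOST}/api/2.1/jobs/run-now",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={"job_id": RETRAIN_JOB_ID},
            timeout=30,
        )
    return {"status": "ok"}
```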

Model Lineage Tracking

Track complete model lineage from data to deployment:
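
A lightweight way to keep lineage queryable, sketched with MLflow model-version tags: record the training data snapshot and the Fiddler model it feeds. The tag keys are conventions for illustration, not a standard.

```python
# Sketch: record data -> model -> monitoring lineage as MLflow tags.
from mlflow.tracking import MlflowClient

client = MlflowClient()
name, version = "churn_classifier", "3"  # placeholders

client.set_model_version_tag(name, version, "training_data", "ml.churn.features@v42")
client.set_model_version_tag(name, version, "fiddler_project", "churn_prediction")
client.set_model_version_tag(name, version, "fiddler_model", "churn_classifier")
```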

Integration Selector

Choose the right ML platform integration for your workflow:

| Your ML Platform | Recommended Integration | Why |
| --- | --- | --- |
| Databricks Lakehouse | Databricks integration | Native MLflow, Unity Catalog, Feature Store |
| Self-hosted MLflow | MLflow integration | Open-source, cloud-agnostic |
| AWS SageMaker | SageMaker Pipelines | AWS-native, Partner AI App compatible |
| Azure ML | MLflow integration | Azure ML uses MLflow under the hood |
| Vertex AI (GCP) | MLflow integration | Vertex AI supports MLflow |
| Multiple platforms | MLflow integration | Universal compatibility |

Bi-Directional Metric Sync

Share metrics between Fiddler and your ML platform:

Export Fiddler Metrics to MLflow
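
Sketch: write production metrics back onto the model's training run so both views live in MLflow. `fetch_production_metrics` is a hypothetical stand-in for a Fiddler metrics query, stubbed here for illustration.

```python
# Sketch: log Fiddler production metrics onto the model's MLflow training run.
from mlflow.tracking import MlflowClient

def fetch_production_metrics(model_name: str) -> dict:
    """Hypothetical helper: query Fiddler for recent production metrics."""
    return {"prod_accuracy": 0.87, "prod_drift_score": 0.12}  # stubbed values

client = MlflowClient()
version = client.get_latest_versions("churn_classifier", stages=["Production"])[0]

for key, value in fetch_production_metrics("churn_classifier").items():
    client.log_metric(version.run_id, key, value)
```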

Import MLflow Metrics to Fiddler
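
And a sketch of the reverse direction: read training metrics from the MLflow run and attach them to the Fiddler model. `attach_training_metrics` is a hypothetical helper, since the exact Fiddler surface for custom metadata depends on your deployment.

```python
# Sketch: pull training metrics from MLflow for use in Fiddler comparisons.
from mlflow.tracking import MlflowClient

def attach_training_metrics(fiddler_model: str, metrics: dict) -> None:
    """Hypothetical helper: store training metrics alongside the Fiddler model."""
    print(f"Attaching to {fiddler_model}: {metrics}")

client = MlflowClient()
version = client.get_latest_versions("churn_classifier", stages=["Production"])[0]
run = client.get_run(version.run_id)

attach_training_metrics("churn_classifier", dict(run.data.metrics))
```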

Security & Access Control

Authentication Methods

Databricks:

  • Personal Access Tokens (development)

  • Service Principal OAuth (production)

  • Azure AD Integration (enterprise)

MLflow:

  • HTTP Basic Authentication

  • Token-Based Authentication

  • Custom Auth Plugins

Permission Requirements

Databricks Permissions:

  • CAN_MANAGE on registered models

  • CAN_READ on Feature Store tables

  • CAN_USE on clusters (for SHAP computation)

MLflow Permissions:

  • Read access to Model Registry

  • Read access to Experiment Tracking

  • Write access for metric export (optional)

Monitoring MLOps Pipeline Health

Track Integration Health
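
A sketch of a scheduled health check: confirm both endpoints answer and report how stale the last registry sync is. The endpoint paths and the sync-state lookup are placeholders.

```python
# Sketch: periodic health check for the MLflow <-> Fiddler integration.
import time
import requests

ENDPOINTS = {
    "mlflow": "http://mlflow.internal:5000/health",   # placeholder paths
    "fiddler": "https://your_org.fiddler.ai/health",
}
LAST_SYNC_TS = time.time() - 1200  # stand-in for persisted sync state

for name, url in ENDPOINTS.items():
    ok = requests.get(url, timeout=10).status_code == 200
    print(f"{name}: {'up' if ok else 'DOWN'}")

lag_minutes = (time.time() - LAST_SYNC_TS) / 60
print(f"registry sync lag: {lag_minutes:.0f} min")
```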

Alerts for Sync Failures
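
And a sketch of turning a failed check into a notification, here via a generic Slack incoming webhook (the URL is a placeholder):

```python
# Sketch: notify a Slack channel when the sync health check fails.
import requests

SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def alert_sync_failure(detail: str) -> None:
    requests.post(
        SLACK_WEBHOOK,
        json={"text": f":warning: Fiddler sync failed: {detail}"},
        timeout=10,
    )

alert_sync_failure("MLflow registry unreachable for 30 minutes")
```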

Troubleshooting

Common Issues

Models Not Syncing:

  • Verify MLflow/Databricks credentials are valid

  • Check network connectivity from Fiddler to ML platform

  • Ensure models are in the correct stage (e.g., "Production")

  • Validate webhook endpoint is reachable (for event-driven sync)

Schema Mismatches:

  • Ensure feature names match between training and production

  • Verify data types are consistent

  • Check for missing features in production data

Performance Issues:

  • For large models, use SHAP sampling instead of full computation

  • Enable lazy loading for model artifacts

  • Use incremental sync for model registry (don't sync all historical versions)

