Data Drift: Monitor Model Performance Changes with Fiddler's Insights


Monitoring Data Drift in ML Models

Model performance can degrade when a model trained on one dataset encounters different data in production. This phenomenon is called data drift, and Fiddler exposes it as a metric on model inputs, outputs, and custom features. On the Insights dashboard for your model, Fiddler provides a diverse set of visuals for exploring these metrics.

Use the data drift chart to identify what data is drifting, when it is drifting, and how it is drifting. This is the first step in diagnosing possible model performance issues.

Drift Metrics Details

Fiddler supports the following:

  • Drift Metrics

    • Jensen–Shannon distance (JSD)

      • A distance metric calculated between the distribution of a field in the baseline and the distribution of that same field over the time period of interest.

      • JSD is the square root of the Jensen–Shannon divergence; it is symmetric and, with a base-2 logarithm, bounded between 0 and 1, where 0 indicates identical distributions.

    • Population Stability Index (PSI)

      • A drift metric based on the multinomial classification of a variable into bins or categories. The differences in each bin between the baseline and the time period of interest are used to calculate it as follows (see the sketch after this list):

        PSI = Σᵢ (pᵢ − qᵢ) × ln(pᵢ / qᵢ)

        where qᵢ is the fraction of values falling in bin i for the baseline and pᵢ is the fraction of values falling in bin i for the time period of interest.

        🚧 Note

        PSI shoots to infinity when a bin is empty in either distribution, because the log ratio becomes unbounded. To avoid this, Fiddler increments each bin count by base_count=1 before computing the fractions, so there may be a slight difference between Fiddler's PSI values and those obtained from manual calculations.

  • Average Values – The mean of a field (feature or prediction) over time. This can be thought of as an intuitive drift score.

  • Drift Analytics – You can drill down into the features responsible for the prediction drift using the table at the bottom.

    • Feature Impact: The contribution of a feature to the model’s predictions, averaged over the baseline. The contribution is calculated using random ablation feature impact.

    • Feature Drift: Drift of the feature, calculated using the drift metric of choice.

    • Prediction Drift Impact: A heuristic calculated using the product of the feature impact and the feature drift. The higher the score, the more this feature is likely to have contributed to the prediction drift.
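
The sketch below illustrates how these quantities relate, computed outside of Fiddler: it bins a baseline sample and a production sample, computes JSD and PSI (applying the base_count=1 smoothing described in the note above), and multiplies a feature-impact value by the drift score to form the prediction drift impact heuristic. The sample data, bin edges, and feature_impact value are illustrative assumptions, not Fiddler's internal implementation.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def bin_fractions(values, bin_edges, base_count=1):
    """Histogram values into the given bins, add base_count to every bin
    (the smoothing described in the note above), and normalize to fractions."""
    counts, _ = np.histogram(values, bins=bin_edges)
    counts = counts + base_count
    return counts / counts.sum()

def psi(baseline, production, bin_edges):
    """Population Stability Index: sum over bins of (p_i - q_i) * ln(p_i / q_i)."""
    q = bin_fractions(baseline, bin_edges)    # baseline bin fractions
    p = bin_fractions(production, bin_edges)  # time-period bin fractions
    return float(np.sum((p - q) * np.log(p / q)))

def jsd(baseline, production, bin_edges):
    """Jensen-Shannon distance between the two binned distributions."""
    q = bin_fractions(baseline, bin_edges)
    p = bin_fractions(production, bin_edges)
    return float(jensenshannon(p, q, base=2))

# Illustrative data: a feature whose production distribution has shifted.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=10_000)
production = rng.normal(loc=0.5, scale=1.2, size=10_000)
edges = np.histogram_bin_edges(baseline, bins=10)

feature_drift = jsd(baseline, production, edges)
feature_impact = 0.30  # hypothetical feature-impact score for this feature

print(f"JSD = {feature_drift:.4f}")
print(f"PSI = {psi(baseline, production, edges):.4f}")
# Prediction drift impact: the product heuristic described above.
print(f"Prediction drift impact = {feature_impact * feature_drift:.4f}")
```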

Determining Drift Root Cause

In the Root Cause Analysis table of your drift charts, you can select a feature to view its distribution over both the time period under consideration and the baseline dataset, side by side.
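
The same side-by-side comparison can be approximated outside the UI. The sketch below bins one feature using edges derived from the baseline and prints the two distributions next to each other; the sample data and bin count are illustrative assumptions, not a Fiddler API call.

```python
import numpy as np

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, size=5_000)    # baseline dataset values for one feature
production = rng.normal(0.8, 1.0, size=5_000)  # values from the time period under consideration

# Bin both samples with edges derived from the baseline; production values
# outside the baseline range are dropped by np.histogram.
edges = np.histogram_bin_edges(baseline, bins=8)
q = np.histogram(baseline, bins=edges)[0] / baseline.size
p = np.histogram(production, bins=edges)[0] / production.size

for lo, hi, qi, pi in zip(edges[:-1], edges[1:], q, p):
    print(f"[{lo:6.2f}, {hi:6.2f})  baseline={qi:6.1%}  period={pi:6.1%}")
```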

Why Track Data Drift?

  • Data drift is a great proxy metric for performance decline, especially when there is a delay in getting labels for production events (e.g., in a credit lending use case, an actual default may not be observed for months or years).

  • Monitoring data drift also helps you stay informed about distributional shifts in the data for features of interest, which could have business implications even if there is no decline in model performance.

Taking Action on Observed Data Drift

  • High drift can occur as a result of data integrity issues (e.g., bugs in the data pipeline), or as a result of an actual change in the data distribution due to external factors (e.g., a dip in income due to COVID). The former is more directly within your control to fix; the latter may not be directly solvable, but it serves as an indicator that further investigation, and possibly model retraining, is needed.

  • You can drill down deeper into the data by examining it in the Analyze tab.


❓ Questions? Talk to a product expert or request a demo.

💡 Need help? Contact us at help@fiddler.ai.
[Figures: Fiddler monitoring dashboard · Fiddler drift charts · feature drift chart comparing feature distributions between baseline and production data]