Ensuring Data Integrity in ML Models And LLMs


Overview

ML models are increasingly driven by complex feature pipelines and automated workflows that involve dynamic data. As data is transformed from source to model input, inconsistencies and errors can be introduced.

Three types of violations can occur at model inference: missing values, type mismatches (e.g., sending a float input for a categorical feature), and range violations (e.g., sending an unknown US state for a State categorical feature).

You can monitor all these violations with auto-generated Data Integrity charts and alerts, or create your own custom charts and alerts.

[Figure: Monitoring chart with missing values, type violations, and range violations]

What Is Being Tracked?

The time series chart above tracks the violations of data integrity constraints set up for this model. Note that both raw count and percentage are available for data integrity metrics.

  • Any Violation Any Column — The count of any type of data integrity violation over all features for a given period of time.

  • % Any Violation Any Column — The percentage of any type of data integrity violation over all features for a given period of time.

  • NULL Count Any Column — The count of missing value violations over all features for a given period of time.

  • Range Violation Count Any Column — The count of range violations over all features for a given period of time.

  • Type Violation Count Any Column — The count of data type violations over all features for a given period of time.
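
Conceptually, each of these metrics is a simple aggregate over the violations observed in a time bin. The snippet below is an illustrative sketch in plain pandas (not Fiddler internals), using one plausible reading of the definitions above and toy per-event violation flags:

```python
import pandas as pd

# Toy violation flags for one time bin: one row per event, one column per
# feature, True where that feature value violated an integrity constraint.
null_flags = pd.DataFrame({'age':     [True, False, False, False],
                           'balance': [False, False, False, False]})
range_flags = pd.DataFrame({'age':     [False, True, False, False],
                            'balance': [False, True, False, False]})

total_events = len(null_flags)

# "NULL Count Any Column": events with a missing value in any feature.
null_count = null_flags.any(axis=1).sum()

# "Range Violation Count Any Column": events with a range violation anywhere.
range_count = range_flags.any(axis=1).sum()

# "Any Violation Any Column" and its percentage variant.
any_count = (null_flags | range_flags).any(axis=1).sum()
any_pct = 100.0 * any_count / total_events

print(null_count, range_count, any_count, any_pct)  # 1 1 2 50.0
```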

Why is it being tracked?

Data integrity issues can cause incorrect data to flow into the model, leading to poor model performance and negatively impacting the business or end-user experience.
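
To act on these signals, you can alert on any of the data integrity metrics above. The sketch below is a minimal example assuming the Fiddler Python client 3.x; the project, model, column names, and thresholds are hypothetical and will vary by deployment:

```python
import fiddler as fdl

# Connect to your Fiddler environment (URL and token are placeholders).
fdl.init(url='https://your_company.fiddler.ai', token='YOUR_API_TOKEN')

project = fdl.Project.from_name(name='my_project')  # hypothetical project
model = fdl.Model.from_name(name='my_model', project_id=project.id)

# Alert when the daily count of range violations on selected columns
# crosses the thresholds below.
fdl.AlertRule(
    name='Range violations - daily',
    model_id=model.id,
    metric_id='range_violation_count',
    columns=['age', 'balance'],          # hypothetical feature names
    bin_size=fdl.BinSize.DAY,
    compare_to=fdl.CompareTo.RAW_VALUE,
    condition=fdl.AlertCondition.GREATER,
    priority=fdl.Priority.HIGH,
    warning_threshold=10,
    critical_threshold=100,
).create()
```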

How does it work?

Setting up constraints for individual features can be tedious when they number in the tens or hundreds. To avoid this, Fiddler uses the model's schema as a reference to detect when features in incoming production logs deviate from the patterns established during model training. For example, feature values may fall outside the expected range (numerical inputs) or contain unknown values (categorical inputs). The minimums, maximums, and distinct categorical values for a model's features are collected during initial model onboarding and stored in the model's ModelSchema.
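
If the onboarding sample does not cover a feature's full expected range, the inferred bounds can be adjusted before they are enforced as constraints. The following is a minimal sketch assuming the 3.x Python client's editable model schema; the column names, bounds, and category list are hypothetical:

```python
# Assumes `model` was built from a sample DataFrame with
# fdl.Model.from_data(...) but has not been created on the server yet.
schema = model.schema

# Widen the numeric range inferred from the sample so that legitimate
# out-of-sample values are not flagged as range violations.
schema['age'].min = 0
schema['age'].max = 120

# Pin the full set of valid categories for a categorical feature.
schema['us_state'].categories = ['AL', 'AK', 'AZ']  # truncated for brevity

model.schema = schema
model.create()  # onboard the model with the customized constraints
```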

Fiddler automatically generates constraints based on the distribution of the sample data used to build the model schema during onboarding:

  • Type mismatch: A data integrity violation will be triggered when the type of a feature value differs from what was specified for that feature in the model's schema.

  • Range mismatch:

    • For categorical features, a data integrity violation will be triggered when a value other than those specified in the model's schema is observed.

    • For continuous variables, the violation will be triggered if the values are outside the range specified in the model's schema.

For vector data types, a range mismatch will be triggered when the dimension of an incoming vector differs from the expected dimension in the model's schema.
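
Putting these rules together, the sketch below illustrates the decision logic for a single feature value. It is a simplified illustration, not Fiddler's actual implementation, and the `Column` structure is a hypothetical stand-in for a schema entry:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Column:
    """Simplified stand-in for one entry in a model's schema."""
    data_type: Any                    # a type or tuple of types
    min: Optional[float] = None       # continuous features
    max: Optional[float] = None
    categories: Optional[set] = None  # categorical features
    dims: Optional[int] = None        # vector features

def check_value(value: Any, col: Column) -> Optional[str]:
    """Return the violation type for one feature value, or None if valid."""
    if value is None:
        return 'missing_value'
    if not isinstance(value, col.data_type):
        return 'type_violation'
    if col.categories is not None and value not in col.categories:
        return 'range_violation'      # unknown categorical value
    if col.min is not None and not (col.min <= value <= col.max):
        return 'range_violation'      # numeric value out of range
    if col.dims is not None and len(value) != col.dims:
        return 'range_violation'      # vector dimension mismatch
    return None

age = Column(data_type=(int, float), min=0, max=120)
state = Column(data_type=str, categories={'CA', 'NY', 'TX'})

print(check_value(150, age))      # range_violation
print(check_value('young', age))  # type_violation
print(check_value('WA', state))   # range_violation
print(check_value(None, state))   # missing_value
```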


Questions? Talk to a product expert or request a demo.

Need help? Contact us at help@fiddler.ai.