
© 2024 Fiddler Labs, Inc.


Product Concepts


This page explains core concepts and terminology used throughout Fiddler's AI Observability and Security platform. Understanding these concepts will help you navigate the platform more effectively and get the most from Fiddler's capabilities.

Monitoring and Observability Concepts

ML Observability

ML Observability is the practice of gaining comprehensive insights into AI application performance throughout its lifecycle. It goes beyond simple indicators of good and bad performance by empowering stakeholders to understand why a model behaves in a certain manner and how to enhance its performance. ML Observability begins with monitoring and alerting on performance issues but extends to guiding model owners toward the underlying root causes.

LLM Observability

LLM Observability is the specialized practice of evaluating, monitoring, analyzing, and improving Generative AI and LLM-based applications across their lifecycle. Fiddler provides real-time monitoring of safety metrics such as toxicity, bias, and PII exposure, as well as correctness metrics such as hallucination, faithfulness, and relevancy that are specific to language models.

Alerts

Alerts are rules that trigger when production data meets defined conditions. These rules can be user-defined or automatically generated based on user configuration. Alert notifications can be sent via email, Slack, PagerDuty, or any combination thereof, enabling teams to respond quickly to potential issues with model performance or data quality.
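As an illustration of the concept (not Fiddler's actual API), an alert rule can be modeled as a metric name, a threshold, and a comparison direction; the class and field names below are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class AlertRule:
    """Hypothetical alert rule: fires when a metric value crosses a threshold."""

    metric: str
    threshold: float
    comparison: str = "greater"  # "greater" or "lesser"

    def triggered(self, value: float) -> bool:
        # Compare the observed metric value against the configured threshold
        if self.comparison == "greater":
            return value > self.threshold
        return value < self.threshold


# e.g. alert when the share of missing values exceeds 5%
rule = AlertRule(metric="null_violation_pct", threshold=0.05)
print(rule.triggered(0.12))  # True: a notification would be sent
print(rule.triggered(0.01))  # False: within bounds
```

In a monitoring platform, rules like this are evaluated on a schedule against freshly computed metric values, and a `True` result fans out to the configured notification channels.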

Metrics

Metrics in Fiddler refer to the quantitative measurements and calculations the platform performs on inference data. These metrics provide insights into model behavior, data characteristics, and performance over time. Fiddler offers several core metric types:

  • Data Drift: Measures statistical differences between production and baseline data distributions

  • Performance: Tracks model accuracy, precision, recall, and other performance indicators

  • Data Integrity: Identifies missing values, outliers, and other data quality issues

  • Traffic: Monitors request volumes, response times, and utilization patterns

  • Statistical: Provides basic descriptive statistics about data distributions

  • Custom Metrics: User-defined calculations tailored to specific business needs
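To make the drift idea concrete, here is a minimal sketch (not Fiddler's implementation) of the Population Stability Index, a common statistic for comparing a binned production distribution against a binned baseline distribution:

```python
import math


def psi(baseline_counts, production_counts, eps=1e-6):
    """Population Stability Index between two binned distributions.

    0 means no drift; values above ~0.2 are often treated as significant.
    eps guards against log(0) for empty bins.
    """
    b_total = sum(baseline_counts)
    p_total = sum(production_counts)
    score = 0.0
    for b, p in zip(baseline_counts, production_counts):
        b_frac = max(b / b_total, eps)
        p_frac = max(p / p_total, eps)
        score += (p_frac - b_frac) * math.log(p_frac / b_frac)
    return score


print(psi([25, 50, 25], [25, 50, 25]))  # 0.0 — identical distributions
print(psi([25, 50, 25], [10, 40, 50]))  # > 0 — production has shifted
```

Production platforms typically compute such drift scores per feature and per time window, so a drifting feature can be traced back to the window where the shift began.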

Data Management Concepts

Pre-production Data

Data designated as pre-production is non-time series data uploaded to Fiddler in a single batch. Pre-production data typically includes training datasets, validation datasets, or other static data meant to be evaluated as a complete unit, without the dimension of trends over time.

Production Data

Data designated as production is time series data, such as the inference logs generated by models making decisions in live environments. Each record captures the inputs and outputs of a model inference/decision, which Fiddler analyzes and compares against pre-production data to determine whether model performance is degrading over time.
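For example, a single production event bundles a timestamp with the model's inputs and output so it can be analyzed as time series data. The field names below are illustrative, not Fiddler's actual event schema:

```python
import json
from datetime import datetime, timezone

# Illustrative inference log entry for a fraud model; field names are hypothetical
event = {
    "event_id": "txn-000123",
    "timestamp": datetime(2024, 6, 1, 12, 30, tzinfo=timezone.utc).isoformat(),
    "inputs": {"amount": 412.50, "country": "US", "num_prior_txns": 7},
    "output": {"fraud_score": 0.83, "decision": "review"},
}
print(json.dumps(event, indent=2))
```

The stable event ID matters: it is what lets later ground-truth labels or corrections be joined back to the original inference.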

Baselines

Baselines are reference datasets used for calculating data drift and other comparative metrics. When determining whether drift has occurred, Fiddler compares the distribution of current production data against this reference data. Most commonly, training data establishes a model's baseline, but multiple baselines can be defined for a model, including static sets of historical inferences or rolling baselines that look back over specific time periods.

Segments (Cohorts)

Segments, also called Cohorts, are subsets of inference logs defined by custom filters. Segments allow users to analyze metrics for specific subsets of data (for example, "transactions under $1000" or "users from a specific region"). Segmentation enables more granular analysis of model performance across different data populations.

Trust Scores (Enrichments)

Trust Scores, also known as Enrichments, are specialized metrics that assess various quality and safety dimensions of LLM outputs. Generated by Fiddler's Trust Models, these scores evaluate dimensions such as safety, toxicity, hallucination, relevance, and coherence. They provide quantifiable measurements for monitoring LLM behavior and can trigger alerts or actions when outputs fall below quality thresholds.

Platform Components and Features

Fiddler Trust Service

The Fiddler Trust Service hosts specialized large language models (LLMs) called Fiddler Trust Models that are purpose-built for AI monitoring and guardrail applications. These models:

  • Evaluate LLM outputs with significantly higher efficiency than general-purpose LLMs

  • Maintain comparable quality in their assessments

  • Support both observability features and real-time protection capabilities

Fiddler Guardrails

Fiddler Guardrails is a real-time content safety solution that evaluates and filters potentially harmful outputs from large language models before they reach end users. Built on Fiddler's Trust Service infrastructure, Guardrails detects problematic content across multiple safety dimensions and can either filter out unsafe content or provide detailed explanations of policy violations.

Embedding Visualizations

Embedding Visualizations in Fiddler display high-dimensional embedding vectors in an accessible two-dimensional space using techniques like UMAP (Uniform Manifold Approximation and Projection). These visualizations make complex vector relationships visible, allowing users to identify clusters, outliers, and patterns that would remain hidden in raw numerical data.

Dashboards and Charts

Fiddler uses customizable Dashboards for monitoring and sharing model behavior. Dashboards comprise Charts that provide distinct visualization types:

  • Monitoring Charts: Track metrics over time and compare model performance

  • Embedding Visualizations: Display semantic relationships in embedding space

  • Performance Analytics: Analyze model performance across different segments

Dashboards consolidate visualizations in one place, offering a detailed overview of model performance and an entry point for deeper analysis and root cause identification.

Bookmarks

Bookmarking enables quick access to frequently used projects, models, charts, and dashboards. The comprehensive bookmark page enhances navigation efficiency within the Fiddler platform, allowing users to quickly return to their most important resources.

Administration Concepts

Projects

Projects in Fiddler serve as the principal organizational containers for your AI applications or use cases. Each project functions as a logical workspace that encapsulates related models, datasets, baselines, monitoring configurations, and analytics.

Projects provide several key benefits:

  • Organizational Structure: Group related models and assets by business function, team ownership, or application purpose

  • Access Control: Define which users and teams can view or modify project resources through role-based permissions

  • Resource Isolation: Maintain separate environments for different AI initiatives to prevent configuration conflicts

  • Focused Monitoring: Create dashboards and alerts specific to the business context of each application

  • Collaborative Workflow: Enable teams to work together on related models within a consistent environment

Within a project, you can onboard multiple models, upload baseline datasets, create production data-based baselines, configure alerts, build dashboards, and analyze performance—all within a unified context that reflects your organization's structure and workflows.

Projects help scale AI governance across your organization by providing clear boundaries between different applications while maintaining consistent monitoring and explainability practices.

Role-Based Access Control

Fiddler supports Role-Based Access Control (RBAC), which defines who can access which resources within the platform. Available roles include:

  • Org Admin: Manages users, teams, projects, and organization settings

  • Org Member: Has limited access to organization settings and cannot create projects

  • Project Admin: Manages all aspects of a project, including models, settings, and alerts

  • Project Writer: Can view and edit most project details, but has limited administrative capabilities

  • Project Viewer: Can view project resources, but cannot make changes

Teams

Teams are groups of users within your organization that can be assigned specific roles and permissions for different projects. Each user can be a member of multiple teams, enabling flexible access control based on organizational structure and responsibilities.