Performance Tracking

Overview

Model performance tells us how well a model performs on its task. A poorly performing model can have significant business implications.

What is being tracked?

Performance metrics

| Model Task Type | Metric | Description |
| --- | --- | --- |
| Binary Classification | Accuracy | (TP + TN) / (TP + TN + FP + FN) |
| Binary Classification | True Positive Rate / Recall | TP / (TP + FN) |
| Binary Classification | False Positive Rate | FP / (FP + TN) |
| Binary Classification | Precision | TP / (TP + FP) |
| Binary Classification | F1 Score | 2 * (Precision * Recall) / (Precision + Recall) |
| Binary Classification | AUROC | Area under the Receiver Operating Characteristic (ROC) curve, which plots the true positive rate against the false positive rate |
| Binary Classification | Binary Cross Entropy | Measures the difference between the predicted probability distribution and the true distribution |
| Binary Classification | Geometric Mean | Square root of (Precision * Recall) |
| Binary Classification | Calibrated Threshold | A threshold that balances precision and recall at a particular operating point |
| Binary Classification | Data Count | The number of events where the target and output are both not NULL; used as the denominator when calculating accuracy |
| Binary Classification | Expected Calibration Error | Measures the difference between predicted probabilities and empirical probabilities |
| Multiclass Classification | Accuracy | (Number of correctly classified samples) / (Data Count), where Data Count is the number of events where the target and output are both not NULL |
| Multiclass Classification | Log Loss | Measures the difference between the predicted probability distribution and the true distribution, on a logarithmic scale |
| Regression | Coefficient of Determination (R-squared) | Measures the proportion of variance in the dependent variable that is explained by the independent variables |
| Regression | Mean Squared Error (MSE) | Average of the squared differences between the predicted and true values |
| Regression | Mean Absolute Error (MAE) | Average of the absolute differences between the predicted and true values |
| Regression | Mean Absolute Percentage Error (MAPE) | Average of the absolute percentage differences between the predicted and true values |
| Regression | Weighted Mean Absolute Percentage Error (WMAPE) | Weighted average of the absolute percentage differences between the predicted and true values |
| Ranking | Mean Average Precision (MAP), for binary relevance ranking only | Measures the average precision of the relevant items in the top-k results |
| Ranking | Normalized Discounted Cumulative Gain (NDCG) | Measures the quality of the ranking of the retrieved items by discounting the relevance scores of items at lower ranks |
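For concreteness, here is a minimal sketch showing how several of the formulas above translate to code. It uses plain NumPy with made-up arrays and illustrates the standard metric definitions only; it is not Fiddler's implementation.

```python
# A minimal sketch of the standard metric definitions from the table above,
# using plain NumPy and made-up data. Illustration only, not Fiddler's code.
import numpy as np

# --- Binary classification: metrics from a confusion matrix ---
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                    # ground-truth labels
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3])    # model scores
y_pred = (y_prob >= 0.5).astype(int)                            # 0.5 decision threshold

tp = np.sum((y_pred == 1) & (y_true == 1))
tn = np.sum((y_pred == 0) & (y_true == 0))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))

accuracy = (tp + tn) / (tp + tn + fp + fn)
recall = tp / (tp + fn)                      # true positive rate
false_positive_rate = fp / (fp + tn)
precision = tp / (tp + fp)
f1 = 2 * (precision * recall) / (precision + recall)
geometric_mean = np.sqrt(precision * recall)

# --- Regression: error metrics between predicted and true values ---
y_true_reg = np.array([3.0, 5.0, 2.5, 7.0])
y_pred_reg = np.array([2.8, 5.3, 2.9, 6.1])

mse = np.mean((y_pred_reg - y_true_reg) ** 2)
mae = np.mean(np.abs(y_pred_reg - y_true_reg))
mape = np.mean(np.abs((y_pred_reg - y_true_reg) / y_true_reg))
wmape = np.sum(np.abs(y_pred_reg - y_true_reg)) / np.sum(np.abs(y_true_reg))
r_squared = 1 - np.sum((y_true_reg - y_pred_reg) ** 2) / np.sum((y_true_reg - y_true_reg.mean()) ** 2)

# --- Ranking: NDCG for one query, using a common DCG formulation ---
relevance = np.array([3, 2, 0, 1])   # graded relevance, in the order the model ranked the items

def dcg(rel):
    return np.sum((2.0 ** rel - 1) / np.log2(np.arange(2, len(rel) + 2)))

ndcg = dcg(relevance) / dcg(np.sort(relevance)[::-1])
```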

Why is it being tracked?

  • Model performance tells us how well a model is doing on its task. A poorly performing model can have significant business implications.

  • The volume of decisions made on the basis of the model's predictions gives visibility into the business impact of the model.

What steps should I take based on this information?

  • You can check whether any lightweight changes help recover performance. For example, you could try modifying the decision threshold (see the sketch after this list).

  • Retraining the model with the latest data and redeploying it is usually the solution that yields the best results, although it may be time-consuming and expensive.

For changes in model performance, the best way to cross-verify the results is by checking the Data Drift Tab. Once you confirm that the performance issue is not due to the data, you need to assess whether the change in performance is due to temporary factors or to longer-lasting issues.
