Model Schema Editing Guide

Note:

  • The UI-based Model Editor feature is currently in 🚧 public preview

  • Available as of v25.4

Overview

This guide explains how to edit your model's schema in Fiddler to better align with production data. Schema editing helps you maintain accurate monitoring as your data evolves.

Key capabilities

  • Adjust numeric feature ranges when real-world data deviates from your original sample data

  • Edit categorical feature values to add or remove categories as new patterns emerge

  • Add metadata columns to include additional contextual information for improved insights
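
If you want to review the current schema before making any UI edits, the snippet below is a minimal sketch using the Fiddler Python client. The URL, token, project name, and model name are placeholders, and the column attributes shown (min, max, categories) are read defensively because exact field names can vary between client versions; see the Python Client API Reference for the authoritative interface.

```python
import fiddler as fdl

# Connect to your Fiddler instance (URL and token are placeholders)
fdl.init(url="https://your_org.fiddler.ai", token="YOUR_API_TOKEN")

# Fetch the model whose schema you plan to edit (names are placeholders)
project = fdl.Project.from_name(name="my_project")
model = fdl.Model.from_name(name="my_model", project_id=project.id)

# Print each column's type and value constraints before editing in the UI.
# getattr() is used because not every column type carries min/max/categories.
for column in model.schema.columns:
    print(
        column.name,
        column.data_type,
        getattr(column, "min", None),
        getattr(column, "max", None),
        getattr(column, "categories", None),
    )
```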

Adjusting numeric feature ranges

Access the Schema tab

  1. Navigate to the Model Page of your desired model

  2. Select the Schema tab

Edit numeric column range

  1. Find the numeric column you want to adjust

  2. Select the edit icon (✏️) next to the column name

  3. In the dialog box, modify the minimum and/or maximum values

  4. Select Update to save your changes

Impact of changes

  • Data drift metrics: Changes apply to all data, including historical data

    • A job will run to recalculate aggregates and update metrics

  • Data integrity metrics: Changes only apply to new data going forward
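
For context, the UI edit above amounts to widening (or narrowing) the column's stored minimum and maximum bounds. The sketch below is a hypothetical programmatic equivalent that reuses the `model` object from the earlier snippet; the column name, the new bounds, and persisting schema edits via `model.update()` are all assumptions, so confirm the supported workflow in the Python Client API Reference.

```python
# Hypothetical programmatic equivalent of the numeric range edit above.
# Assumes schema columns expose mutable `min`/`max` fields.
for column in model.schema.columns:
    if column.name == "transaction_amount":  # placeholder column name
        column.min = 0.0       # assumed new lower bound
        column.max = 50_000.0  # assumed new upper bound

# Assumption: model.update() persists schema changes back to Fiddler.
model.update()
```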

Editing categorical variables

Access the Schema tab

  1. Navigate to the Model Page of your desired model

  2. Select the Schema tab

Edit categorical column

  1. Locate the categorical column you want to modify

  2. Select the edit icon (✏️) next to the column name

  3. Add or remove categories as needed

  4. Select Update to save your changes

Impact of changes

  • For both data drift and data integrity metrics:

    • Changes only apply to new data going forward

    • Historical data remains unchanged
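
Conceptually, the UI edit above replaces the column's list of allowed categories. Below is a hedged sketch of the same idea with the Python client, again reusing the `model` object from the first snippet; the column name, the category values, and persistence via `model.update()` are assumptions.

```python
# Hypothetical programmatic equivalent of the category edit above.
# Assumes categorical columns expose a mutable `categories` list.
for column in model.schema.columns:
    if column.name == "payment_method":  # placeholder column name
        # Add a newly observed category ("wallet") and drop an unused one;
        # the values here are purely illustrative.
        column.categories = ["credit", "debit", "wallet"]

# Assumption: model.update() persists schema changes back to Fiddler.
model.update()
```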

Adding metadata columns

Access the Schema tab

  1. Navigate to the Model Page of your desired model

  2. Select the Schema tab

Add a metadata column

  1. Select Add Metadata

  2. Provide the required information:

    • Column Name: Specify the name of the new metadata column

    • Data Type: Choose a data type (integer, float, string, or boolean)

    • Range: For numeric types, define minimum and maximum values

  3. Select Add to save

Impact of changes

  • New metadata columns are effective immediately for new data
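
As a rough programmatic analogue, adding a metadata column means appending a new column to the schema and registering it as metadata in the model spec. The sketch below assumes the Python client's `fdl.Column` and `fdl.DataType` types, and that appending to `model.schema.columns` / `model.spec.metadata` followed by `model.update()` is a supported way to persist the change; the column name is a placeholder.

```python
# Hypothetical programmatic equivalent of adding a metadata column in the UI.
new_column = fdl.Column(
    name="customer_segment",        # placeholder column name
    data_type=fdl.DataType.STRING,  # integer, float, string, or boolean
)

# Assumptions: new columns can be appended to the schema, registered as
# metadata in the model spec, and persisted with model.update().
model.schema.columns.append(new_column)
model.spec.metadata.append(new_column.name)
model.update()
```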

Best Practices

  • Analyze production data to set realistic range values and identify useful metadata columns

  • Monitor metrics after adjustments to ensure changes effectively address your needs

  • Use annotations to maintain a clear, transparent history of schema changes

Frequently Asked Questions

Can I change column names or data types?

No, changing column names or data types is not supported.

What if I make a mistake?

You can edit the values again and save the updated schema.

How long do changes take to apply?

Processing time depends on dataset size and complexity. For example, recalculating over 10 million rows spanning six months of data takes approximately 12 minutes.

Can I delete a metadata column?

No, metadata columns cannot be deleted once added.

What happens if I add a category that doesn't exist in the data?

The category will appear in the model's schema but won't affect existing calculations.
