Global Explainability: Visualize Feature Impact and Importance in Fiddler

Fiddler provides powerful visualizations to describe the impact of features in your model. Feature impact and importance can be found in the Explain tab.

Global explanations are available in the UI for structured (tabular) and natural language (NLP) models, for both classification and regression. They are also supported via API using the Fiddler Python package. Global explanations are available for both production and dataset queries.

Tabular Models

For tabular models, Fiddler’s Global Explanation tool shows the impact/importance of the features in the model.

Two global explanation methods are available (see the Python client sketch at the end of this section for retrieving both programmatically):

  • Feature importance — Gives the average change in loss when a feature is randomly ablated.

  • Feature impact — Gives the average absolute change in the model prediction when a feature is randomly ablated (removed).

    • Custom Feature Impact

      • Overview

        • The Custom Feature Impact feature lets you provide your own feature impact scores for your models, using domain-specific knowledge or external data to inform feature importance. You can upload custom feature impact data without requiring the corresponding model artifact.

      • How to Use

        • Prepare Your Data

          • Gather feature names and corresponding impact scores for your model.

          • Ensure impact scores are numeric values; negative values indicate inverse relationships.

        • Upload Feature Impact Data

          • Use the provided API endpoint to upload your data.

          • Required parameters:

            • Model UUID: Unique identifier of your model.

            • Feature Names: List of feature names.

            • Impact Scores: List of corresponding impact scores.

        • View Updated Model Information

          • After successful upload, updated feature impact data will be reflected in:

            • Model details page

            • Charts page

            • Explain page

          • Visualize feature impact scores in charts and explanations.

      • Important Notes

        • Error handling: the API returns detailed error messages to help you resolve issues.

        • Update existing feature impact data by uploading new data for the same model.

        • If you upload feature impact data for a model with an existing artifact, the artifact will be updated.

        • If feature impact data is missing, a tooltip or message may be displayed; upload the data manually or compute it using other tools.

      • Methods:

        • upload_feature_impact: Accepts a dictionary of feature impact (key-value pairs of features and their impact) and an update flag (True or False).

      • Parameters:

        • feature_impact_map: Dictionary (key-value pairs of features and their impact)

        • update: Boolean (True or False)

      • Sample Usage:

        import fiddler as fdl

        # Assumes a connection has already been initialized with fdl.init(url=..., token=...)
        PROJECT_NAME = 'YOUR_PROJECT_NAME'
        MODEL_NAME = 'YOUR_MODEL_NAME'
        FEATURE_IMPACT_MAP = {'feature_1': 0.1, 'feature_2': 0.4}

        # Look up the model and upload the custom impact scores.
        # Set update=True to overwrite feature impact data previously uploaded for this model.
        project = fdl.Project.from_name(name=PROJECT_NAME)
        model = fdl.Model.from_name(name=MODEL_NAME, project_id=project.id)
        feature_impacts = model.upload_feature_impact(feature_impact_map=FEATURE_IMPACT_MAP, update=False)
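
Beyond uploading custom scores, the feature impact and importance values described above can also be retrieved programmatically with the Fiddler Python client. The following is a minimal sketch, assuming the get_feature_impact and get_feature_importance methods on fdl.Model and a fdl.DatasetDataSource built from a baseline dataset; the data-source construction and parameter names are illustrative assumptions, so confirm the exact signatures in the Python Client API Reference.

        import fiddler as fdl

        # Assumes fdl.init(url=..., token=...) has already been called.
        project = fdl.Project.from_name(name='YOUR_PROJECT_NAME')
        model = fdl.Model.from_name(name='YOUR_MODEL_NAME', project_id=project.id)

        # Reference data to ablate against -- here, a pre-production (baseline) dataset.
        # (The DatasetDataSource arguments shown are an assumption; check the API reference.)
        dataset = fdl.Dataset.from_name(name='YOUR_DATASET_NAME', model_id=model.id)
        data_source = fdl.DatasetDataSource(
            env_type=fdl.EnvType.PRE_PRODUCTION,
            env_id=dataset.id,
        )

        # Feature impact: average absolute change in the prediction when a feature is ablated.
        feature_impact = model.get_feature_impact(data_source=data_source)

        # Feature importance: average change in loss when a feature is ablated.
        feature_importance = model.get_feature_importance(data_source=data_source)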

Language (NLP) Models

For language models, Fiddler’s Global Explanation performs ablation feature impact on a collection of text samples, determining which words have the most impact on the prediction.

📘 Info


For performance, Fiddler uses a random corpus of 200 documents from the dataset. When using the get_feature_importance function from the Fiddler API client, the num_refs argument can be changed to use a larger corpus of texts.
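
As a hedged illustration of this note, assuming the num_refs argument is accepted by the get_feature_importance call shown in the tabular sketch above (confirm the exact signature in the Python Client API Reference), widening the text corpus might look like this:

        # Continuing from the tabular sketch: project, model, and data_source already defined.
        # num_refs is assumed to control the size of the random text corpus (default 200).
        feature_importance = model.get_feature_importance(
            data_source=data_source,
            num_refs=500,
        )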

