Integrate Fiddler with Databricks for Model Monitoring and Explainability

Fiddler allows your team to monitor, explain, and analyze models developed and deployed in your Databricks Workspace by integrating with MLflow for model asset management and by utilizing the Databricks Spark environment for data management.

To validate and monitor models built on Databricks using Fiddler, you can follow these steps:

  1. Create a Fiddler project

  2. Create a Fiddler model using sample data or model information from MLflow

  3. Publish production data live (streaming) or in batches

Prerequisites

This guide assumes you have:

  • A Databricks account and valid credentials

  • A Fiddler environment with an account and valid credentials

  • Knowledge of how to connect and use the Fiddler Python client

Begin with a Databricks Notebook

Launch a Databricks notebook from your workspace and run the following code:

!pip install -q fiddler-client
import fiddler as fdl

Now that you have the Fiddler library installed, you can connect to your Fiddler environment. You will need your authentication token from the Credentials tab in Application Settings.

URL = ""  # The URL of your Fiddler environment
AUTH_TOKEN = ""  # Your authentication token from the Credentials tab in Application Settings
fdl.init(url=URL, token=AUTH_TOKEN)

Finally, you can set up a new project using:

# The project.id is required when creating models
project = fdl.Project(name='YOUR_PROJECT_NAME')
project.create()

Creating the Fiddler Model

Quickest Option: Let Fiddler Automate Model Creation

The quickest way to onboard a Fiddler model is to provide a sample of data from which Fiddler can infer your model's schema and metadata. Ideally, this is baseline, testing, or training data that is representative of your model schema. You can download this data from a delta table and share it with Fiddler as a baseline dataset:

sample_dataset = spark.read.table("YOUR_DATASET").select("*").toPandas()

Now that you have sample data, you can create a Fiddler model as documented and demonstrated in our Simple Monitoring Quick Start Guide. A rough outline of the steps follows:
# Define a ModelSpec which tells Fiddler what role each column 
# in your model schema serves.
model_spec = fdl.ModelSpec(
  inputs=['feature_input_column', ...],
  outputs=['output_column'],
  targets=['label_column'],
  metadata=['id_column', 'data_segment_column', ...],
)

# Identify the task your ML model performs as Fiddler will use this
# to generate the performance metrics appropriate to the task. 
# ModelTask.NOT_SET is also an option if performance metrics are not needed.
model_task = fdl.ModelTask.BINARY_CLASSIFICATION
task_params = fdl.ModelTaskParams(target_class_order=['no', 'yes'])

# Use Model.from_data() to define your model's ModelSchema automatically by
# passing the sample_dataset in the source parameter for schema inference.
model = fdl.Model.from_data(
    name='name_for_display_in_Fiddler',
    project_id=project.id,
    source=sample_dataset,
    spec=model_spec,
    task=model_task,
    task_params=task_params,
    event_id_col='your_unique_event_id_column',
    event_ts_col='event_timestamp_column'
)
# Create the model in Fiddler
model.create()
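
Optionally, the same sample used for schema inference can also be published to Fiddler as a pre-production baseline dataset. The following is a minimal sketch, assuming the model has been created as above; 'databricks_baseline' is a placeholder dataset name:

# A minimal sketch: publish the sample as a pre-production (baseline) dataset.
# 'databricks_baseline' is a placeholder dataset name.
baseline_job = model.publish(
    source=sample_dataset,
    environment=fdl.EnvType.PRE_PRODUCTION,
    dataset_name='databricks_baseline',
)
print(f'Initiated baseline upload with Job ID = {baseline_job.id}')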

Option: Using the MLflow Model Registry

Another option is to manually construct your model's schema from the details contained in the MLflow registry. Using the MLflow API, you can query the model registry and get the model signature, which describes the inputs and outputs as a dictionary. You can use this dictionary to build out the ModelSpec, ModelSchema, and Model objects that define the tabular schema of your model.

import mlflow
from mlflow.tracking import MlflowClient

# Initialize the MLflow client
client = MlflowClient()

# Get the model URI
# model_name and model_version identify your registered model in the MLflow Model Registry
model_version_info = client.get_model_version(model_name, model_version)
model_uri = client.get_model_version_download_uri(model_name, model_version_info.version)

# Get the model signature
mlflow_model_info = mlflow.models.get_model_info(model_uri)
model_inputs_schema = mlflow_model_info.signature.inputs.to_dict()
model_inputs = [sub['name'] for sub in model_inputs_schema]

Refer to this example notebook in GitHub, which demonstrates manually defining your Fiddler model's schema.
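
As an illustration only (a sketch, not the example notebook's exact code), the input column names pulled from the MLflow signature can be fed into the same Model.from_data() flow shown above. The output, target, and metadata column names below are placeholders you would replace with your own:

# A sketch: reuse the MLflow signature's input names when defining the ModelSpec.
# 'predicted_score', 'label_column', and 'id_column' are placeholder column names.
model_spec = fdl.ModelSpec(
    inputs=model_inputs,
    outputs=['predicted_score'],
    targets=['label_column'],
    metadata=['id_column'],
)

# A representative sample is still needed for data type and range inference.
sample_dataset = spark.read.table("YOUR_DATASET").limit(10000).toPandas()

model = fdl.Model.from_data(
    name='model_from_mlflow_registry',
    project_id=project.id,
    source=sample_dataset,
    spec=model_spec,
    task=fdl.ModelTask.BINARY_CLASSIFICATION,
)
model.create()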

Publishing Events

Now you can publish all the events from your models. You can do this in two ways:

Batch Models

If your models run as batch processes, or you aggregate model outputs over a time frame, you can use the Databricks table change feed to select only the new events and send them to Fiddler:

import fiddler as fdl
from pyspark.sql import SparkSession

# Get the active Spark session
spark = SparkSession.builder.getOrCreate()

# Read only the rows added between the last processed table version
# (last_version) and the latest version (new_version) of the Delta table
changes_df = (
    spark.read.format("delta")
    .option("readChangeFeed", "true")
    .option("startingVersion", last_version)
    .option("endingVersion", new_version)
    .table("inferences")
    .toPandas()
)

# Assumes an initialized Python client session and instantiated Model
job = model.publish(
    source=changes_df,
    environment=fdl.EnvType.PRODUCTION,
)
print(f'Initiated Production dataset upload with Job ID = {job.id}')
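
The snippet above assumes you track last_version and new_version yourself. As a minimal sketch (not part of the Fiddler integration itself), assuming the inferences Delta table has change data feed enabled, the latest version can be read from the table history, while the last processed version is something you persist between runs:

# A minimal sketch, assuming the 'inferences' Delta table has change data feed enabled.
# load_last_processed_version / save_last_processed_version are hypothetical helpers
# you would implement, for example backed by a small bookkeeping table.
history_df = spark.sql("DESCRIBE HISTORY inferences")
new_version = history_df.selectExpr("max(version) as latest_version").collect()[0]["latest_version"]

last_version = load_last_processed_version()

# ... read the change feed and publish changes_df to Fiddler as shown above ...

save_last_processed_version(new_version)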

Live Models

For models with live predictions or real-time applications, you can add the following code snippet to your prediction pipeline and send every event to Fiddler in real time:

import json

# Convert your model's output (a Spark DataFrame) into a list of event dictionaries
example_event = model_output.toJSON().map(lambda x: json.loads(x)).collect()

# Assumes an initialized Python client session and instantiated Model
event_ids = model.publish(
    source=example_event,
    environment=fdl.EnvType.PRODUCTION,
)
print(f'Published event IDs: {event_ids}')

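If your live predictions are produced by a Spark Structured Streaming job, one way to wire this up (a sketch under assumptions, not Fiddler-specific guidance) is to publish each micro-batch from the stream. Here, predictions_stream is a hypothetical streaming DataFrame that already contains the model's inputs and outputs:

# A sketch: publish every micro-batch of a Structured Streaming query to Fiddler.
# 'predictions_stream' is a hypothetical streaming DataFrame of scored events.
def publish_to_fiddler(batch_df, batch_id):
    events = batch_df.toPandas()
    if not events.empty:
        model.publish(source=events, environment=fdl.EnvType.PRODUCTION)

query = (
    predictions_stream.writeStream
    .outputMode("append")
    .foreachBatch(publish_to_fiddler)
    .start()
)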