Integrate Fiddler with Databricks for Model Monitoring and Explainability
Fiddler lets your team monitor, explain, and analyze models developed and deployed in your Databricks Workspace by integrating with MLflow for model asset management and using the Databricks Spark environment for data management.
To validate and monitor models built on Databricks using Fiddler, you can follow these steps:
Prerequisites
This guide assumes you have:
A Databricks account and valid credentials
A Fiddler environment with an account and valid credentials
Knowledge of how to connect to and use the Fiddler Python client
Begin with a Databricks Notebook
Launch a Databricks notebook from your workspace and run the following code:
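For example, you can install the Fiddler Python client directly in the notebook's environment (a minimal sketch; the `%pip` magic is the standard Databricks notebook mechanism for per-notebook installs):

```python
# Install the Fiddler Python client into this notebook's environment
%pip install -q fiddler-client
```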
Now that you have the Fiddler library installed, you can connect to your Fiddler environment. You will need your authentication token from the Credentials tab in Application Settings.
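A connection sketch might look like the following, assuming the 3.x Fiddler Python client; the URL and token values are placeholders you would replace with your own:

```python
import fiddler as fdl

# Authenticate against your Fiddler environment.
# The URL and token below are placeholders -- use your own
# deployment URL and the token from Application Settings > Credentials.
fdl.init(
    url="https://your_company.fiddler.ai",
    token="YOUR_AUTH_TOKEN",
)
```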
Finally, you can set up a new project using:
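A sketch of project creation, assuming the 3.x client's object-oriented API; the project name is a placeholder:

```python
# Create a project to hold the model and its datasets
# ("databricks_demo" is a hypothetical name)
project = fdl.Project(name="databricks_demo")
project.create()
```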
Creating the Fiddler Model
Quickest Option: Let Fiddler Automate Model Creation
The quickest way to onboard a Fiddler model is to get a sample of data from which Fiddler can infer model schema and metadata. Ideally you will have baseline, testing, or training data that is representative of your model schema. Fiddler can infer your model schema from this sample dataset. You can download baseline or training data from a delta table and share it with Fiddler as a baseline dataset:
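One way to pull that sample is to read the Delta table into a pandas DataFrame from the notebook (a sketch; the table name is hypothetical, and `spark` is the session Databricks provides in every notebook):

```python
# Read baseline/training data from a Delta table and take a
# manageable sample for schema inference in Fiddler
baseline_df = (
    spark.read.table("ml.training.loan_applications")  # hypothetical table
    .limit(10_000)   # a sample is enough for schema inference
    .toPandas()
)
```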
Now that you have sample data, you can easily create a Fiddler model as documented here and demonstrated in our Simple Monitoring Quick Start Guide. A rough outline of the steps follows:
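The outline above can be sketched in code, assuming the 3.x client and hypothetical column names that would come from your own baseline DataFrame:

```python
# Describe which columns play which roles (names are hypothetical)
model_spec = fdl.ModelSpec(
    inputs=["loan_amount", "income", "credit_score"],
    outputs=["predicted_default_prob"],
    targets=["defaulted"],
)

# Let Fiddler infer the full schema (types, ranges) from the sample,
# then register the model in the project created earlier
model = fdl.Model.from_data(
    source=baseline_df,
    name="loan_default_model",
    project_id=project.id,
    spec=model_spec,
    task=fdl.ModelTask.BINARY_CLASSIFICATION,
)
model.create()
```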
Option: Using the MLflow Model Registry
Another option is to manually construct your model's schema from the details contained in the MLflow Model Registry. Using the MLflow API, you can query the registry for the model signature, which describes the model's inputs and outputs as a dictionary. You can use this dictionary to build the Model, ModelSchema, and ModelSpec objects that define the tabular schema of your model.
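A sketch of that flow, assuming MLflow 2.x and a hypothetical registered model name and version:

```python
import json

import mlflow
import fiddler as fdl

# Fetch the signature of a registered model
# ("loan_default_model" version "1" is a placeholder URI)
info = mlflow.models.get_model_info("models:/loan_default_model/1")
sig = info.signature.to_dict()  # {"inputs": <json>, "outputs": <json>}

# The signature serializes each side as a JSON list of column specs
input_cols = [c["name"] for c in json.loads(sig["inputs"])]
output_cols = [c["name"] for c in json.loads(sig["outputs"])]

# Use the recovered column names to build the Fiddler spec
spec = fdl.ModelSpec(inputs=input_cols, outputs=output_cols)
```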
Refer to this example notebook on GitHub, which demonstrates manually defining your Fiddler model's schema.
Publishing Events
Now you can publish all the events from your models. You can do this in two ways:
Batch Models
If your models run as batch processes, or you aggregate model outputs over a time frame, you can use the Delta table change feed from Databricks to select only the new events and send them to Fiddler:
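A batch-publishing sketch using the Delta change data feed, assuming the 3.x client; the table name is hypothetical, and `last_published_version` is a placeholder for a watermark your job would track between runs:

```python
# Read only rows inserted since the last run via the change data feed,
# drop the feed's metadata columns, and publish the batch to Fiddler
events_df = (
    spark.read.format("delta")
    .option("readChangeFeed", "true")
    .option("startingVersion", last_published_version + 1)
    .table("ml.inference.loan_predictions")  # hypothetical table
    .filter("_change_type = 'insert'")
    .drop("_change_type", "_commit_version", "_commit_timestamp")
    .toPandas()
)

model.publish(source=events_df)
```

Change data feed must be enabled on the source table (`delta.enableChangeDataFeed = true`) for this read to work.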
Live Models
For models with live predictions or real-time applications, you can add the following code snippet to your prediction pipeline and send every event to Fiddler in real-time:
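A streaming sketch, assuming the 3.x client; the event fields are hypothetical and would mirror your model's inputs and outputs:

```python
# Build one event per prediction and send it to Fiddler immediately.
# All field names/values here are placeholders for your own pipeline's data.
event = {
    "event_id": "abc-123",           # unique ID, useful for later label updates
    "loan_amount": 25_000,           # model inputs
    "income": 85_000,
    "credit_score": 710,
    "predicted_default_prob": 0.12,  # model output
}

# Publishing a list of dicts sends the events in streaming mode
model.publish(source=[event])
```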