MLflow Integration
Fiddler allows your team to onboard, monitor, explain, and analyze your models developed with MLflow.
This guide shows you how to ingest the model metadata and artifacts stored in your MLflow model registry and use them to set up model observability in the Fiddler Platform:
Exporting Model Metadata from MLflow to Fiddler
Uploading Model Artifacts to Fiddler for XAI
Onboarding a Model
Refer to this section of the Databricks integration guide for onboarding your model to Fiddler using model information from MLflow.
Uploading Model Artifacts
Using the MLflow API, you can query the model registry and retrieve the model signature, which describes the model's inputs and outputs as a dictionary.
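As an illustration, the sketch below uses the MLflow client to look up a registered model version and print its signature; the model name "my_model" and the "Production" stage are hypothetical placeholders.

```python
import mlflow
from mlflow import MlflowClient

client = MlflowClient()

# Look up the latest registered version of the model in the given stage.
model_version = client.get_latest_versions("my_model", stages=["Production"])[0]
model_uri = f"models:/{model_version.name}/{model_version.version}"

# Retrieve the model signature, which describes the model's inputs and outputs.
model_info = mlflow.models.get_model_info(model_uri)
signature = model_info.signature

# The signature can be serialized to a dictionary of input/output schemas.
print(signature.to_dict())
```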
Uploading Model Files
Sharing your model artifacts enables Fiddler to generate high-fidelity explanations for your models. Using the MLflow API, you can download these model files:
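For example, here is a minimal sketch that downloads the files for a registered model version to a local directory; the model name "my_model", version "1", and destination path are placeholders.

```python
import mlflow

# Download all artifacts logged with the registered model version.
local_dir = mlflow.artifacts.download_artifacts(
    artifact_uri="models:/my_model/1",
    dst_path="model_artifacts/",
)
print(f"Model files downloaded to: {local_dir}")
```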
Once you have downloaded the model files, create a package.py file in the model directory that tells Fiddler how to load and run your model.
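The sketch below illustrates one possible package.py, assuming the MLflow model files were downloaded into a model/ subdirectory next to it and that Fiddler calls a get_model() entry point returning an object with a predict() method; the output column name is hypothetical, and the exact interface Fiddler expects is covered in the Explainability guide.

```python
# package.py
from pathlib import Path

import mlflow.pyfunc
import pandas as pd

MODEL_DIR = Path(__file__).parent


class ModelPackage:
    def __init__(self):
        # Load the MLflow model files that live alongside this file.
        self.model = mlflow.pyfunc.load_model(str(MODEL_DIR / "model"))

    def predict(self, input_df: pd.DataFrame) -> pd.DataFrame:
        # Return predictions as a DataFrame whose column matches the model
        # output declared when onboarding the model to Fiddler.
        predictions = self.model.predict(input_df)
        return pd.DataFrame({"predicted_value": predictions})


def get_model():
    return ModelPackage()
```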
Finally, you can upload all the model artifacts to Fiddler:
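A minimal sketch using the Fiddler Python client (fiddler-client 3.x); the URL, token, project, and model names are placeholders, and method names may differ in other client versions.

```python
import fiddler as fdl

fdl.init(url="https://your_company.fiddler.ai", token="YOUR_ACCESS_TOKEN")

# Look up the project and model that were onboarded earlier.
project = fdl.Project.from_name(name="mlflow_models")
model = fdl.Model.from_name(name="my_model", project_id=project.id)

# Upload the directory containing the downloaded model files and package.py.
model.add_artifact(model_dir="model_artifacts/")
```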
Alternatively, you can skip uploading your model artifacts and have Fiddler generate a surrogate model, which provides low-fidelity explanations for your model.
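As a sketch of that alternative, assuming a baseline dataset has already been published to Fiddler and using fiddler-client 3.x names, which may vary by client version:

```python
import fiddler as fdl

project = fdl.Project.from_name(name="mlflow_models")
model = fdl.Model.from_name(name="my_model", project_id=project.id)
dataset = fdl.Dataset.from_name(name="baseline_dataset", model_id=model.id)

# Ask Fiddler to build a surrogate model from the baseline data instead of
# uploading your own model artifacts.
model.add_surrogate(dataset_id=dataset.id)
```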
Please refer to the Explainability guide for detailed information on model artifacts, packages, and surrogate models.