Fiddler Monitoring helps you identify issues with the performance of your ML models after deployment. Fiddler Monitoring has five Metric Types which can be monitored and alerted on:
- Data Drift
- Data Integrity
Integrating Fiddler monitoring is a four-step process:

1. **Upload a baseline dataset.** Fiddler needs a dataset to use as a baseline for monitoring. A dataset can be uploaded to Fiddler using our UI or the Python package. For more information, see:
2. **Onboard your model.** Fiddler needs some specifications about your model in order to help you troubleshoot production issues. Fiddler supports a wide variety of model formats. For more information, see:
3. **Configure monitoring for this model.** You will need to configure bins and alerts for your model. These are discussed in detail below.
4. **Send traffic from your live deployed model to Fiddler.** Use the Fiddler SDK to send us traffic from your live deployed model.
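Every step above goes through an authenticated SDK client. The sketch below only collects the connection settings; the URL, org ID, and token are placeholders, and the client constructor shown in the comment is an assumption that may differ by SDK version, so check the Fiddler SDK reference before using it.

```python
# Connection settings -- all values are placeholders for illustration.
config = {
    "url": "https://your-org.fiddler.ai",
    "org_id": "your_org",
    "auth_token": "YOUR_API_TOKEN",
}

# Hypothetical client setup -- verify the constructor name and parameters
# against the Fiddler SDK version you have installed:
# import fiddler as fdl
# client = fdl.FiddlerApi(url=config["url"], org_id=config["org_id"],
#                         auth_token=config["auth_token"])
```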
To send traffic to Fiddler, use the publish_event API from the Fiddler SDK. The publish_event API can be called in real time, immediately after your model inference.
An event can contain the following:
- Decisions (categorical only)
These aspects of an event can be monitored on the platform.
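As a sketch of the real-time path, the snippet below assembles an event payload from a single inference (inputs, output, and a categorical decision) and shows where the publish_event call would go. The project and model names, field names, and publish_event parameters are illustrative assumptions; consult the Fiddler SDK reference for the exact signature in your version.

```python
import time

def build_event(features, prediction, decision=None):
    """Assemble a Fiddler event payload from one model inference."""
    event = dict(features)                 # model inputs, keyed by feature name
    event["predicted_score"] = prediction  # model output column
    if decision is not None:
        event["decision"] = decision       # categorical decision
    return event

# Build the payload right after inference.
event = build_event(
    features={"age": 42, "balance": 1200.50},
    prediction=0.87,
    decision="approved",
)
event_timestamp = int(time.time() * 1000)  # epoch milliseconds, now

# Hypothetical publish call -- names and parameters are illustrative:
# client.publish_event(project_id="bank_churn", model_id="churn_model",
#                      event=event, event_timestamp=event_timestamp)
```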
You can also publish events after the fact as part of a batch call using the publish_events_batch API. In this case, you will need to send Fiddler the original event timestamps so that the time-series charts are populated accurately.
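A batch backfill might look like the sketch below: each historical event carries its original timestamp so the charts reflect when the inference actually happened. The list-of-dicts shape, field names, and the commented publish_events_batch parameters are assumptions; check the SDK reference for the accepted batch formats in your version.

```python
from datetime import datetime, timezone

def to_epoch_ms(ts: str) -> int:
    """Convert an ISO-8601 timestamp (assumed UTC) to epoch milliseconds."""
    dt = datetime.fromisoformat(ts).replace(tzinfo=timezone.utc)
    return int(dt.timestamp() * 1000)

# Historical events, each with its ORIGINAL inference timestamp.
events = [
    {"age": 42, "balance": 1200.50, "predicted_score": 0.87,
     "event_ts": to_epoch_ms("2023-05-01T10:15:00")},
    {"age": 35, "balance": 300.00, "predicted_score": 0.12,
     "event_ts": to_epoch_ms("2023-05-01T11:40:00")},
]

# Hypothetical batch publish -- parameter names are illustrative:
# client.publish_events_batch(project_id="bank_churn", model_id="churn_model",
#                             batch_source=events, timestamp_field="event_ts")
```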
Fiddler supports partial updates of events for your target column. This is useful when you don't have access to the ground truth for your model at the time the prediction is made. Other columns can only be sent at insertion time (with publish_event).
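The partial-update flow can be sketched as two payloads joined by a stable event ID: the full event at inference time, and later a small update carrying only the ID and the target value. The update_event flag and id_field parameter shown in the comment are assumptions about the SDK; verify how your version marks an event as an update.

```python
# At inference time: publish the full event with a stable id so the
# ground truth can be attached to it later.
inference_event = {
    "event_id": "txn_00123",   # stable key used to join the later update
    "age": 42,
    "predicted_score": 0.87,
}

# Days later, when ground truth arrives: send only the id and the target.
label_update = {
    "event_id": "txn_00123",
    "churned": True,           # target column, now known
}

# Hypothetical update call -- flag and parameter names are assumptions:
# client.publish_event(project_id="bank_churn", model_id="churn_model",
#                      event=label_update, update_event=True,
#                      id_field="event_id")
```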
- See our article on The Rise of MLOps Monitoring
[^1]: Join our community Slack to ask any questions