Monitoring
Fiddler Monitoring helps you identify issues with the performance of your ML models after deployment. Fiddler Monitoring has five main features:
- Data Drift
- Performance
- Data Integrity
- Service Metrics
- Alerts
Integrate with Fiddler Monitoring
Integrating Fiddler Monitoring is a four-step process:
- Upload a dataset: Fiddler needs a dataset to use as a baseline for monitoring. A dataset can be uploaded to Fiddler using our UI or our Python package.
- Onboard your model: Fiddler needs some specifications about your model in order to help you troubleshoot production issues. Fiddler supports a wide variety of model formats.
- Configure monitoring for the model: You will need to configure bins and alerts for your model. These are discussed in detail below.
- Send traffic from your live deployed model to Fiddler: Use the Fiddler SDK to send us traffic from your live, deployed model.
Publish events to Fiddler
In order to send traffic to Fiddler, use the publish_event API from the Fiddler SDK. The publish_event API can be called in real time, right after your model inference.
An event can contain the following:
- Inputs
- Outputs
- Target
- Decisions (categorical only)
- Metadata
These aspects of an event can be monitored on the platform.
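As a rough sketch, a single call might look like the following. The client constructor and the keyword names shown in the comments (url, org_id, project_id, model_id) are illustrative assumptions, not the exact SDK signature; consult the Fiddler SDK reference for the real call shape.

```python
import time

# Hypothetical sketch of publishing one inference event to Fiddler.
# The fiddler-client calls shown in the trailing comments are assumptions
# for illustration; check the SDK reference for exact parameter names.

def build_event(inputs, outputs, target=None, decisions=None, metadata=None):
    """Flatten the parts of an event into the single dict publish_event expects."""
    event = {**inputs, **outputs}
    for part in (target, decisions, metadata):
        if part:
            event.update(part)
    return event

event = build_event(
    inputs={"age": 42, "balance": 1250.0},            # model inputs
    outputs={"probability_churn": 0.83},              # model outputs
    decisions={"decision": "churn"},                  # categorical decision
    metadata={"event_time": int(time.time() * 1000)}, # optional metadata
)

# client = fiddler.FiddlerApi(url=URL, org_id=ORG_ID, auth_token=TOKEN)
# client.publish_event(project_id="my_project", model_id="my_model", event=event)
```

Calling this immediately after inference keeps the monitoring charts close to real time.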
Info
You can also publish events after the fact as part of a batch call, using the publish_events_batch API. In this case, you will need to send Fiddler the original event timestamps so that the time series charts are populated accurately.
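A minimal sketch of replaying historical traffic as a batch is shown below. The point is that each row carries its own original timestamp rather than the upload time; the publish_events_batch parameters in the comment are assumptions for illustration.

```python
import time

# Hypothetical sketch: replay two historical inferences as one batch.
# Each event keeps its ORIGINAL timestamp so Fiddler can place it
# correctly on the time-series charts.

now_ms = int(time.time() * 1000)
hour_ms = 60 * 60 * 1000

batch = [
    {"age": 42, "probability_churn": 0.83, "timestamp": now_ms - 2 * hour_ms},
    {"age": 31, "probability_churn": 0.12, "timestamp": now_ms - 1 * hour_ms},
]

# Every row must carry the time the inference actually happened.
assert all("timestamp" in e for e in batch)

# Illustrative call shape (parameter names are assumptions):
# client.publish_events_batch(project_id="my_project", model_id="my_model",
#                             batch_source=batch, timestamp_field="timestamp")
```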
Updating events
Fiddler supports partial updates of events for your target column. This can be useful when you don't have access to the ground truth for your model at the time the model's prediction is made. Other columns can only be sent at insertion time (with update_event=False).
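The insert-then-update flow can be sketched as follows. The update_event flag comes from the text above; the event_id key and the commented call shapes are illustrative assumptions.

```python
# Hypothetical sketch: insert the prediction now, attach the label later.

# At inference time: the full event (inputs and outputs), inserted as a new row.
insert_event = {"event_id": "txn-1001", "age": 42, "probability_churn": 0.83}
# client.publish_event(..., event=insert_event, update_event=False)

# Later, once ground truth arrives: only the target column, keyed to the
# same event, sent as a partial update.
label_update = {"event_id": "txn-1001", "churned": True}
# client.publish_event(..., event=label_update, update_event=True)

# Only the target may arrive late; input/output columns must be present
# at insertion time.
assert set(label_update) - set(insert_event) == {"churned"}
```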
Reference
- See our article on The Rise of MLOps Monitoring
- Join our community Slack to ask any questions