Monitoring

Fiddler Monitoring helps you identify issues with the performance of your ML models after deployment. Fiddler Monitoring has five main features:

  1. Data Drift
  2. Performance
  3. Data Integrity
  4. Service Metrics
  5. Alerts

Integrate with Fiddler Monitoring

Integrating Fiddler monitoring is a four-step process:

  1. Upload dataset

    Fiddler needs a dataset to serve as a baseline for monitoring. You can upload a dataset through the Fiddler UI or with the Python client.

  2. Onboard model

    Fiddler needs some details about your model in order to help you troubleshoot production issues. Fiddler supports a wide variety of model formats.

  3. Configure monitoring for this model

    You will need to configure bins and alerts for your model. These will be discussed in detail below.

  4. Send traffic from your live deployed model to Fiddler

    Use the Fiddler SDK to send us traffic from your live deployed model.

Publish events to Fiddler

To send traffic to Fiddler, use the publish_event API from the Fiddler SDK. It can be called in real time, immediately after your model inference.

An event can contain the following:

  • Inputs
  • Outputs
  • Target
  • Decisions (categorical only)
  • Metadata

These aspects of an event can be monitored on the platform.
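As a minimal sketch, a single event is a flat dictionary keyed by column name, covering the inputs, outputs, target, decisions, and metadata listed above. The feature names, project, and model IDs below are placeholder examples for a hypothetical churn classifier, not values from Fiddler itself; consult the SDK reference for exact parameters.

```python
# A sketch of one event for a hypothetical binary "churn" classifier.
# Every column name here is an illustrative placeholder.
event = {
    # Inputs: the feature values the model saw at inference time
    "age": 42,
    "tenure_months": 18,
    # Output: the model's prediction
    "predicted_churn": 0.83,
    # Target: the ground-truth label, if already known at publish time
    "churn": 1,
    # Decision: the categorical action taken on the prediction
    "decision": "send_retention_offer",
    # Metadata: extra context that is not a model input
    "customer_segment": "enterprise",
}

# With an authenticated client, the event would then be published
# right after inference, along the lines of:
# import fiddler as fdl
# client = fdl.FiddlerApi(url=URL, org_id=ORG_ID, auth_token=AUTH_TOKEN)
# client.publish_event(project_id="churn_project",
#                      model_id="churn_model",
#                      event=event)
```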

📘 Info

You can also publish events after the fact as a batch using the publish_events_batch API. In this case, you will need to send Fiddler the original event timestamps so that the time series charts are populated accurately.
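For batch publishing, each event carries the epoch timestamp at which the inference originally happened, so Fiddler can backfill the charts at the right times. This is a sketch under assumed placeholder names; the exact batch formats and parameter names accepted by publish_events_batch are in the SDK reference.

```python
import time

# Hypothetical events captured earlier, each tagged with the epoch
# timestamp (milliseconds) of the original inference.
now_ms = int(time.time() * 1000)
events = [
    {"age": 42, "predicted_churn": 0.83, "timestamp": now_ms - 3_600_000},
    {"age": 27, "predicted_churn": 0.12, "timestamp": now_ms - 1_800_000},
]

# With an authenticated client, the batch call would reference the
# timestamp column so the time series reflect the original event
# times rather than the upload time, e.g.:
# client.publish_events_batch(project_id="churn_project",
#                             model_id="churn_model",
#                             batch_source=events,
#                             timestamp_field="timestamp")
```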

Updating events

Fiddler supports partial updates of events for your target column. This is useful when the ground truth for a prediction is not available at the time the prediction is made. All other columns can only be sent at insertion time (with update_event=False).
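A label update of this kind might look like the sketch below: only the target column is sent, and the update is matched to the original event by its ID. The column name, IDs, and flag usage are illustrative assumptions; check the SDK reference for the exact parameters.

```python
# A sketch of a partial update, assuming the ground-truth label for a
# previously published prediction has just arrived. Only the target
# column is included; other columns are insert-only.
ground_truth_update = {
    "churn": 1,  # target column for the hypothetical churn model
}

# The original insertion would have used update_event=False (insert),
# while this later label update flips it to True, e.g.:
# client.publish_event(project_id="churn_project",
#                      model_id="churn_model",
#                      event=ground_truth_update,
#                      event_id="event_abc123",
#                      update_event=True)
```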
