Monitor Performance

What is being tracked?

  • Decisions - The count of approved and unapproved requests made to the model. A 'decision' is calculated at the source (it is not inferred by Fiddler) and should be published to Fiddler with each event (see the publish_event API for details). For binary classification models, the decision is usually determined by applying a threshold to the model's output; for multi-class classification models, it is usually determined by taking the arg max over the predicted class probabilities. Both derivations are sketched after the metrics list below.

  • Performance metrics

    1. For binary classification models
      • Accuracy
      • True Positive Rate/Recall
      • False Positive Rate
      • Precision
      • F1 Score
      • Log loss
    2. For multi-class classification models
      • Accuracy
      • Log loss
    3. For regression models
      • Coefficient of determination (R-squared)
      • Mean Squared Error
      • Mean Absolute Error
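
Both decision derivations can be illustrated in a few lines. This is a minimal sketch: the 0.5 threshold, the scores, and the class names are illustrative assumptions, not Fiddler defaults.

```python
import numpy as np

# Binary classification: the decision comes from thresholding the score.
score = 0.72                 # model output for one event (illustrative)
THRESHOLD = 0.5              # assumed decision threshold
binary_decision = "approved" if score >= THRESHOLD else "unapproved"

# Multi-class classification: the decision is the arg max class.
class_names = ["approve", "review", "deny"]      # hypothetical classes
probabilities = np.array([0.61, 0.27, 0.12])     # illustrative output
multiclass_decision = class_names[int(np.argmax(probabilities))]

print(binary_decision)      # approved
print(multiclass_decision)  # approve
```

The resulting decision value is then included in the event payload you publish to Fiddler via publish_event.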
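As a reference point, every metric listed above can be computed with scikit-learn on a batch of labeled events. The arrays below are made up for illustration; only the metric calls matter. (There is no dedicated scikit-learn helper for the false positive rate, so it is derived from the confusion matrix.)

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score, confusion_matrix, f1_score, log_loss,
    mean_absolute_error, mean_squared_error, precision_score,
    r2_score, recall_score,
)

# Binary classification metrics (accuracy and log loss apply to
# multi-class models as well).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.6, 0.1, 0.8, 0.3])
y_pred = (y_prob >= 0.5).astype(int)          # assumed 0.5 threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy  :", accuracy_score(y_true, y_pred))
print("recall/TPR:", recall_score(y_true, y_pred))
print("FPR       :", fp / (fp + tn))
print("precision :", precision_score(y_true, y_pred))
print("F1        :", f1_score(y_true, y_pred))
print("log loss  :", log_loss(y_true, y_prob))

# Regression metrics.
y_true_reg = np.array([3.0, 5.5, 2.1, 7.8])
y_pred_reg = np.array([2.8, 5.0, 2.5, 8.1])
print("R-squared :", r2_score(y_true_reg, y_pred_reg))
print("MSE       :", mean_squared_error(y_true_reg, y_pred_reg))
print("MAE       :", mean_absolute_error(y_true_reg, y_pred_reg))
```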

Why is it being tracked?

  • Model performance tells us how well a model is doing on its task
  • A poorly performing model has business implications
  • The volume of decisions made on the basis of the predictions gives visibility into the business impact of the model

What steps should I take based on this information?

  • For decisions, if there is an increase or decrease in approvals, you can cross-check against the average prediction and prediction drift trendlines on the Drift tab (this cross-check is sketched after this list). In general, the average prediction value should increase with an increase in the number of approvals, and vice versa.
  • For changes in model performance, the best way to cross-verify the results is again to check the Drift tab. The Outliers tab may also have relevant information, particularly if the volume of outliers has changed significantly. Once you have confirmed that the performance issue is not due to the data, you need to assess whether the change in performance is due to temporary factors or to longer-lasting issues.
  • You can check whether there are any lightweight changes you can make to help recover performance; for example, you could try modifying the decision threshold (see the threshold sweep sketched after this list).
  • Retraining the model with the latest data and redeploying it is usually the solution that yields the best results, although it may be time-consuming and expensive.
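
The decision cross-check described in the first item can be reproduced offline. This is a sketch under assumed column names ('timestamp', 'prediction', 'decision'); your event schema may differ.

```python
import pandas as pd

# Illustrative event log; in practice this would span weeks of traffic.
events = pd.DataFrame({
    "timestamp": pd.to_datetime(
        ["2023-05-01", "2023-05-01", "2023-05-02", "2023-05-02"]),
    "prediction": [0.80, 0.40, 0.30, 0.20],
    "decision": ["approved", "unapproved", "unapproved", "unapproved"],
})

# Daily approval counts next to the daily average prediction; the two
# trendlines should generally move in the same direction.
daily = events.groupby(events["timestamp"].dt.date).agg(
    approvals=("decision", lambda d: (d == "approved").sum()),
    avg_prediction=("prediction", "mean"),
)
print(daily)
```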
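One way to carry out the threshold adjustment mentioned above is to sweep candidate thresholds over recent labeled events and keep the value that maximizes F1. A minimal sketch, assuming illustrative labels and scores:

```python
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # recent ground-truth labels
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.6, 0.1, 0.8, 0.3])

# Sweep thresholds and keep the one with the best F1 score.
thresholds = np.linspace(0.05, 0.95, 19)
scores = [f1_score(y_true, (y_prob >= t).astype(int)) for t in thresholds]
best = thresholds[int(np.argmax(scores))]
print(f"best threshold: {best:.2f} (F1 = {max(scores):.3f})")
```

F1 is one reasonable target here; you could equally optimize for recall or precision depending on the relative cost of each error type.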
