Product Tour

When you first log in to Fiddler, you will land on the Single Pane of Glass product homepage. This view visualizes monitoring information for your models across all your projects. At the top of the page you will see an overview of the number of triggered Performance, Data Drift, and Data Integrity alerts. Beside these donut charts, the Recent Job Status card lets you keep track of long-running async jobs and whether they have failed, are in progress, or have completed successfully. Below this is the Monitoring Summary table, which displays your models across different projects along with information on their traffic, drift, and number of triggered alerts.

You can also go to the Projects page by clicking the Projects tab in the top-level navigation bar. This page lists all the data science projects contained within Fiddler. For more information on these projects, see the Fiddler Samples section below. You can create new projects either within the UI (by clicking the “Add Project” button) or via the Fiddler Client.
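Creating a project from code might look like the following. This is a minimal sketch, not a definitive recipe: it assumes a v1-style Fiddler Python client exposing `list_projects()` and `create_project()`, and the URL, org, and token values are placeholders — check the client documentation for the API of the version you have installed.

```python
# Hypothetical sketch of creating a project via the Fiddler Python client.
# Assumes a v1-style client object with list_projects() / create_project();
# method names and connection details may differ in your client version.

def ensure_project(client, project_id: str = "lending") -> str:
    """Create the project if it does not already exist; return its id."""
    if project_id not in client.list_projects():
        client.create_project(project_id)
    return project_id

if __name__ == "__main__":
    # Placeholder connection values -- replace with your deployment's details.
    # import fiddler as fdl
    # client = fdl.FiddlerApi(url="https://your-org.fiddler.ai",
    #                         org_id="your-org",
    #                         auth_token="YOUR_TOKEN")
    # ensure_project(client, "lending")
    pass
```

The existence check makes the call safe to re-run; creating a project that already exists would otherwise raise an error.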

Projects represent the distinct AI applications or use cases within your organization. Within Fiddler, they house all the models specific to a given application, and thus serve as a jumping-off point for the majority of Fiddler’s model monitoring and explainability features.

Go ahead and click on the Lending project to navigate to the Project Overview page.

Here you can see a list of the models contained within the Lending project, as well as a project dashboard to which various insights can be pinned. Go ahead and click the “logreg-all” model.

From the Model Overview page, you can view details about the model: its metadata (schema), the files in its model directory, and its features, which are sorted by impact (the degree to which each feature influences the model’s prediction score).

You can then navigate to the core monitoring and explainability capabilities within the platform. These include:

  • Monitor — Track and configure alerts on your model’s performance, data drift, data integrity, and overall service metrics. Read the Monitoring documentation for more details.
  • Analyze — Analyze the behavior of your model in aggregate or with respect to specific segments of your population. Read the Analytics documentation for more details.
  • Explain — Generate “point” or prediction-level explanations on your training or production data for insight into how each model decision was made. Read the Explainability documentation for more details.
  • Evaluate — View your model’s performance on its training and test sets for quick validation prior to deployment. Read the Evaluation documentation for more details.

Fiddler Samples

Fiddler Samples is a set of datasets and models that are preloaded into Fiddler. These samples are also available in the Samples git repo. They represent different data types, model frameworks, and machine learning techniques. See the table below for more details.

| Project | Model | Dataset | Model Framework | Algorithm | Model Task | Explanation Algos |
| --- | --- | --- | --- | --- | --- | --- |
| Bank Churn | Bank Churn | Tabular | scikit-learn | Random Forest | Binary Classification | Fiddler Shapley |
| Heart Disease | Heart Disease | Tabular | TensorFlow | | Binary Classification | Fiddler Shapley, IG |
| IMDB | Imdb Rnn | Text | TensorFlow | BiLSTM | Binary Classification | Fiddler Shapley, IG |
| Iris | Iris | Tabular | scikit-learn | Logistic Regression | Multi-class Classification | Fiddler Shapley |
| Lending | Logreg-all | Tabular | scikit-learn | Logistic Regression | Binary Classification | Fiddler Shapley |
| Lending | Logreg-simple | Tabular | scikit-learn | Logistic Regression | Binary Classification | Fiddler Shapley |
| Lending | Xgboost-simple-sagemaker | Tabular | scikit-learn | XGBoost | Binary Classification | Fiddler Shapley |
| Newsgroup | Christianity Atheism Classifier | Text | scikit-learn | Random Forest | Binary Classification | Fiddler Shapley |
| Wine Quality | Linear Model Wine Regressor | Tabular | scikit-learn | Elastic Net | Regression | Fiddler Shapley |
| Wine Quality | DNN Wine Regressor | Tabular | TensorFlow | | Regression | Fiddler Shapley |

See the README for more information.

Join our community Slack to ask any questions.
