Here we will provide a brief overview of Fiddler’s UI. When you first load Fiddler, you land on the Home page, which contains an introductory video to the Fiddler platform and some links to the documentation in case you would like an in-depth look into the platform. It also provides information on recently viewed projects, starred projects, and more.
You can go to the Model Monitoring Summary page from the side tab, which summarizes traffic, drift, data integrity violations, and triggered alerts across your models.
You can go to the Project page from the side tab, which lists all the data science projects contained within your instance of Fiddler. For more information on these projects, see the Fiddler Samples section below. You can create new projects either within the UI, by clicking the “Add Project” button, or via the Fiddler Python Client.
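As a rough sketch, creating a project programmatically with the Fiddler Python Client might look like the following. The URL, organization ID, auth token, and project name are placeholders you would replace with values from your own instance; the exact client API may differ by version.

```python
def create_project(project_id: str):
    """Create a new Fiddler project, equivalent to clicking "Add Project" in the UI.

    All connection values below are placeholders for your own instance.
    """
    import fiddler as fdl  # pip install fiddler-client

    client = fdl.FiddlerApi(
        url="https://your-org.fiddler.ai",  # your Fiddler instance URL (placeholder)
        org_id="your_org",                  # your organization ID (placeholder)
        auth_token="YOUR_AUTH_TOKEN",       # credential from your account settings
    )
    client.create_project(project_id)
    # Listing projects confirms the new project is visible, like the Project page does
    return client.list_projects()
```

The import is deferred into the function so the sketch can be read and loaded without the client installed.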
Projects represent the distinct AI applications or use cases within your organization. Within Fiddler, they house all the models specific to a given application, and thus serve as a jumping-off point to the majority of Fiddler’s model monitoring and explainability features.
Go ahead and click on the Lending project to navigate to its Project Overview.
Here you can see a list of the models contained within the Lending project, as well as a project dashboard to which various insights can be pinned. Go ahead and click the “logreg-all” model.
From the Model Overview page, you can view details about the model: its metadata (schema), the files in its model directory, and its features, sorted by impact (the degree to which each feature influences the model’s prediction score).
You can then navigate to the core monitoring and explainability capabilities within the platform. These include:
- Monitor - Track and configure alerts on your model’s performance, data drift & integrity; analyze outliers; and view overall service metrics. Read the monitoring documentation for details.
- Analyze - Analyze the behavior of your model in aggregate or with respect to specific segments of your population. Read the analytics documentation for details.
- Explain - Generate “point” or prediction-level explanations on your training or production data for insight into how each model decision was made. Read the explainability documentation for details.
- Evaluate - View your model’s performance on its training and test sets for quick validation prior to deployment. Read the evaluation documentation for details.
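The Explain capability above is also reachable from the Python client. The sketch below assumes a v1-style `run_explanation` method and uses the Lending project and `logreg-all` model from this walkthrough; the dataset name and exact method signature are assumptions, so check your client version’s reference before relying on them.

```python
def explain_one_row(client, row_df):
    """Request a point (prediction-level) explanation for a single input row.

    `client` is an already-connected Fiddler client instance, and `row_df`
    is a one-row pandas DataFrame matching the model's input schema.
    Project, model, and dataset identifiers below are assumed names.
    """
    return client.run_explanation(
        project_id="lending",       # project from this walkthrough
        model_id="logreg_all",      # model from this walkthrough
        dataset_id="lending_data",  # baseline dataset (assumed name)
        df=row_df,                  # the single prediction to explain
    )
```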
Fiddler Samples is a set of datasets and models that are preloaded into Fiddler Cloud or Onebox. These samples, also available in the Samples git repo, represent different data types, model frameworks, and machine learning techniques. See the table below for more details.
| Project | Model | Dataset | Model Framework | Algorithm | Model Task | Explanation Algos |
|---|---|---|---|---|---|---|
| Bank Churn | Bank Churn | Tabular | Scikit-learn | Random Forest | Binary Classification | Fiddler Shapley |
| Heart Disease | Heart Disease | Tabular | TensorFlow | | Binary Classification | Fiddler Shapley, IG |
| IMDB | Imdb Rnn | Text | TensorFlow | BiLSTM | Binary Classification | Fiddler Shapley, IG |
| Iris | Iris | Tabular | Scikit-learn | Logistic Regression | Multi-class Classification | Fiddler Shapley |
| Lending | Logreg-all | Tabular | Scikit-learn | Logistic Regression | Binary Classification | Fiddler Shapley |
| Lending | Logreg-simple | Tabular | Scikit-learn | Logistic Regression | Binary Classification | Fiddler Shapley |
| Lending | Xgboost-simple-sagemaker | Tabular | Scikit-learn | XGBoost | Binary Classification | Fiddler Shapley |
| Newsgroup | Christianity Atheism Classifier | Text | Scikit-learn | Random Forest | Binary Classification | Fiddler Shapley |
| Wine Quality | Linear Model Wine Regressor | Tabular | Scikit-learn | Elastic Net | Regression | Fiddler Shapley |
| Wine Quality | DNN Wine Regressor | Tabular | TensorFlow | | Regression | Fiddler Shapley |
See the README for more details.