# Point Explainability

Fiddler provides powerful visualizations that can explain your model's behavior. These explanations can be queried at an individual prediction level in the **Explain** tab, at a model level in the **Analyze** tab or within the monitoring context in the **Monitor** tab.

Explanations are available in the UI for structured (tabular) and natural language (NLP) models. They are also supported via API using the fiddler-client Python package. Explanations are available for both production and baseline queries.

Fiddler’s explanations are interactive — you can change feature inputs and immediately view an updated prediction and explanation. We have productized several popular **explanation methods** to work fast and at scale:

**Fiddler SHAP and SHAP**: game-theory-based methods built on Kernel SHAP implementations. They work for all models because they only require the ability to ask a model for predictions.

**Tree SHAP**: not enabled by default, but available for tree-based models. This is a faster, model-specific method for approximating Shapley values.

**Permutation**: model-agnostic, so it can be applied to many types of predictive algorithms, such as linear regression, random forests, support vector machines, and neural networks. For each feature to explain, we randomly select *n* values for that feature and look at the difference in prediction between the row to explain and the same row with that feature set to the random value. Averaging the *n* prediction differences gives the permutation impact for the feature.
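The permutation idea can be sketched in a few lines. This is an illustrative implementation, not Fiddler's internal one; the toy model, feature names, and helper `permutation_impact` are all hypothetical:

```python
import random

def permutation_impact(predict, row, feature, reference_rows, n=100, seed=0):
    """Approximate a feature's impact by swapping in n random reference values."""
    rng = random.Random(seed)
    base = predict(row)
    diffs = []
    for _ in range(n):
        perturbed = dict(row)
        # Replace the feature with a value drawn from the reference data.
        perturbed[feature] = rng.choice(reference_rows)[feature]
        diffs.append(base - predict(perturbed))
    return sum(diffs) / n

# Toy linear model: prediction = 2*income + balance
predict = lambda r: 2 * r["income"] + r["balance"]
reference = [{"income": 0, "balance": 5}, {"income": 10, "balance": 5}]
row = {"income": 20, "balance": 5}
impact = permutation_impact(predict, row, "income", reference, n=1000)
# Each difference is 2*(20 - income_ref); averaging over the reference
# incomes (0 and 10) drives the impact toward 2 * (20 - 5) = 30.
```

Because only `predict` is called, the same sketch works for any black-box model.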

For models with continuous features:

**Zero-reset**: a simple method which, for each feature, looks at the difference in prediction between the row to explain and the same row with that feature set to 0. In this case the reference dataset used for the explanation is a single row, which treats a feature value of 0 as the neutral example.

**Mean-reset**: also a simple method. This time, we use a reference dataset and look at the difference in prediction between the row to explain and the same row with that feature set to the mean value from the reference dataset.
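Both reset methods reduce to a single perturbed prediction per feature. A minimal sketch (the toy model and feature names here are hypothetical, not part of Fiddler's API):

```python
def zero_reset_impact(predict, row, feature):
    """Difference in prediction when the feature is reset to 0."""
    perturbed = dict(row)
    perturbed[feature] = 0
    return predict(row) - predict(perturbed)

def mean_reset_impact(predict, row, feature, reference_rows):
    """Difference in prediction when the feature is reset to its reference mean."""
    mean = sum(r[feature] for r in reference_rows) / len(reference_rows)
    perturbed = dict(row)
    perturbed[feature] = mean
    return predict(row) - predict(perturbed)

# Toy model: prediction = 3*age + tenure
predict = lambda r: 3 * r["age"] + r["tenure"]
row = {"age": 40, "tenure": 2}
reference = [{"age": 20, "tenure": 1}, {"age": 60, "tenure": 3}]
zi = zero_reset_impact(predict, row, "age")             # 3*40 - 3*0  = 120
mi = mean_reset_impact(predict, row, "age", reference)  # 3*40 - 3*40 = 0
```

Note how the choice of reference changes the answer: resetting `age` to 0 reports a large impact, while resetting it to the reference mean (which happens to equal the row's value) reports none.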

For gradient-based models:

**Integrated Gradients (IG)**: particularly performant for deep learning models with a large number of inputs. It requires the model’s prediction to be mathematically differentiable, and a prediction gradient must be made available to Fiddler.

**Super Integrated Gradients (SIG)**: a combination of the IG and DRISE algorithms developed by Fiddler to deliver fast explanations for unstructured data coming from models used in computer vision applications.

Custom Explainers:

This allows users to bring their own custom explanation algorithms, which can be deployed using Fiddler to serve model explanations just like the out-of-the-box algorithms listed above.

These methods are discussed in more detail below.

In addition to the out-of-the-box explanation methods above, Fiddler allows you to bring your own explanation method. This can be customized in your model’s `package.py` wrapper script.

### Tabular Models

For tabular models, Fiddler’s Point Explanation tool shows how any given model prediction can be attributed to its individual input features.

The following is an example of an explanation for a model predicting the likelihood of customer churn:

A brief tour of the features above:

**Explanation Method**: The explanation method is selected from the **Explanation Type** dropdown.

**Input Vector**: The far-left column contains the input vector. Each input can be adjusted.

**Model Prediction**: The box in the upper left shows the model’s prediction for this input vector. If the model produces multiple outputs (e.g. probabilities in a multiclass classifier), you can click on the prediction field to select and explain any of the output components. This can be particularly useful when diagnosing misclassified examples.

**Feature Attributions**: The colored bars on the right represent how the prediction is attributed to the individual feature inputs.

A positive value (blue bar) indicates a feature is responsible for driving the prediction in the positive direction.

A negative value (red bar) indicates a feature is responsible for driving the prediction in the negative direction.

**Baseline Prediction**: The thin colored line just above the bars shows the difference between the baseline prediction and the model prediction. The specifics of the baseline calculation vary with the explanation method, but it is usually approximately the mean prediction over the training/reference data distribution (i.e. the dataset specified when importing the model into Fiddler). The baseline prediction represents a typical model prediction.

**Two numbers** accompany each feature’s attribution bar in the UI.

*The first number* is the **attribution**. The sum of these values over all features will always equal the difference between the model prediction and the baseline prediction.

*The second number*, the percentage in parentheses, is the **feature attribution divided by the sum of the absolute values of all the feature attributions**. This provides an easy-to-compare, relative measure of feature strength and directionality (notice that negative attributions have negative percentages) and is bounded by ±100%.

📘 Info

An input box labeled “Top N” controls how many attributions are visible at once. If the values don’t add up as described above, it’s likely that weaker attributions are being filtered out by this control.
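The two numbers shown for each feature can be computed as follows. This is an illustrative sketch with hypothetical attribution values, not output from the Fiddler client:

```python
def attribution_summary(attributions):
    """Pair each raw attribution with its signed percentage of total |attribution|."""
    total_abs = sum(abs(a) for a in attributions.values())
    return {f: (a, 100 * a / total_abs) for f, a in attributions.items()}

# Hypothetical attributions for a churn model; they sum to the
# model prediction minus the baseline prediction (0.30 here).
attrs = {"tenure": -0.10, "num_complaints": 0.25, "monthly_spend": 0.15}
summary = attribution_summary(attrs)
# num_complaints -> (0.25, 50.0), monthly_spend -> (0.15, 30.0),
# tenure -> (-0.10, -20.0); the percentages are bounded by ±100%.
```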

Finally, it’s important to note that **feature attributions combine model behavior with characteristics of the data distribution**.

### Language (NLP) Models

For language models, Fiddler’s Point Explanation provides the word-level impact on the prediction score when using perturbative methods (SHAP and Fiddler SHAP); for the Integrated Gradients method, tokenization can be customized in your model’s `package.py` wrapper script. The explanations are interactive—edit the text, and the explanation updates immediately.

Here is an example of an explanation of a prediction from a sentiment analysis model:

### Point Explanation Methods: How to Quantify Prediction Impact of a Feature?

**Introduction**

One strategy for explaining the prediction of a machine learning model is to measure the influence that each of its inputs has on the prediction made. This is called Feature Impact.

To measure Feature Impact, **additive attribution methods** can be quite powerful. Fiddler includes:

**SHAP** and **Fiddler SHAP**, which require only the ability to ask a model for predictions, and are thus suitable across all types of models; no knowledge of the model implementation is necessary.

**Integrated Gradients**, a method that takes advantage of the gradient vector of the prediction, which is typically available in deep learning models, to efficiently explain complex models with large input dimensionality.

**Additive Attributions**

To explain a prediction with an additive attribution method, we look at how individual features contribute to the *prediction difference*. The prediction difference is a comparison between the prediction as a point in feature space (we refer to this as the *explain-point*), and a counterfactual baseline position (or a distribution of positions), representing an uninteresting or typical model inference.

Each feature is assigned a fraction of the prediction difference for which it is responsible. This fraction is called the feature attribution, and it’s what we show in our explanations.

Additive attribution methods have the following characteristics:

The sum of feature attributions always equals the prediction difference.

Features that have no effect on a model’s prediction receive a feature attribution of zero.

Features that have identical effects receive the same attribution.

Features with mutual information share the attribution for any effect that information has on the prediction.

Additionally, each of these methods takes into account interactions between the features (e.g. two features that have no effect individually but in combination change the model output). This is explicitly built into the Shapley value formalism, and is captured in the path integral over gradients in Integrated Gradients.

**Shapley Values and their Approximation**

The Shapley value [1] (proposed by Lloyd Shapley in 1953) is one way to derive feature attributions. Shapley values distribute the total payoff of a collaborative game across a coalition of cooperating players. They are computed by tabulating the average gain in payoff when a particular player is added to the coalition, over all coalition sizes and permutations of players.

In our case, we consider the “total gains” to be the prediction value, and a “player” is a single model feature. The collaborative “game” is all of the model features cooperating to form a prediction value.

Why do we create “coalitions” with only a subset of the features? In some scenarios, it may be appropriate to replace a feature with a zero value when removed from the coalition (e.g. text models where no mask token is available). In others (e.g. models with dense tabular inputs), values are swapped in from a reference distribution or baseline example as a zero value may have a specific meaning (like zero income on a credit application).
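For a small model, Shapley values can be computed exactly by enumerating every ordering in which features join the coalition, with absent features held at a baseline value. This sketch is illustrative (the toy model and `exact_shapley` helper are hypothetical, not Fiddler's implementation):

```python
from itertools import permutations

def exact_shapley(predict, row, baseline):
    """Exact Shapley values: average each feature's marginal contribution
    over every ordering in which features are added to the coalition."""
    features = list(row)
    values = {f: 0.0 for f in features}
    orderings = list(permutations(features))
    for order in orderings:
        current = dict(baseline)          # start from the "empty" coalition
        prev = predict(current)
        for f in order:
            current[f] = row[f]           # add this feature to the coalition
            now = predict(current)
            values[f] += now - prev       # its marginal contribution
            prev = now
    return {f: v / len(orderings) for f, v in values.items()}

# Toy model with an interaction term between the two features.
predict = lambda r: r["a"] + r["b"] + r["a"] * r["b"]
row, baseline = {"a": 1, "b": 1}, {"a": 0, "b": 0}
phi = exact_shapley(predict, row, baseline)
# phi == {"a": 1.5, "b": 1.5}: the interaction is split evenly, and the
# attributions sum to predict(row) - predict(baseline) = 3 (efficiency).
```

The enumeration over all orderings is what makes exact computation exponential in the number of features, motivating the approximations below.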

Shapley values have desirable properties including:

**Linearity**: If two games are combined, then the total gains correspond to the gains derived from a linear combination of the gains of each game.

**Efficiency**: The sum of the values of all players equals the value of the grand coalition, so that all the gain is distributed among the players. In our case, the efficiency property says *the feature attributions should sum to the prediction difference*. The attributions can be positive or negative, since a feature can raise or lower a predicted value.

**Approximating Shapley Values**

Computation of exact Shapley values can be extremely computationally expensive—in fact, exponentially so, in the number of input features. Fiddler makes two approximation methods available:

**SHAP** [2] (SHapley Additive exPlanations) approximates Shapley values by sampling coalitions according to a combinatorially weighted kernel (compensating for the number of permutations of features in coalitions of different cardinality). It samples the feature space uniformly between baseline-like feature vectors and explain-point-like feature vectors. This has the effect of downsampling behavior in the immediate vicinity of the explain-point, a region where the model may be saturated or uniform in its prediction, and attributions may not be helpful.

**Fiddler SHAP** [3] builds on the SHAP approach and is optimized for computing distributions of Shapley values for each feature by comparing the explain-point against a distribution of baselines. This makes it possible to compute confidence intervals around the mean attribution for each feature and to identify clusters in attribution space where distinct, individually relevant explanations might be important (e.g. “your loan application was rejected for one set of reasons when compared to applications in your region, and for another set of reasons when compared to applications with the same profession”).

Approximate Shapley value methods can be used to explain nearly any model, since you only need to be able to ask the model for predictions at a variety of positions in the feature space.
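To see why only black-box predictions are needed, here is a simple Monte Carlo permutation-sampling approximation of Shapley values. This is not Fiddler's kernel-weighted implementation, just a minimal illustration; the toy model and `sampled_shapley` helper are hypothetical:

```python
import random

def sampled_shapley(predict, row, baseline, n_samples=2000, seed=0):
    """Approximate Shapley values by averaging marginal contributions
    over randomly sampled feature orderings."""
    rng = random.Random(seed)
    features = list(row)
    totals = {f: 0.0 for f in features}
    for _ in range(n_samples):
        order = features[:]
        rng.shuffle(order)                # one random ordering per sample
        current = dict(baseline)
        prev = predict(current)
        for f in order:
            current[f] = row[f]
            now = predict(current)
            totals[f] += now - prev
            prev = now
    return {f: t / n_samples for f, t in totals.items()}

# Toy model queried only through predict(); no internals needed.
predict = lambda r: 2 * r["x"] + 3 * r["y"] + r["x"] * r["z"]
row = {"x": 1, "y": 1, "z": 1}
baseline = {"x": 0, "y": 0, "z": 0}
phi = sampled_shapley(predict, row, baseline)
# phi approaches the exact values x: 2.5, y: 3.0, z: 0.5, and the
# attributions always sum to predict(row) - predict(baseline) = 6.
```

Each sampled ordering telescopes to the full prediction difference, so efficiency holds exactly even though the per-feature values are approximate.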

**Integrated Gradients**

Fiddler also provides another additive attribution method: Integrated Gradients.

For models whose prediction is continuous and piecewise differentiable in the feature space, it can be useful to provide additional information through the gradient (slope vector) of a prediction.

Fiddler supports Integrated Gradients (IG) [4]. In this method, an approximate integral tabulates components of the slope along a linear path from baseline to explain-point, and attributes them to respective input features. This has several advantages:

For models with very high-dimensional feature volumes (e.g. images, text), where differentiable deep-learning models typically excel, this method can be very performant (O(n) vs. the O(2^n) of the Shapley methods).

Attributions can be computed for intermediate layers within the model, providing fine-grained model diagnostics. This is naturally extensible to models with hybrid and multimodal inputs.

In comparison to local gradients and saliency methods, the IG path integral samples the large-scale behavior of the model and is resistant to amplifying noise in the possibly saturated region around the explain-point.
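The path integral can be sketched numerically with a Riemann sum; in practice the gradients would come from the deep learning framework, but finite differences work for a smooth toy model. This is an illustrative sketch, not Fiddler's implementation:

```python
def integrated_gradients(f, x, baseline, steps=200, eps=1e-5):
    """Riemann-sum Integrated Gradients with finite-difference gradients."""
    n = len(x)
    attributions = [0.0] * n
    for k in range(steps):
        alpha = (k + 0.5) / steps         # midpoint rule along the straight path
        point = [baseline[i] + alpha * (x[i] - baseline[i]) for i in range(n)]
        for i in range(n):
            # Central-difference estimate of df/dx_i at this point on the path.
            up, down = list(point), list(point)
            up[i] += eps
            down[i] -= eps
            grad_i = (f(up) - f(down)) / (2 * eps)
            attributions[i] += grad_i * (x[i] - baseline[i]) / steps
    return attributions

# Smooth toy model with an interaction term.
f = lambda v: v[0] ** 2 + v[0] * v[1]
x, baseline = [2.0, 1.0], [0.0, 0.0]
attrs = integrated_gradients(f, x, baseline)
# Completeness: sum(attrs) ≈ f(x) - f(baseline) = 6 - 0 = 6.
```

The completeness check at the end is the IG analogue of the Shapley efficiency property: the attributions account for the full prediction difference.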

### References

[1] L. S. Shapley. “A Value for n-Person Games.” Contributions to the Theory of Games II, 1953.

[2] S. Lundberg, S. Lee. “A Unified Approach to Interpreting Model Predictions.” NeurIPS, 2017. http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf

[3] L. Merrick, A. Taly. “The Explanation Game: Explaining Machine Learning Models Using Shapley Values.” https://arxiv.org/abs/1909.08128

[4] M. Sundararajan, A. Taly, Q. Yan. “Axiomatic Attribution for Deep Networks.” ICML, 2017. http://proceedings.mlr.press/v70/sundararajan17a/sundararajan17a.pdf
