
Model Fairness

Info

Model Fairness is in preview mode. Contact us for early access.

Fiddler provides powerful visualizations and metrics to detect model bias. We support structured (tabular) models for classification tasks, in the Fiddler GUI and via API. These visualizations are available for both production and dataset queries.

The user needs to upload the following beforehand:

  • the input data
  • the target
  • the predictions or a model

Definitions of Fairness

Models are trained on real-world examples to mimic past outcomes on unseen data. If the training data is biased, the model will perpetuate those biases in the decisions it makes.

While there is no universally agreed-upon definition of fairness, we define a ‘fair’ ML model as one that does not favour a group of people based on their characteristics.

Ensuring fairness is key before putting a model into production. For example, in the US, the government prohibited discrimination in credit and real-estate transactions with fair lending laws such as the Equal Credit Opportunity Act (ECOA) and the Fair Housing Act (FHAct).

The Equal Employment Opportunity Commission (EEOC) acknowledges 12 factors of discrimination1: age, disability, equal pay/compensation, genetic information, harassment, national origin, pregnancy, race/color, religion, retaliation, sex, sexual harassment. These are what we call protected attributes.

Fairness Metrics

Several fairness metrics exist. Fiddler provides the following: Group Benefit, Equal Opportunity, Demographic Parity, and Disparate Impact. The choice of metric is use-case dependent and has to be determined by the user. An important point: it is impossible to optimize all of these metrics at the same time. Keep this in mind when analysing fairness metrics.

Disparate Impact

Disparate impact is a form of indirect and unintentional discrimination in which certain decisions disproportionately affect members of a protected group.

Mathematically, disparate impact compares the pass rate of one group versus another.

The pass rate is the rate of positive outcomes for a given group. It is defined as follows:

pass rate = (number of people in the group who passed) / (number of people in the group)

The Four-Fifths rule states that:

(pass rate of group 1) / (pass rate of group 2)

has to be greater than 80% (groups 1 and 2 are interchangeable). So that the Disparate Impact value lies between 0 and 1, it is defined as DI = min(pass rate 1, pass rate 2) / max(pass rate 1, pass rate 2).

For example:

  • Example 1: pass-rate_1 = 0.3, pass-rate_2 = 0.4, DI = 0.3/0.4 = 0.75
  • Example 2: pass-rate_1 = 0.4, pass-rate_2 = 0.3, DI = 0.3/0.4 = 0.75
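To make the computation concrete, here is a minimal Python sketch of the formulas above (the function names are illustrative, not part of the Fiddler API):

```python
def pass_rate(num_passed: int, group_size: int) -> float:
    """Rate of positive outcomes for a group."""
    return num_passed / group_size

def disparate_impact(pass_rate_1: float, pass_rate_2: float) -> float:
    """Min/max ratio of the two pass rates, so the value falls in [0, 1]."""
    return min(pass_rate_1, pass_rate_2) / max(pass_rate_1, pass_rate_2)

# Examples 1 and 2 from above: the min/max definition makes DI symmetric
di = disparate_impact(0.3, 0.4)   # 0.75
print(f"DI = {di:.2f}, four-fifths rule satisfied: {di > 0.8}")  # False
```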

Note

Disparate impact is the only legally recognized metric here; the other metrics are not yet codified in US law.

Demographic Parity

Demographic Parity states that each segment of a protected class should receive the positive outcome at an equal rate.

Mathematically, demographic parity compares the pass rate of two groups.

The pass rate is the rate of positive outcomes for a given group. It is defined as follows:

pass rate = (number of people in the group who passed) / (number of people in the group)

If the decisions are fair, the pass rates of the two groups should be the same.
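A minimal sketch of this check, with hypothetical counts per group:

```python
# Hypothetical counts: 30 of 100 pass in group 1, 40 of 100 in group 2
passed = {"group_1": 30, "group_2": 40}
size = {"group_1": 100, "group_2": 100}

pass_rates = {g: passed[g] / size[g] for g in passed}
gap = abs(pass_rates["group_1"] - pass_rates["group_2"])
print(pass_rates, gap)   # {'group_1': 0.3, 'group_2': 0.4} 0.1 -- parity requires a gap of 0
```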

Note

Disparate impact is the only legally recognized metric here; demographic parity is not yet codified in US law.

Group Benefit

Group benefit aims to measure the rate at which a particular event is predicted to occur within a subgroup compared to the rate at which it actually occurs.

Mathematically, the group benefit for a given group is defined as follows:

Group Benefit = (TP + FP) / (TP + FN)

Group benefit equality compares the group benefit between two groups. If the two groups are treated equally, the group benefit should be the same.
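As a sketch, using confusion-matrix counts (the numbers are hypothetical):

```python
def group_benefit(tp: int, fp: int, fn: int) -> float:
    """Predicted positives over actual positives: (TP + FP) / (TP + FN)."""
    return (tp + fp) / (tp + fn)

# Group benefit equality: these two values should be close
benefit_a = group_benefit(tp=40, fp=10, fn=10)   # 50 / 50 = 1.0
benefit_b = group_benefit(tp=30, fp=30, fn=20)   # 60 / 50 = 1.2
```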

Note

Disparate impact is the only legally recognized metric here; group benefit equality is not yet codified in US law.

Equal Opportunity

Equal opportunity means that all people will be treated equally or similarly and not disadvantaged by prejudices or bias.

Mathematically, equal opportunity compares the true positive rate (TPR) between two groups. The TPR is the probability that an actual positive will test positive. It is defined as follows:

TPR = TP / (TP + FN)

If the two groups are treated equally, the TPR should be the same.
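A sketch of the comparison, again with hypothetical confusion-matrix counts:

```python
def true_positive_rate(tp: int, fn: int) -> float:
    """TPR = TP / (TP + FN): chance an actual positive is predicted positive."""
    return tp / (tp + fn)

# Equal opportunity: the TPRs of the two groups should be close
tpr_a = true_positive_rate(tp=80, fn=20)   # 0.8
tpr_b = true_positive_rate(tp=60, fn=40)   # 0.6
```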

Note

Disparate impact is the only legally recognized metric here; equal opportunity is not yet codified in US law.

Intersectional Fairness

We believe fairness should be ensured for all subgroups of the population. We extended the classical metrics, which are defined for two classes, to multiple classes. In addition, we allow multiple protected features, for example Race and Gender. By measuring fairness along overlapping dimensions, we introduce the concept of intersectional fairness.

To understand why we decided to go with intersectional fairness, consider a simple example. In the figure below, equal numbers of black and white people pass. Similarly, equal numbers of men and women pass. However, the classification is unfair: no black women or white men passed, while all black men and white women did. Bias against subgroups appears once we take Race and Gender together as protected attributes.

[Figure: intersectional fairness example with Race and Gender]

The EEOC provides examples of past intersectional discrimination/harassment cases2.

In the context of intersectional fairness, we compute the previous fairness metrics for each subgroup. The values should be similar across subgroups. To get an overall sense of whether the model exhibits bias, we display the min-max ratio: the minimum value of a given metric divided by its maximum value across subgroups. If this ratio is close to 1, the metric is very similar among subgroups. The figure below gives an example for two protected attributes, Gender and Education, and the Equal Opportunity metric.

[Figure: Equal Opportunity min-max ratio for Gender and Education subgroups]

For the Disparate Impact metric, we display an absolute minimum rather than a min-max ratio. The intersectional version of this metric is slightly different: we compute disparate impact for every pair of subgroups and display the minimum over all pairs. If this absolute minimum is greater than 80%, then every pair of subgroups satisfies the four-fifths rule.
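A sketch of both aggregations, assuming per-subgroup values have already been computed (subgroup names and numbers are made up):

```python
from itertools import combinations

def min_max_ratio(metric_by_subgroup):
    """Min over max of a metric across subgroups; close to 1 means similar treatment."""
    values = list(metric_by_subgroup.values())
    return min(values) / max(values)

def intersectional_disparate_impact(pass_rate_by_subgroup):
    """Minimum pairwise disparate impact over all pairs of subgroups.
    If this absolute minimum is greater than 0.8, every pair passes the four-fifths rule."""
    rates = list(pass_rate_by_subgroup.values())
    return min(min(a, b) / max(a, b) for a, b in combinations(rates, 2))

# Hypothetical Gender x Education subgroups with their equal-opportunity TPRs
tpr = {"male/grad": 0.90, "male/no-grad": 0.80,
       "female/grad": 0.85, "female/no-grad": 0.75}
print(min_max_ratio(tpr))                          # 0.75 / 0.90 ~ 0.83

# Same subgroups with hypothetical pass rates
pass_rates = {"male/grad": 0.40, "male/no-grad": 0.30,
              "female/grad": 0.35, "female/no-grad": 0.32}
print(intersectional_disparate_impact(pass_rates))  # 0.30 / 0.40 = 0.75
```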

Model Behavior

In addition to the fairness metrics, we provide information about model outcomes and model performance per subgroup. In the platform, you see visualizations like the ones below by default. You have the option to display these numbers in a table for deeper analysis.

[Figures: model outcomes and model performance per subgroup]

Dataset Fairness

Finally, we provide a section for dataset fairness, with a mutual information matrix and label distributions. Note that this is a pre-modeling step.

[Figure: dataset fairness view]

Mutual information shows existing dependencies in your dataset between the protected attributes and the remaining features. We display Normalized Mutual Information (NMI). This metric is symmetric and takes values between 0 and 1, where 1 means perfect dependence.
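For illustration, NMI can be computed with scikit-learn (this is not necessarily what Fiddler uses internally):

```python
from sklearn.metrics import normalized_mutual_info_score

# Toy example: education is perfectly determined by gender, so NMI = 1.0
gender = ["M", "F", "M", "F", "M", "F"]
education = ["grad", "no-grad", "grad", "no-grad", "grad", "no-grad"]
print(normalized_mutual_info_score(gender, education))   # 1.0 -> perfect dependence
```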

[Figure: normalized mutual information matrix]

For more details about the implementation of the intersectional framework, please refer to this research paper.
