Fairness

In the context of intersectional fairness, we compute the fairness metrics for each subgroup. The values should be similar across subgroups. To summarize any disparity, we display the min-max ratio: the minimum value of a given metric divided by its maximum value across subgroups. A ratio close to 1 means the metric is nearly uniform across subgroups. The figure below gives an example with two protected attributes, Gender and Education, and the Equal Opportunity metric.
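As a rough sketch of the min-max ratio idea, the snippet below computes a per-subgroup Equal Opportunity metric (true positive rate) over Gender × Education subgroups and then takes the minimum divided by the maximum. The column names and data are illustrative, not the platform's actual schema.

```python
# Illustrative data: two protected attributes plus labels and predictions
import pandas as pd

df = pd.DataFrame({
    "gender":    ["M", "M", "F", "F", "M", "F", "M", "F"],
    "education": ["HS", "BS", "HS", "BS", "HS", "BS", "BS", "HS"],
    "y_true":    [1, 1, 1, 1, 1, 0, 1, 1],
    "y_pred":    [1, 0, 1, 1, 1, 0, 1, 0],
})

# Equal Opportunity = true positive rate, computed per intersectional subgroup
positives = df[df["y_true"] == 1]
tpr = positives.groupby(["gender", "education"])["y_pred"].mean()

# Min-max ratio: close to 1 means the metric is similar across subgroups
min_max_ratio = tpr.min() / tpr.max()
```

A low ratio flags that at least one subgroup's metric lags well behind the best-performing subgroup.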

For the Disparate Impact metric, we display an absolute minimum rather than a min-max ratio. The intersectional version of this metric differs slightly: we compute Disparate Impact for every ordered pair of subgroups (all permutations of two subgroups) and display the minimum. If this absolute minimum is greater than 80%, then every pairwise combination is greater than 80%.

Model Behavior

In addition to the fairness metrics, we provide information about model outcomes and model performance for each subgroup. By default, the platform shows a visualization like the one below; you also have the option to display the same numbers in a table for deeper analysis.
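The per-subgroup numbers behind such a table can be sketched as follows; here "outcomes" means the positive-prediction rate and "performance" means accuracy, with illustrative column names and data:

```python
# Per-subgroup model outcomes and performance, as a table
import pandas as pd

df = pd.DataFrame({
    "gender":    ["M", "M", "F", "F", "M", "F"],
    "education": ["HS", "BS", "HS", "BS", "BS", "HS"],
    "y_true":    [1, 0, 1, 1, 0, 0],
    "y_pred":    [1, 0, 0, 1, 1, 0],
})

summary = (
    df.assign(correct=(df["y_pred"] == df["y_true"]).astype(float))
      .groupby(["gender", "education"])
      .agg(
          positive_rate=("y_pred", "mean"),   # model outcomes
          accuracy=("correct", "mean"),       # model performance
      )
)
```

Each row of `summary` corresponds to one intersectional subgroup, mirroring the table view described above.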

Dataset Fairness

Finally, we provide a section for dataset fairness, with a mutual information matrix and a label distribution. Note that this is a pre-modeling step.

Mutual information measures the dependence in your dataset between the protected attributes and the remaining features. We display Normalized Mutual Information (NMI). This metric is symmetric and takes values between 0 and 1, where 1 means perfect dependence.
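As a sketch of what NMI captures, the snippet below computes it from the definition, normalizing mutual information by the geometric mean of the two entropies, MI(X, Y) / √(H(X)·H(Y)). (Other normalizations exist; this is one common choice, and the data is illustrative.)

```python
# Normalized Mutual Information between two categorical variables,
# computed directly from the definition.
from collections import Counter
from math import log, sqrt

def entropy(xs):
    n = len(xs)
    return -sum(c / n * log(c / n) for c in Counter(xs).values())

def nmi(xs, ys):
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    # Mutual information from the joint and marginal distributions
    mi = sum(
        c / n * log((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in joint.items()
    )
    denom = sqrt(entropy(xs) * entropy(ys))
    return mi / denom if denom else 0.0

gender  = ["M", "M", "F", "F", "M", "F", "M", "F"]
zipcode = ["A", "A", "B", "B", "A", "B", "A", "B"]  # perfectly dependent on gender
```

Here `nmi(gender, zipcode)` is 1.0: knowing the zipcode fully determines the gender in this toy data, which is exactly the kind of proxy dependence the matrix is meant to surface before modeling.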

For more details about the implementation of the intersectional framework, please refer to this research paper.




© 2024 Fiddler AI