Fairness

Thanks to Fiddler's chart visuals and custom metrics, you can define your fairness metric of choice to detect model bias on your dataset or production data and set up alerts.

Definitions of Fairness

Models are trained on real-world examples to mimic past outcomes on unseen data. The training data could be biased, which means the model will perpetuate the biases in the decisions it makes.

While there is not a universally agreed upon definition of fairness, we define a β€˜fair’ ML model as a model that does not favor any group of people based on their characteristics.

Ensuring fairness is key before deploying a model in production. For example, the US government prohibits discrimination in credit and real-estate transactions through fair lending laws such as the Equal Credit Opportunity Act (ECOA) and the Fair Housing Act (FHAct).

The Equal Employment Opportunity Commission (EEOC) acknowledges 12 factors of discrimination: age, disability, equal pay/compensation, genetic information, harassment, national origin, pregnancy, race/color, religion, retaliation, sex, and sexual harassment [^1]. These are what we call protected attributes.

Fairness Metrics

Some popular fairness metrics include:

  • Disparate Impact

  • Group Benefit

  • Equal Opportunity

  • Demographic Parity

The choice of metric is use-case dependent. Importantly, it is impossible to optimize all of these metrics at the same time, a trade-off to keep in mind when analyzing fairness metrics.

Disparate Impact

Disparate impact is a form of indirect and unintentional discrimination, in which certain decisions disproportionately affect members of a protected group.

Mathematically, disparate impact compares the pass rate of one group to that of another.

The pass rate is the rate of positive outcomes for a given group. It's defined as follows:

pass rate = (number of people with a positive outcome) / (total number of people in the group)

Disparate impact is calculated by:

DI = (pass rate of group 1) / (pass rate of group 2)

Groups 1 and 2 are interchangeable. Therefore, the following formula can be used to calculate disparate impact:

DI = min{pr_1, pr_2} / max{pr_1, pr_2}.

The disparate impact value is between 0 and 1. The four-fifths rule states that the disparate impact should be greater than 80% (0.8).

For example:

pr_1 = 0.3, pr_2 = 0.4 β†’ DI = 0.3/0.4 = 0.75

pr_1 = 0.4, pr_2 = 0.3 β†’ DI = 0.3/0.4 = 0.75
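
As an illustration, here is a minimal Python sketch (not part of the Fiddler API) that computes the symmetric disparate impact from hypothetical group counts:

```python
# Minimal sketch: disparate impact between two groups.
# The counts passed below are hypothetical, for illustration only.

def pass_rate(passed: int, total: int) -> float:
    """Rate of positive outcomes for a group."""
    return passed / total

def disparate_impact(pr_1: float, pr_2: float) -> float:
    """Symmetric disparate impact: min(pr_1, pr_2) / max(pr_1, pr_2)."""
    return min(pr_1, pr_2) / max(pr_1, pr_2)

pr_1 = pass_rate(passed=30, total=100)   # 0.3
pr_2 = pass_rate(passed=40, total=100)   # 0.4
di = disparate_impact(pr_1, pr_2)        # 0.75

# Four-fifths rule check
print(f"DI = {di:.2f}, passes four-fifths rule: {di > 0.8}")  # DI = 0.75, False
```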

πŸ“˜ Info

Disparate impact is the only one of these metrics recognized in US law; the others are not yet codified.

Demographic Parity

Demographic parity states that each segment of a protected class should receive the positive outcome at an equal rate.

Mathematically, demographic parity compares the pass rate of two groups.

The pass rate is the rate of positive outcomes for a given group. It is defined as follows:

pass rate = (number of people with a positive outcome) / (total number of people in the group)

If the decisions are fair, the pass rates should be the same.
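
A minimal sketch of a demographic parity check, using hypothetical binary outcomes for two groups (not the Fiddler API):

```python
# Minimal sketch: demographic parity gap between two groups.
# The prediction lists are hypothetical (1 = positive outcome, 0 = negative).
group_a_outcomes = [1, 0, 1, 1, 0]
group_b_outcomes = [1, 0, 0, 0, 1]

pass_rate_a = sum(group_a_outcomes) / len(group_a_outcomes)  # 0.6
pass_rate_b = sum(group_b_outcomes) / len(group_b_outcomes)  # 0.4

# Demographic parity holds (approximately) when this gap is close to zero.
parity_gap = abs(pass_rate_a - pass_rate_b)
print(f"pass rates: {pass_rate_a:.2f} vs {pass_rate_b:.2f}, gap = {parity_gap:.2f}")
```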

Group Benefit

Group benefit aims to measure the rate at which a particular event is predicted to occur within a subgroup compared to the rate at which it actually occurs.

Mathematically, group benefit for a given group is defined as follows:

Group Benefit = (TP+FP) / (TP + FN).

Group benefit equality compares the group benefit between two groups. If the two groups are treated equally, the group benefit should be the same.
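
A minimal sketch of the group benefit calculation from hypothetical confusion-matrix counts (not the Fiddler API):

```python
# Minimal sketch: group benefit from confusion-matrix counts.
# TP, FP, FN values are hypothetical counts for each subgroup.

def group_benefit(tp: int, fp: int, fn: int) -> float:
    """Group benefit = (predicted positives) / (actual positives) = (TP + FP) / (TP + FN)."""
    return (tp + fp) / (tp + fn)

benefit_group_1 = group_benefit(tp=40, fp=10, fn=10)  # 1.0
benefit_group_2 = group_benefit(tp=30, fp=5, fn=20)   # 0.7

# Equal treatment would yield similar group benefit values across groups.
print(benefit_group_1, benefit_group_2)
```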

Equal Opportunity

Equal opportunity means that all people will be treated equally or similarly and not disadvantaged by prejudices or bias.

Mathematically, equal opportunity compares the true positive rate (TPR) between two groups. TPR is the probability that an actual positive will test positive. The true positive rate is defined as follows:

TPR = TP/(TP+FN)

If the two groups are treated equally, the TPR should be the same.
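
A minimal sketch comparing the true positive rate of two groups, with hypothetical labels and predictions (not the Fiddler API):

```python
# Minimal sketch: true positive rate per group from labels and predictions.
def true_positive_rate(y_true: list[int], y_pred: list[int]) -> float:
    """TPR = TP / (TP + FN), computed over actual positives only."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn)

# Hypothetical labels and predictions for two groups.
tpr_group_1 = true_positive_rate([1, 1, 0, 1], [1, 0, 0, 1])  # 0.67
tpr_group_2 = true_positive_rate([1, 0, 1, 1], [1, 0, 1, 1])  # 1.0

# Equal opportunity holds when the TPRs match.
print(tpr_group_1, tpr_group_2)
```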

Intersectional Fairness

We believe fairness should be ensured for all subgroups of the population. We extended the classical metrics (which are defined for two classes) to multiple classes. In addition, we allow multiple protected features (e.g., race and gender). By measuring fairness along overlapping dimensions, we introduce the concept of intersectional fairness.

To understand why we chose intersectional fairness, consider a simple example. Suppose equal numbers of black and white people pass, and equal numbers of men and women pass. The classification can still be unfair: if no black women or white men passed while all black men and white women passed, there is bias within the subgroups obtained by taking race and gender together as protected attributes.
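
The following sketch (using pandas, with hypothetical records mirroring the example above) shows how marginal pass rates can look equal while the intersectional subgroups reveal the bias:

```python
# Minimal sketch: pass rates per intersectional subgroup.
# The records below are hypothetical, chosen to mirror the race/gender example.
import pandas as pd

df = pd.DataFrame({
    "race":    ["black", "black", "white", "white"],
    "gender":  ["woman", "man",   "woman", "man"],
    "outcome": [0,       1,       1,       0],      # 1 = passed
})

# Marginal pass rates look equal...
print(df.groupby("race")["outcome"].mean())    # black: 0.5, white: 0.5
print(df.groupby("gender")["outcome"].mean())  # man: 0.5, woman: 0.5

# ...but the intersectional subgroups reveal the bias.
print(df.groupby(["race", "gender"])["outcome"].mean())
# black/man: 1.0, black/woman: 0.0, white/man: 0.0, white/woman: 1.0
```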

Model Behavior

In addition to the fairness metrics, we suggest tracking model outcomes and model performance for each subgroup of the population.

Reference

The EEOC provides examples of past intersectional discrimination/harassment cases [^2].

For more details about the implementation of the intersectional framework, please refer to this research paper.

[^1]: USEEOC article on Discrimination By Type

[^2]: USEEOC article on Intersectional Discrimination/Harassment
