client.get_fairness

Get a fairness analysis for a dataset or a slice of a dataset.

🚧 Only binary classification models with categorical protected attributes are currently supported.
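Because of this restriction, it can help to confirm that the chosen protected attributes are categorical before calling the API. A minimal sketch, assuming a local copy of the dataset is available as a pandas DataFrame (the file name is hypothetical):

import pandas as pd

# Hypothetical local copy of the dataset used for the fairness analysis.
df = pd.read_csv('example_dataset.csv')

# get_fairness supports only categorical protected attributes, so fail fast
# if any of the chosen columns is numeric.
for col in ['feature_1', 'feature_2']:
    if not (df[col].dtype == object or str(df[col].dtype) == 'category'):
        raise TypeError(f'{col} must be categorical to be a protected feature')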

| Input Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| project_id | str | None | The unique identifier for the project. |
| model_id | str | None | The unique identifier for the model. |
| data_source | Union[fdl.DatasetDataSource, fdl.SqlSliceQueryDataSource] | None | Data source for the input dataset to compute fairness on (DatasetDataSource or SqlSliceQueryDataSource). |
| protected_features | list[str] | None | A list of protected features. |
| positive_outcome | Union[str, int, float, bool] | None | Value of the positive outcome (from the target column) used for the fairness analysis. |
| score_threshold | Optional[float] | 0.5 | The score threshold used to calculate model outcomes. |
import fiddler as fdl

# Assumes an authenticated Fiddler client has already been created, e.g.
# client = fdl.FiddlerApi(url=URL, org_id=ORG_ID, auth_token=AUTH_TOKEN)

PROJECT_ID = 'example_project'
MODEL_ID = 'example_model'
DATASET_ID = 'example_dataset'

# Fairness - Dataset data source
fairness_metrics = client.get_fairness(
    project_id=PROJECT_ID,
    model_id=MODEL_ID,
    data_source=fdl.DatasetDataSource(dataset_id=DATASET_ID, num_samples=200),
    protected_features=['feature_1', 'feature_2'],
    positive_outcome='Approved',
    score_threshold=0.6
)

# Fairness - Slice Query data source
query = f'SELECT * FROM {DATASET_ID}.{MODEL_ID} WHERE CreditScore > 700'
fairness_metrics = client.get_fairness(
    project_id=PROJECT_ID,
    model_id=MODEL_ID,
    data_source=fdl.SqlSliceQueryDataSource(query=query, num_samples=200),
    protected_features=['feature_1', 'feature_2'],
    positive_outcome='Approved',
    score_threshold=0.6
)
| Return Type | Description |
| --- | --- |
| dict | A dictionary containing fairness metric results. |
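The exact keys of the returned dictionary depend on the client version, so a generic way to examine the results is to iterate over its entries; a minimal sketch:

# Print every entry in the fairness results. The specific metric names are
# not assumed here; this simply walks the returned dictionary.
for metric_name, value in fairness_metrics.items():
    print(f'{metric_name}: {value}')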