Calculates fairness metrics for a model over a specified dataset.

| Input Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| project_id | str | None | The unique identifier for the project. |
| model_id | str | None | The unique identifier for the model. |
| dataset_id | str | None | The unique identifier for the dataset. |
| protected_features | list | None | A list of protected features. |
| positive_outcome | Union[str, int] | None | The name or value of the positive outcome for the model. |
| slice_query | Optional[str] | None | A SQL query. If specified, fairness metrics are calculated only over the dataset slice selected by the query. |
| score_threshold | Optional[float] | 0.5 | The score threshold used to calculate model outcomes. |

PROJECT_ID = 'example_project'
MODEL_ID = 'example_model'
DATASET_ID = 'example_dataset'

protected_features = [
    'feature_1',
    'feature_2'
]

positive_outcome = 1

fairness_metrics = client.run_fairness(
    project_id=PROJECT_ID,
    model_id=MODEL_ID,
    dataset_id=DATASET_ID,
    protected_features=protected_features,
    positive_outcome=positive_outcome
)
PROJECT_ID = 'example_project'
MODEL_ID = 'example_model'
DATASET_ID = 'example_dataset'

protected_features = [
    'feature_1',
    'feature_2'
]

positive_outcome = 1

slice_query = f""" SELECT * FROM "{DATASET_ID}.{MODEL_ID}" WHERE feature_1 < 20.0 LIMIT 100 """

fairness_metrics = client.run_fairness(
    project_id=PROJECT_ID,
    model_id=MODEL_ID,
    dataset_id=DATASET_ID,
    protected_features=protected_features,
    positive_outcome=positive_outcome,
    slice_query=slice_query
)
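
The score_threshold parameter can also be overridden when the model's decision boundary differs from the default of 0.5. The sketch below reuses the identifiers defined above; the value 0.6 is purely illustrative.

# Sketch: use a custom score threshold when binarizing model scores into outcomes.
fairness_metrics = client.run_fairness(
    project_id=PROJECT_ID,
    model_id=MODEL_ID,
    dataset_id=DATASET_ID,
    protected_features=protected_features,
    positive_outcome=positive_outcome,
    score_threshold=0.6
)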

| Return Type | Description |
| --- | --- |
| dict | A dictionary containing fairness metric results. |
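
Because the result is a plain dictionary, it can also be persisted for later comparison across models or datasets. A minimal sketch, assuming a JSON file is the desired format; default=str is a fallback for any values that are not natively JSON-serializable (for example, numpy scalars).

import json

# Write the fairness results to disk for later review.
with open('fairness_metrics.json', 'w') as f:
    json.dump(fairness_metrics, f, indent=2, default=str)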