API Methods 2.x
The Client object is used to communicate with Fiddler. In order to use the client, you'll need to provide authentication details as shown below.
For more information, see Authorizing the Client.
🚧 Warning
If verbose is set to True, all information required for debugging will be logged, including the authorization token.
📘 Info
To maximize compatibility, please ensure that your Client Version matches the server version for your Fiddler instance.
When you connect to Fiddler using the code below, you'll receive a notification if there is a version mismatch between the client and server.
You can install a specific version of fiddler-client using pip:
pip install fiddler-client==X.X.X
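As a minimal connection sketch (the URL, organization ID, and token are placeholders; find your own values on the Settings page):

```python
import fiddler as fdl

# Placeholder credentials for illustration only.
client = fdl.FiddlerApi(
    url='https://app.fiddler.ai',
    org_id='my_org',
    auth_token='my_auth_token',
)
```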
If you want to authenticate with Fiddler without passing this information directly into the function call, you can store it in a file named fiddler.ini, placed in the same directory as your notebook or script.
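A sketch of the expected layout, again with placeholder values:

```ini
[FIDDLER]
url = https://app.fiddler.ai
org_id = my_org
auth_token = my_auth_token
```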
Projects are used to organize your models and datasets. Each project can represent a machine learning task (e.g. predicting house prices, assessing creditworthiness, or detecting fraud).
A project can contain one or more models (e.g. lin_reg_house_predict, random_forest_house_predict).
For more information, see the Projects page.
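For instance, creating and then listing projects might look like this sketch (the project name is a placeholder, and `client` is the connected Client object from the connection example above):

```python
client.add_project(project_id='example_project')

# Verify the project now appears in the project list.
print(client.list_projects())
```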
🚧 Caution
You cannot delete a project without first deleting the datasets and models associated with it.
Datasets (or baseline datasets) are used for making comparisons with production data.
A baseline dataset should be sampled from your model's training set, so it can serve as a representation of what the model expects to see in production.
For more information, see Uploading a Baseline Dataset.
For guidance on how to design a baseline dataset, see Designing a Baseline Dataset.
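A minimal upload sketch, assuming a hypothetical baseline.csv sampled from the model's training set:

```python
import pandas as pd

baseline_df = pd.read_csv('baseline.csv')  # hypothetical file

# Infer the dataset schema, then upload the DataFrame as a named slice.
dataset_info = fdl.DatasetInfo.from_dataframe(
    baseline_df,
    max_inferred_cardinality=100,
)
client.upload_dataset(
    project_id='example_project',
    dataset_id='example_dataset',
    dataset={'baseline': baseline_df},
    info=dataset_info,
)
```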
🚧 Caution
You cannot delete a dataset without deleting the models associated with that dataset first.
A model is a representation of your machine learning model. Each model must have an associated dataset to be used as a baseline for monitoring, explainability, and fairness capabilities.
You do not need to upload your model artifact in order to onboard your model, but doing so will significantly improve the quality of explanations generated by Fiddler.
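A sketch of onboarding a model without an artifact; the column names here are hypothetical:

```python
model_info = fdl.ModelInfo.from_dataset_info(
    dataset_info=dataset_info,
    dataset_id='example_dataset',
    target='target_column',               # hypothetical ground-truth column
    features=['feature_1', 'feature_2'],  # hypothetical input columns
    outputs=['predicted_score'],          # hypothetical output column
    model_task=fdl.ModelTask.BINARY_CLASSIFICATION,
)
client.add_model(
    project_id='example_project',
    model_id='example_model',
    dataset_id='example_dataset',
    model_info=model_info,
)
```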
📘 Note
Before calling this function, you must have already added a model using add_model.
📘 Note
Before calling this function, you must have already added a model using add_model.
🚧 Surrogate models are not supported for input_type = fdl.ModelInputType.TEXT
For more information, see Uploading a Model Artifact.
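The two onboarding paths look roughly like the following sketch (the model directory path is a placeholder):

```python
# Option 1: have Fiddler generate a surrogate model on the server.
client.add_model_surrogate(
    project_id='example_project',
    model_id='example_model',
)

# Option 2: upload your own model artifact directory,
# which contains package.py and the serialized model files.
client.add_model_artifact(
    project_id='example_project',
    model_id='example_model',
    model_dir='model_dir/',  # placeholder path
)
```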
❗️ Not supported with client 2.0 and above
Please use client.add_model() going forward.
❗️ Not supported with client 2.0 and above
This method is called automatically now when calling client.add_model_surrogate() or client.add_model_artifact().
For more information, see Uploading a Model Artifact.
🚧 Warning
This function does not allow for changes in a model's schema. The inputs and outputs to the model must remain the same.
📘 Note
Before calling this function, you must have already added a model using add_model_surrogate or add_model_artifact.
❗️ Not supported with client 2.0 and above
Please use client.add_model_artifact() going forward.
📘 Note
This method cannot replace a model artifact that was uploaded using add_model_artifact. It can only re-generate a surrogate model for an existing model.
Horizontal scaling: via the replicas parameter. This creates multiple Kubernetes pods internally to handle requests.
Vertical scaling: via the cpu and memory parameters. Some models may need more memory to load the artifacts or to process requests.
Scale down: use the active parameter (active=False) to avoid allocating resources when the model is not in use.
Scale up: setting active=True re-creates the model deployment's Kubernetes pods with the resource values available in the database.
Supported from server version 23.1 and above with the Flexible Model Deployment feature enabled.
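A sketch of both scaling directions; the resource values are illustrative only:

```python
# Scale out and up: two replicas, each with more CPU and memory.
client.update_model_deployment(
    project_id='example_project',
    model_id='example_model',
    replicas=2,   # horizontal scaling (pods)
    cpu=500,      # millicpus per replica
    memory=1024,  # mebibytes per replica
)

# Scale down while the model is not in use.
client.update_model_deployment(
    project_id='example_project',
    model_id='example_model',
    active=False,
)
```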
Event publication is the process of sending your model's prediction logs, or events, to the Fiddler platform. Using the Fiddler Client, events can be published in batch or streaming mode. Using these events, Fiddler will calculate metrics around feature drift, prediction drift, and model performance. These events are also stored in Fiddler to allow for ad hoc segment analysis. Please read the sections that follow to learn more about how to use the Fiddler Client for event publication.
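In streaming mode, a single event is published as a dictionary of field names to values. A minimal sketch with hypothetical fields:

```python
example_event = {
    'feature_1': 20.7,        # model inputs
    'feature_2': 45000,
    'predicted_score': 0.82,  # model output
    'target_column': 1,       # ground truth
}

client.publish_event(
    project_id='example_project',
    model_id='example_model',
    event=example_event,
    event_timestamp=1637344470000,  # assumed to be milliseconds since epoch
)
```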
In the batch example below, event_id and inference_date are columns in df_events. Both are optional. If they are not passed, Fiddler generates a unique UUID for the event ID and uses the current timestamp for event_timestamp.
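A batch publication sketch matching that description (the CSV path is a placeholder):

```python
import pandas as pd

df_events = pd.read_csv('events.csv')  # placeholder batch of events

client.publish_events_batch(
    project_id='example_project',
    model_id='example_model',
    batch_source=df_events,
    id_field='event_id',               # optional
    timestamp_field='inference_date',  # optional
)
```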
For update events, id_field is required as a unique identifier of the previously published events. For details on which columns are eligible to be updated, see Updating Events.
get_baseline retrieves the configuration parameters of an existing baseline.
Gets all the baselines in a project, or those attached to a single model within a project.
Deletes an existing baseline from a project.
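For example, registering and then fetching a pre-production baseline might look like this sketch (IDs are placeholders):

```python
client.add_baseline(
    project_id='example_project',
    model_id='example_model',
    baseline_id='baseline_v1',
    type=fdl.BaselineType.PRE_PRODUCTION,
    dataset_id='example_dataset',
)

baseline = client.get_baseline(
    project_id='example_project',
    model_id='example_model',
    baseline_id='baseline_v1',
)
```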
📘 Info
add_monitoring_config can be applied at the model, project, or organization level.
If project_id and model_id are specified, the configuration will be applied at the model level.
If project_id is specified but model_id is not, the configuration will be applied at the project level.
If neither project_id nor model_id is specified, the configuration will be applied at the organization level.
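A model-level sketch; the config_info keys shown are hypothetical and depend on your monitoring setup:

```python
client.add_monitoring_config(
    config_info={'min_bin_value': 3600},  # hypothetical config keys
    project_id='example_project',         # both IDs set: model-level scope
    model_id='example_model',
)
```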
📘 Info
The Fiddler client can be used to create a variety of alert rules. Rules can be of type Data Drift, Performance, Data Integrity, or Service Metrics, and they can be compared to absolute values (compare_to = RAW_VALUE) or to relative values (compare_to = TIME_PERIOD).
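For instance, a drift rule on a single column might be sketched as follows (the rule name, column, and thresholds are illustrative):

```python
client.add_alert_rule(
    name='psi_one_day_feature_1',  # illustrative rule name
    project_id='example_project',
    model_id='example_model',
    alert_type=fdl.AlertType.DATA_DRIFT,
    metric=fdl.Metric.PSI,
    bin_size=fdl.BinSize.ONE_DAY,
    compare_to=fdl.CompareTo.RAW_VALUE,
    priority=fdl.Priority.HIGH,
    warning_threshold=0.1,
    critical_threshold=0.2,
    condition=fdl.AlertCondition.GREATER,
    columns=['feature_1'],
)
```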
📘 Info
The Fiddler client can be used to get a list of alert rules matching the given filtering parameters.
📘 Info
The Fiddler client can be used to get a list of triggered alerts for a given alert rule and time duration.
📘 Info
The Fiddler client can be used to build notification configuration to be used while creating alert rules.
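A sketch with placeholder recipients; pass the result as notifications_config to client.add_alert_rule():

```python
notifications_config = client.build_notifications_config(
    emails='user1@example.com,user2@example.com',  # placeholder recipients
)
```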
📘 Add Slack webhook
Use the Slack API reference to generate a webhook for your Slack app.
📘 Info
The Fiddler client can be used to update the notification status of multiple alerts at once.
For details on supported constants, operators, and functions, see Fiddler Query Language.
📘 Info
Only read-only SQL operations are supported. Certain SQL operations like aggregations and joins might not result in a valid slice.
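A sketch of fetching a slice as a DataFrame (the query and dataset name are placeholders):

```python
slice_df = client.get_slice(
    sql_query='SELECT * FROM example_dataset WHERE feature_1 > 20',
    project_id='example_project',
)
print(slice_df.head())
```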
🚧 Only binary classification models with categorical protected attributes are currently supported.
🚧 Warning
Only administrators can use client.list_org_roles().
📘 Info
Administrators can share any project with any user. If you lack the required permissions to share a project, contact your organization administrator.
📘 Info
Administrators and project owners can unshare any project with any user. If you lack the required permissions to unshare a project, contact your organization administrator.
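Following the parameter tables below, a sharing sketch might look like this (the user is a placeholder):

```python
client.share_project(
    project_id='example_project',
    role='READ',
    user_name='user@example.com',
)
```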
For information on how to customize these objects, see Customizing Your Dataset Schema.
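As a sketch of the serialization helpers described in the tables below, assuming the dataset_info object from earlier:

```python
# Serialize a DatasetInfo to a plain dictionary and reconstruct it.
info_dict = dataset_info.to_dict()
restored_info = fdl.DatasetInfo.from_dict(info_dict)
```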
Parameter | Type | Default | Description |
---|---|---|---|
url | str | None | The URL used to connect to Fiddler |
org_id | str | None | The organization ID for a Fiddler instance. Can be found on the General tab of the Settings page. |
auth_token | str | None | The authorization token used to authenticate with Fiddler. Can be found on the Credentials tab of the Settings page. |
proxies | Optional [dict] | None | A dictionary containing proxy URLs. |
verbose | Optional [bool] | False | If True, client calls will be logged verbosely. |
verify | Optional [bool] | True | If False, the client will allow self-signed SSL certificates from the Fiddler server environment. If True, the SSL certificates need to be signed by a certificate authority (CA). |
Return Type | Description |
---|---|
list | A list containing the project ID string for each project. |

Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | A unique identifier for the project. Must be a lowercase string between 2-30 characters containing only alphanumeric characters and underscores. Additionally, it must not start with a numeric character. |

Return Type | Description |
---|---|
dict | A dictionary mapping project_name to the project ID string specified, once the project is successfully created. |

Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | The unique identifier for the project. |

Return Type | Description |
---|---|
bool | A boolean denoting whether deletion was successful. |

Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | The unique identifier for the project. |

Return Type | Description |
---|---|
list | A list containing the dataset ID strings for each project. |
Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | The unique identifier for the project. |
dataset | dict | None | A dictionary mapping dataset slice names to pandas DataFrames. |
dataset_id | str | None | A unique identifier for the dataset. Must be a lowercase string between 2-30 characters containing only alphanumeric characters and underscores. Additionally, it must not start with a numeric character. |
info | Optional [fdl.DatasetInfo] | None | The Fiddler fdl.DatasetInfo() object used to describe the dataset. |
size_check_enabled | Optional [bool] | True | If True, will issue a warning when a dataset has a large number of rows. |

Return Type | Description |
---|---|
dict | A dictionary containing information about the uploaded dataset. |
Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | The unique identifier for the project. |
dataset_id | str | None | A unique identifier for the dataset. |

Return Type | Description |
---|---|
str | A message confirming that the dataset was deleted. |

Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | The unique identifier for the project. |
dataset_id | str | None | A unique identifier for the dataset. |

Return Type | Description |
---|---|
fdl.DatasetInfo | The fdl.DatasetInfo() object associated with the specified dataset. |
Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | The unique identifier for the project. |
model_id | str | None | A unique identifier for the model. Must be a lowercase string between 2-30 characters containing only alphanumeric characters and underscores. Additionally, it must not start with a numeric character. |
dataset_id | str | None | The unique identifier for the dataset. |
model_info | fdl.ModelInfo | None | A fdl.ModelInfo() object containing information about the model. |

Return Type | Description |
---|---|
str | A message confirming that the model was added. |
Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | The unique identifier for the project. |
model_id | str | None | A unique identifier for the model. |
model_dir | str | None | A path to the directory containing all of the model files needed to run the model. |
deployment_params | Optional[fdl.DeploymentParams] | None | Deployment parameters object for tuning the model deployment spec. Supported from server version 23.1 and above with the Model Deployment feature enabled. |
Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | A unique identifier for the project. |
model_id | str | None | A unique identifier for the model. |
deployment_params | Optional[fdl.DeploymentParams] | None | Deployment parameters object for tuning the model deployment spec. |

Return Type | Description |
---|---|
None | Returns None |
Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | The unique identifier for the project. |
model_id | str | None | A unique identifier for the model. |

Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | The unique identifier for the project. |
model_id | str | None | A unique identifier for the model. Must be a lowercase string between 2-30 characters containing only alphanumeric characters and underscores. Additionally, it must not start with a numeric character. |

Return Type | Description |
---|---|
fdl.ModelInfo | The ModelInfo object associated with the specified model. |

Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | The unique identifier for the project. |

Return Type | Description |
---|---|
list | A list containing the string ID of each model. |
Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | The unique identifier for the project. |
model_id | str | None | A unique identifier for the model. |
model_dir | pathlib.Path | None | A path to the directory containing all of the model files needed to run the model. |
force_pre_compute | bool | True | If True, re-run precomputation steps for the model. This can also be done manually by calling client.trigger_pre_computation. |

Return Type | Description |
---|---|
bool | A boolean denoting whether the update was successful. |
Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | The unique identifier for the project. |
model_id | str | None | A unique identifier for the model. |
model_dir | str | None | A path to the directory containing all of the model files needed to run the model. |
deployment_params | Optional[fdl.DeploymentParams] | None | Deployment parameters object for tuning the model deployment spec. Supported from server version 23.1 and above with the Model Deployment feature enabled. |
Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | A unique identifier for the project. |
model_id | str | None | A unique identifier for the model. |
deployment_params | Optional[fdl.DeploymentParams] | None | Deployment parameters object for tuning the model deployment spec. |
wait | Optional[bool] | True | Whether to wait for the async job to finish (True) or return immediately (False). |

Return Type | Description |
---|---|
None | Returns None |
Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | The unique identifier for the project. |
model_id | str | None | The unique identifier for the model. |

Return Type | Description |
---|---|
dict | A dictionary with all related fields for the model deployment. |
Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | The unique identifier for the project. |
model_id | str | None | The unique identifier for the model. |
active | Optional [bool] | None | Set False to scale down the model deployment and True to scale up. |
replicas | Optional[int] | None | The number of replicas running the model. |
cpu | Optional [int] | None | The amount of CPU (milli cpus) reserved per replica. |
memory | Optional [int] | None | The amount of memory (mebibytes) reserved per replica. |
wait | Optional[bool] | True | Whether to wait for the async job to finish (True) or not (False). |

Return Type | Description |
---|---|
dict | A dictionary with all related fields for the model deployment. |
Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | The unique identifier for the project. |
model_id | str | None | A unique identifier for the model. Must be a lowercase string between 2-30 characters containing only alphanumeric characters and underscores. Additionally, it must not start with a numeric character. |
event | dict | None | A dictionary mapping field names to field values. Any fields found that are not present in the model's ModelInfo object will be dropped from the event. |
event_id | Optional [str] | None | A unique identifier for the event. If not specified, Fiddler will generate its own ID, which can be retrieved using the get_slice API. |
update_event | Optional [bool] | None | If True, will only modify an existing event, referenced by event_id. If no event is found, no change will take place. |
event_timestamp | Optional [int] | None | The timestamp for when the event took place. If no timestamp is provided, the current time will be used. |
casting_type | Optional [bool] | False | If True, will try to cast the data in the event to be in line with the data types defined in the model's ModelInfo object. |
dry_run | Optional [bool] | False | If True, the event will not be published, and instead a report will be generated with information about any problems with the event. Useful for debugging issues with event publishing. |

Return Type | Description |
---|---|
str | A string with a UUID acknowledging that the event was successfully received. |
Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | The unique identifier for the project. |
model_id | str | None | A unique identifier for the model. |
batch_source | Union[pd.DataFrame, str] | None | Either a pandas DataFrame containing a batch of events, or the path to a file containing a batch of events. Supported file types are CSV (.csv), Parquet (.pq), and pickled DataFrame (.pkl). |
id_field | Optional [str] | None | The field containing event IDs for events in the batch. If not specified, Fiddler will generate its own ID, which can be retrieved using the get_slice API. |
update_event | Optional [bool] | None | If True, will only modify existing events, referenced by id_field. If an ID is provided for which there is no event, no change will take place. |
timestamp_field | Optional [str] | None | The field containing timestamps for events in the batch. If no timestamp is provided for a given row, the current time will be used. |
data_source | Optional [fdl.BatchPublishType] | None | The location of the data source provided. By default, Fiddler will try to infer the value. Can be one of fdl.BatchPublishType.DATAFRAME, fdl.BatchPublishType.LOCAL_DISK, or fdl.BatchPublishType.AWS_S3. |
casting_type | Optional [bool] | False | If True, will try to cast the data in events to be in line with the data types defined in the model's ModelInfo object. |
credentials | Optional [dict] | None | A dictionary containing authorization information for AWS or GCP. For AWS, the expected keys are 'aws_access_key_id', 'aws_secret_access_key', and 'aws_session_token'. For GCP, the expected keys are 'gcs_access_key_id', 'gcs_secret_access_key', and 'gcs_session_token'. |
group_by | Optional [str] | None | The field used to group events together when computing performance metrics (for ranking models only). |

Return Type | Description |
---|---|
dict | A dictionary object which reports the result of the batch publication. |
Input Parameters | Type | Required | Description |
---|---|---|---|
project_id | string | Yes | The unique identifier for the project |
model_id | string | Yes | The unique identifier for the model |
baseline_id | string | Yes | The unique identifier for the baseline |
type | fdl.BaselineType | Yes | One of: PRE_PRODUCTION, STATIC_PRODUCTION, ROLLING_PRODUCTION |
dataset_id | string | No | Training or validation dataset uploaded to Fiddler for a PRE_PRODUCTION baseline |
start_time | int | No | Seconds since epoch to be used as the start time for a STATIC_PRODUCTION baseline |
end_time | int | No | Seconds since epoch to be used as the end time for a STATIC_PRODUCTION baseline |
offset | int | No | Offset in seconds relative to the current time to be used for a ROLLING_PRODUCTION baseline |
window_size | int | No | Width of the window in seconds to be used for a ROLLING_PRODUCTION baseline |

Return Type | Description |
---|---|
Baseline | Baseline schema object with all the configuration parameters |
Input Parameters | Type | Required | Description |
---|---|---|---|
project_id | string | Yes | The unique identifier for the project |
model_id | string | Yes | The unique identifier for the model |
baseline_id | string | Yes | The unique identifier for the baseline |

Return Type | Description |
---|---|
Baseline | Baseline schema object with all the configuration parameters |
Input Parameters | Type | Required | Description |
---|---|---|---|
project_id | string | Yes | The unique identifier for the project |
model_id | string | No | The unique identifier for the model |

Return Type | Description |
---|---|
list | List of baseline config objects |
Input Parameters | Type | Required | Description |
---|---|---|---|
project_id | string | Yes | The unique identifier for the project |
model_id | string | Yes | The unique identifier for the model |
baseline_id | string | Yes | The unique identifier for the baseline |
Input Parameters | Type | Default | Description |
---|---|---|---|
config_info | dict | None | Monitoring config info for an entire org or a project or a model. |
project_id | Optional [str] | None | The unique identifier for the project. |
model_id | Optional [str] | None | The unique identifier for the model. |
Input Parameters | Type | Default | Description |
---|---|---|---|
name | str | None | A name for the alert rule. |
project_id | str | None | The unique identifier for the project. |
model_id | str | None | The unique identifier for the model. |
alert_type | fdl.AlertType | None | One of: AlertType.PERFORMANCE, AlertType.DATA_DRIFT, AlertType.DATA_INTEGRITY, AlertType.SERVICE_METRICS, or AlertType.STATISTIC. |
metric | fdl.Metric | None | When alert_type is AlertType.SERVICE_METRICS, this should be Metric.TRAFFIC. When alert_type is AlertType.PERFORMANCE, choose one of the following based on the ML model task. For binary classification: Metric.ACCURACY, Metric.TPR, Metric.FPR, Metric.PRECISION, Metric.RECALL, Metric.F1_SCORE, Metric.ECE, Metric.AUC. For regression: Metric.R2, Metric.MSE, Metric.MAE, Metric.MAPE, Metric.WMAPE. For multi-class classification: Metric.ACCURACY, Metric.LOG_LOSS. For ranking: Metric.MAP, Metric.MEAN_NDCG. When alert_type is AlertType.DATA_DRIFT, choose one of: Metric.PSI, Metric.JSD. When alert_type is AlertType.DATA_INTEGRITY, choose one of: Metric.RANGE_VIOLATION, Metric.MISSING_VALUE, Metric.TYPE_VIOLATION. When alert_type is AlertType.STATISTIC, choose one of: Metric.AVERAGE, Metric.SUM, Metric.FREQUENCY. |
bin_size | fdl.BinSize | ONE_DAY | Duration for which the metric value is calculated. Choose one of: BinSize.ONE_HOUR, BinSize.ONE_DAY, BinSize.SEVEN_DAYS. |
compare_to | fdl.CompareTo | None | Whether the metric value is compared against a static value or the same bin from a previous time period. One of: CompareTo.RAW_VALUE, CompareTo.TIME_PERIOD. |
compare_period | fdl.ComparePeriod | None | Required only when compare_to is CompareTo.TIME_PERIOD. Choose one of: ComparePeriod.ONE_DAY, ComparePeriod.SEVEN_DAYS, ComparePeriod.ONE_MONTH, ComparePeriod.THREE_MONTHS. |
priority | fdl.Priority | None | One of: Priority.LOW, Priority.MEDIUM, Priority.HIGH. |
warning_threshold | float | None | [Optional] Threshold value; when crossed, a warning-severity alert is triggered. This should be a decimal which represents a percentage (e.g. 0.45). |
critical_threshold | float | None | Threshold value; when crossed, a critical-severity alert is triggered. This should be a decimal which represents a percentage (e.g. 0.45). |
condition | fdl.AlertCondition | None | Specifies whether the rule should trigger when the metric is greater than or less than the thresholds. One of: AlertCondition.LESSER, AlertCondition.GREATER. |
notifications_config | Dict[str, Dict[str, Any]] | None | [Optional] Notifications config object created using the helper method build_notifications_config(). |
columns | List[str] | None | Column names on which the alert rule is to be created. Applicable only when alert_type is AlertType.DATA_INTEGRITY or AlertType.DATA_DRIFT. When alert type is AlertType.DATA_INTEGRITY, it can take [ANY] to check for all columns. |
baseline_id | str | None | Name of the baseline whose histogram is compared against the one derived from current data. When no baseline_id is specified, the default baseline is used. Used only when alert type is AlertType.DATA_DRIFT. |
segment | str | None | The segment to alert on. See Segments for more details. |

Return Type | Description |
---|---|
AlertRule | The created Alert Rule object |
Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | Optional [str] | None | A unique identifier for the project. |
model_id | Optional [str] | None | A unique identifier for the model. |
alert_type | Optional[fdl.AlertType] | None | Alert type. One of: AlertType.PERFORMANCE, AlertType.DATA_DRIFT, AlertType.DATA_INTEGRITY, or AlertType.SERVICE_METRICS. |
metric | Optional[fdl.Metric] | None | When alert_type is SERVICE_METRICS: Metric.TRAFFIC. When alert_type is PERFORMANCE, choose one of the following based on the ML model task. For binary classification: Metric.ACCURACY, Metric.TPR, Metric.FPR, Metric.PRECISION, Metric.RECALL, Metric.F1_SCORE, Metric.ECE, Metric.AUC. For regression: Metric.R2, Metric.MSE, Metric.MAE, Metric.MAPE, Metric.WMAPE. For multi-class classification: Metric.ACCURACY, Metric.LOG_LOSS. For ranking: Metric.MAP, Metric.MEAN_NDCG. When alert_type is DATA_DRIFT: Metric.PSI or Metric.JSD. When alert_type is DATA_INTEGRITY: one of Metric.RANGE_VIOLATION, Metric.MISSING_VALUE, Metric.TYPE_VIOLATION. |
columns | Optional[List[str]] | None | [Optional] List of column names on which the alert rule was created. Note that alert rules matching any column from this list will be returned. |
offset | Optional[int] | None | Pointer to the start of the page index. |
limit | Optional[int] | None | Number of records to be retrieved per page, also referred to as page_size. |
ordering | Optional[List[str]] | None | List of alert rule fields to order by, e.g. ['critical_threshold'] or ['-critical_threshold'] for descending order. |

Return Type | Description |
---|---|
List[AlertRule] | A list containing AlertRule objects returned by the query. |
Input Parameters | Type | Default | Description |
---|---|---|---|
alert_rule_uuid | str | None | The unique system-generated identifier for the alert rule. |
start_time | Optional[datetime] | 7 days ago | Start time to filter triggered alerts, in yyyy-MM-dd format, inclusive. |
end_time | Optional[datetime] | today | End time to filter triggered alerts, in yyyy-MM-dd format, inclusive. |
offset | Optional[int] | None | Pointer to the start of the page index. |
limit | Optional[int] | None | Number of records to be retrieved per page, also referred to as page_size. |
ordering | Optional[List[str]] | None | List of triggered alert fields to order by, e.g. ['alert_time_bucket'] or ['-alert_time_bucket'] for descending order. |

Return Type | Description |
---|---|
List[TriggeredAlerts] | A list containing TriggeredAlerts objects returned by the query. |
Input Parameters | Type | Default | Description |
---|---|---|---|
alert_rule_uuid | str | None | The unique system-generated identifier for the alert rule. |

Return Type | Description |
---|---|
None | Returns None |
Input Parameters | Type | Default | Description |
---|---|---|---|
emails | Optional[str] | None | Comma-separated list of emails. |
pagerduty_services | Optional[str] | None | Comma-separated list of PagerDuty services. |
pagerduty_severity | Optional[str] | None | Severity for the alerts triggered by PagerDuty. |
webhooks | Optional[List[str]] | None | List of valid UUIDs of available webhooks. |

Return Type | Description |
---|---|
Dict[str, Dict[str, Any]] | A dict with email and PagerDuty details. If left unused, an empty string will be stored for these values. |
Input Parameters | Type | Default | Description |
---|---|---|---|
name | str | None | A unique name for the webhook. |
url | str | None | The webhook URL used for sending notification messages. |
provider | str | None | The platform that provides the webhook functionality. Only 'SLACK' is supported. |

Return Type | Description |
---|---|
fdl.Webhook | Details of the webhook created. |
Input Parameters | Type | Default | Description |
---|---|---|---|
uuid | str | None | The unique system-generated identifier for the webhook. |

Return Type | Description |
---|---|
None | Returns None |

Input Parameters | Type | Default | Description |
---|---|---|---|
uuid | str | None | The unique system-generated identifier for the webhook. |

Return Type | Description |
---|---|
fdl.Webhook | Details of the webhook. |

Input Parameters | Type | Default | Description |
---|---|---|---|
limit | Optional[int] | 300 | Number of records to be retrieved per page. |
offset | Optional[int] | 0 | Pointer to the start of the page index. |

Return Type | Description |
---|---|
List[fdl.Webhook] | A list containing webhooks. |
Input Parameters | Type | Default | Description |
---|---|---|---|
name | str | None | A unique name for the webhook. |
url | str | None | The webhook URL used for sending notification messages. |
provider | str | None | The platform that provides the webhook functionality. Only 'SLACK' is supported. |
uuid | str | None | The unique system-generated identifier for the webhook. |

Return Type | Description |
---|---|
fdl.Webhook | Details of the webhook after modification. |
Input Parameters | Type | Default | Description |
---|---|---|---|
notification_status | bool | None | The status of notifications for the alerts. |
alert_config_ids | Optional[List[str]] | None | List of alert IDs to update. |
model_id | Optional[str] | None | The model ID for which to update all alerts. |

Return Type | Description |
---|---|
List[AlertRule] | List of alert rules updated by this method. |
Input Parameters | Type | Required | Description |
---|---|---|---|
metric_id | string | Yes | The unique identifier for the custom metric |

Return Type | Description |
---|---|
fiddler.schema.custom_metric.CustomMetric | Custom metric object with details about the metric |
Input Parameters | Type | Default | Required | Description |
---|---|---|---|---|
project_id | string | | Yes | The unique identifier for the project |
model_id | string | | Yes | The unique identifier for the model |
limit | Optional[int] | 300 | No | Maximum number of items to return |
offset | Optional[int] | 0 | No | Number of items to skip before returning |

Return Type | Description |
---|---|
List[fiddler.schema.custom_metric.CustomMetric] | List of custom metric objects for the given model |
Input Parameters | Type | Required | Description |
---|---|---|---|
name | string | Yes | Name of the custom metric |
project_id | string | Yes | The unique identifier for the project |
model_id | string | Yes | The unique identifier for the model |
definition | string | Yes | The FQL metric definition for the custom metric |
description | string | No | A description of the custom metric |
Input Parameters | Type | Required | Description |
---|---|---|---|
metric_id | string | Yes | The unique identifier for the custom metric |

Input Parameters | Type | Required | Description |
---|---|---|---|
segment_id | string | Yes | The unique identifier for the segment |

Return Type | Description |
---|---|
fdl.Segment | Segment object with details about the segment |
Input Parameters | Type | Default | Required | Description |
---|---|---|---|---|
project_id | string | | Yes | The unique identifier for the project |
model_id | string | | Yes | The unique identifier for the model |
limit | Optional[int] | 300 | No | Maximum number of items to return |
offset | Optional[int] | 0 | No | Number of items to skip before returning |

Return Type | Description |
---|---|
List[fdl.Segment] | List of segment objects for the given model |
Input Parameters | Type | Required | Description |
---|---|---|---|
name | string | Yes | Name of the segment |
project_id | string | Yes | The unique identifier for the project |
model_id | string | Yes | The unique identifier for the model |
definition | string | Yes | The FQL definition for the segment |
description | string | No | A description of the segment |
Input Parameters | Type | Required | Description |
---|---|---|---|
segment_id | string | Yes | The unique identifier for the segment |

Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | A unique identifier for the project. |
model_id | str | None | A unique identifier for the model. |
input_df | pd.DataFrame | None | A pandas DataFrame containing model input vectors as rows. |
chunk_size | Optional[int] | 10000 | The chunk size for fetching predictions, in rows. |

Return Type | Description |
---|---|
pd.DataFrame | A pandas DataFrame containing model predictions for the given input vectors. |
Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | A unique identifier for the project. |
model_id | str | None | A unique identifier for the model. |
input_data_source | Union[fdl.RowDataSource, fdl.EventIdDataSource] | None | Type of data source for the input dataset to compute the explanation on. Only single-row explanations are currently supported. |
ref_data_source | Optional[Union[fdl.DatasetDataSource, fdl.SqlSliceQueryDataSource]] | None | Type of data source for the reference data to compute the explanation on (DatasetDataSource, SqlSliceQueryDataSource). Only used for non-text models and the following methods: 'SHAP', 'FIDDLER_SHAP', 'PERMUTE', 'MEAN_RESET'. |
explanation_type | Optional[str] | 'FIDDLER_SHAP' | Explanation method name. Can be your custom explanation method or one of the following methods: 'SHAP', 'FIDDLER_SHAP', 'IG', 'PERMUTE', 'MEAN_RESET', 'ZERO_RESET'. |
num_permutations | Optional[int] | 300 | For Fiddler SHAP, num_permutations corresponds to the number of coalitions to sample to estimate the Shapley values of each single-reference game. For the permutation algorithms, num_permutations corresponds to the number of permutations from the dataset to use for the computation. |
ci_level | Optional[float] | 0.95 | The confidence level (between 0 and 1). |
top_n_class | Optional[int] | None | For multi-class classification models only. Specifies whether only the top n classes are computed, or all classes (when the parameter is None). |

Return Type | Description |
---|---|
tuple | A named tuple with the explanation results. |
Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | A unique identifier for the project. |
model_id | str | None | A unique identifier for the model. |
data_source | Union[fdl.DatasetDataSource, fdl.SqlSliceQueryDataSource] | None | Type of data source for the input dataset to compute feature impact on. |
num_iterations | Optional[int] | 10000 | The maximum number of ablated model inferences per feature. Used for TABULAR data only. |
num_refs | Optional[int] | 10000 | Number of reference points used in the explanation. Used for TABULAR data only. |
ci_level | Optional[float] | 0.95 | The confidence level (between 0 and 1). Used for TABULAR data only. |
output_columns | Optional[List[str]] | None | Only used for NLP (TEXT inputs) models. Output column names to compute feature impact on. Useful for multi-class classification models. If None, compute for all output columns. |
min_support | Optional[int] | 15 | Only used for NLP (TEXT inputs) models. Specifies a minimum support (number of times a specific word was present in the sample data) to retrieve top words. |
overwrite_cache | Optional[bool] | False | Whether to overwrite the cached feature impact values. |

Return Type | Description |
---|---|
tuple | A named tuple with the feature impact results. |
Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | A unique identifier for the project. |
model_id | str | None | A unique identifier for the model. |
data_source | Union[fdl.DatasetDataSource, fdl.SqlSliceQueryDataSource] | None | Type of data source for the input dataset to compute feature importance on. |
num_iterations | Optional[int] | 10000 | The maximum number of ablated model inferences per feature. |
num_refs | Optional[int] | 10000 | Number of reference points used in the explanation. |
ci_level | Optional[float] | 0.95 | The confidence level (between 0 and 1). |
overwrite_cache | Optional[bool] | False | Whether to overwrite the cached feature importance values. |

Return Type | Description |
---|---|
tuple | A named tuple with the feature importance results. |
Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | A unique identifier for the project. |
dataset_id | str | None | A unique identifier for the dataset. |
query | str | None | Slice query to compute mutual information on. |
column_name | str | None | Column name to compute mutual information with respect to all the columns in the dataset. |
normalized | Optional[bool] | False | If set to True, will compute Normalized Mutual Information. |
num_samples | Optional[int] | 10000 | Number of samples to select for computation. |

Return Type | Description |
---|---|
dict | A dictionary with the mutual information results. |
Input Parameters | Type | Default | Description |
---|---|---|---|
sql_query | str | None | The SQL query used to retrieve the slice. |
project_id | str | None | The unique identifier for the project. The model and/or the dataset to be queried within the project are designated in the sql_query itself. |
columns_override | Optional [list] | None | A list of columns to include in the slice, even if they aren't specified in the query. |

Return Type | Description |
---|---|
pd.DataFrame | A pandas DataFrame containing the slice returned by the query. |
Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | The unique identifier for the project. |
model_id | str | None | The unique identifier for the model. |
data_source | Union[fdl.DatasetDataSource, fdl.SqlSliceQueryDataSource] | None | DataSource for the input dataset to compute fairness on. |
protected_features | list[str] | None | A list of protected features. |
positive_outcome | Union[str, int, float, bool] | None | Value of the positive outcome (from the target column) for fairness analysis. |
score_threshold | Optional [float] | 0.5 | The score threshold used to calculate model outcomes. |

Return Type | Description |
---|---|
dict | A dictionary containing fairness metric results. |
Return Type | Description |
---|---|
dict | A dictionary of users and their roles in the organization. |

Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | The unique identifier for the project. |

Return Type | Description |
---|---|
dict | A dictionary of users and their roles for the specified project. |

Return Type | Description |
---|---|
dict | A dictionary containing information about teams and users. |
Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | The unique identifier for the project. |
role | str | None | The permissions role being shared. Can be one of 'READ', 'WRITE', or 'OWNER'. |
user_name | Optional [str] | None | A username with which the project will be shared. Typically an email address. |
team_name | Optional [str] | None | A team with which the project will be shared. |
Input Parameters | Type | Default | Description |
---|---|---|---|
project_id | str | None | The unique identifier for the project. |
role | str | None | The permissions role being revoked. Can be one of 'READ', 'WRITE', or 'OWNER'. |
user_name | Optional [str] | None | A username whose access to the project will be revoked. Typically an email address. |
team_name | Optional [str] | None | A team whose access to the project will be revoked. |
Input Parameters | Type | Default | Description |
---|---|---|---|
display_name | str | None | A display name for the dataset. |
columns | list | None | A list of fdl.Column objects containing information about the columns. |
files | Optional [list] | None | A list of strings pointing to CSV files to use. |
dataset_id | Optional [str] | None | The unique identifier for the dataset. |
**kwargs | | | Additional arguments to be passed. |
Input Parameters | Type | Default | Description |
---|---|---|---|
df | Union [pd.DataFrame, list] | | Either a single pandas DataFrame or a list of DataFrames. If a list is given, all DataFrames must have the same columns. |
display_name | str | ' ' | A display name for the dataset. |
max_inferred_cardinality | Optional [int] | 100 | If specified, any string column containing fewer than max_inferred_cardinality unique values will be converted to a categorical data type. |
dataset_id | Optional [str] | None | The unique identifier for the dataset. |

Return Type | Description |
---|---|
fdl.DatasetInfo | A fdl.DatasetInfo() object constructed from the pandas DataFrame provided. |
Input Parameters | Type | Default | Description |
---|---|---|---|
deserialized_json | dict | | The dictionary object to be converted. |

Return Type | Description |
---|---|
fdl.DatasetInfo | A fdl.DatasetInfo() object constructed from the dictionary. |

Return Type | Description |
---|---|
dict | A dictionary containing information from the fdl.DatasetInfo() object. |
Input Parameters | Type | Default | Description |
---|---|---|---|
display_name | str | | A display name for the model. |
input_type | fdl.ModelInputType | | A ModelInputType object containing the input type of the model. |
model_task | fdl.ModelTask | | A ModelTask object containing the model task. |
inputs | list | | A list of Column objects corresponding to the inputs (features) of the model. |
outputs | list | | A list of Column objects corresponding to the outputs (predictions) of the model. |
metadata | Optional [list] | None | A list of Column objects corresponding to any metadata fields. |
decisions | Optional [list] | None | A list of Column objects corresponding to any decision fields (post-prediction business decisions). |
targets | Optional [list] | None | A list of Column objects corresponding to the targets (ground truth) of the model. |
framework | Optional [str] | None | A string providing information about the software library and version used to train and run this model. |
description | Optional [str] | None | A description of the model. |
datasets | Optional [list] | None | A list of the dataset IDs used by the model. |
mlflow_params | Optional [fdl.MLFlowParams] | None | A MLFlowParams object containing information about MLflow parameters. |
model_deployment_params | Optional [fdl.ModelDeploymentParams] | None | A ModelDeploymentParams object containing information about model deployment. |
artifact_status | Optional [fdl.ArtifactStatus] | None | An ArtifactStatus object containing information about the model artifact. |
preferred_explanation_method | Optional [fdl.ExplanationMethod] | None | An ExplanationMethod object that specifies the default explanation algorithm to use for the model. |
custom_explanation_names | Optional [list] | [ ] | A list of names that can be passed to the explanation_name argument of the optional user-defined explain_custom method of the model object defined in package.py. |
binary_classification_threshold | Optional [float] | .5 | The threshold used for classifying inferences for binary classifiers. |
ranking_top_k | Optional [int] | 50 | Used only for ranking models. Sets the top k results to take into consideration when computing performance metrics like MAP and NDCG. |
group_by | Optional [str] | None | Used only for ranking models. The column by which to group events for certain performance metrics like MAP and NDCG. |
fall_back | Optional [dict] | None | A dictionary mapping a column name to custom missing value encodings for that column. |
target_class_order | Optional [list] | None | A list denoting the order of classes in the target. Required in the following cases. Binary classification tasks: if the target is of type string, you must tell Fiddler which class is considered the positive class for your output column; provide a list with two elements, where by convention the 0th element is the negative class and the 1st element is the positive class. When your target is boolean, you don't need to specify this argument; by default Fiddler considers True as the positive class. When your target is numerical, you don't need to specify this argument; by default Fiddler considers the higher of the two possible values as the positive class. Multi-class classification tasks: you must tell Fiddler which class corresponds to which output by giving an ordered list of classes; this order should be the same as the order of the outputs. Ranking tasks: if the target is of type string, you must provide a list of all the possible target values in the order of relevance; the first element is the least relevant grade and the last element is the most relevant grade. When your target is numerical, Fiddler considers the smallest value the least relevant grade and the biggest value the most relevant grade. |
**kwargs | | | Additional arguments to be passed. |
Input Parameters | Type | Default | Description |
---|---|---|---|
dataset_info | fdl.DatasetInfo | | The DatasetInfo object from which to construct the ModelInfo object. |
target | str | | The column to be used as the target (ground truth). |
model_task | fdl.ModelTask | None | A ModelTask object containing the model task. |
dataset_id | Optional [str] | None | The unique identifier for the dataset. |
features | Optional [list] | None | A list of columns to be used as features. |
custom_features | Optional[List[CustomFeature]] | None | List of custom feature definitions for a model. Objects of type Multivariate, Vector, ImageEmbedding, or TextEmbedding derived from CustomFeature can be provided. |
metadata_cols | Optional [list] | None | A list of columns to be used as metadata fields. |
decision_cols | Optional [list] | None | A list of columns to be used as decision fields. |
display_name | Optional [str] | None | A display name for the model. |
description | Optional [str] | None | A description of the model. |
input_type | Optional [fdl.ModelInputType] | fdl.ModelInputType.TABULAR | A ModelInputType object containing the input type of the model. |
outputs | Optional [list] | | A list of Column objects corresponding to the outputs (predictions) of the model. |
targets | Optional [list] | None | A list of Column objects corresponding to the targets (ground truth) of the model. |
model_deployment_params | Optional [fdl.ModelDeploymentParams] | None | A ModelDeploymentParams object containing information about model deployment. |
framework | Optional [str] | None | A string providing information about the software library and version used to train and run this model. |
datasets | Optional [list] | None | A list of the dataset IDs used by the model. |
mlflow_params | Optional [fdl.MLFlowParams] | None | A MLFlowParams object containing information about MLflow parameters. |
preferred_explanation_method | Optional [fdl.ExplanationMethod] | None | An ExplanationMethod object that specifies the default explanation algorithm to use for the model. |
custom_explanation_names | Optional [list] | [ ] | A list of names that can be passed to the explanation_name argument of the optional user-defined explain_custom method of the model object defined in package.py. |
binary_classification_threshold | Optional [float] | .5 | The threshold used for classifying inferences for binary classifiers. |
ranking_top_k | Optional [int] | 50 | Used only for ranking models. Sets the top k results to take into consideration when computing performance metrics like MAP and NDCG. |
group_by | Optional [str] | None | Used only for ranking models. The column by which to group events for certain performance metrics like MAP and NDCG. |
fall_back | Optional [dict] | None | A dictionary mapping a column name to custom missing value encodings for that column. |
categorical_target_class_details | Optional [Union[list, int, str]] | None | A list denoting the order of classes in the target. Required in the following cases. Binary classification tasks: if the target is of type string, you must tell Fiddler which class is considered the positive class for your output column. If you provide a single element, it is considered the positive class; alternatively, you can provide a list with two elements, where by convention the 0th element is the negative class and the 1st element is the positive class. When your target is boolean, you don't need to specify this argument; by default Fiddler considers True as the positive class. When your target is numerical, you don't need to specify this argument; by default Fiddler considers the higher of the two possible values as the positive class. Multi-class classification tasks: you must tell Fiddler which class corresponds to which output by giving an ordered list of classes; this order should be the same as the order of the outputs. Ranking tasks: if the target is of type string, you must provide a list of all the possible target values in the order of relevance; the first element is the least relevant grade and the last element is the most relevant grade. When your target is numerical, Fiddler considers the smallest value the least relevant grade and the biggest value the most relevant grade. |

Return Type | Description |
---|---|
fdl.ModelInfo | A fdl.ModelInfo() object constructed from the fdl.DatasetInfo() object provided. |
Input Parameters | Type | Default | Description |
---|---|---|---|
deserialized_json | dict | | The dictionary object to be converted. |

Return Type | Description |
---|---|
fdl.ModelInfo | A fdl.ModelInfo() object constructed from the dictionary. |

Return Type | Description |
---|---|
dict | A dictionary containing information from the fdl.ModelInfo() object. |