Registers a model with Fiddler and uploads a model artifact to be used for explainability and fairness capabilities.
Deprecated
This client method is being deprecated and will not be supported in future versions of the client. Please use client.add_model_artifact() going forward.
For more information, see Uploading a Model Artifact.
Input Parameter | Type | Default | Description |
---|---|---|---|
project_id | str | None | The unique identifier for the project. |
model_id | str | None | A unique identifier for the model. |
artifact_path | pathlib.Path | None | A path to the directory containing all of the model files needed to run the model. |
deployment_type | Optional[str] | 'predictor' | The type of deployment for the model. One of 'predictor' (only a predict endpoint is exposed) or 'executor' (the model's internals are exposed). |
image_uri | Optional[str] | None | A Docker image URI of the form 'registry/image:tag'. If specified, the image will be used to create a new runtime to serve the model. |
namespace | Optional[str] | 'default' | The Kubernetes namespace to use for the newly created runtime. Requires image_uri. |
port | Optional[int] | 5100 | The port to use for the newly created runtime. Requires image_uri. |
replicas | Optional[int] | 1 | The number of replicas running the model. Requires image_uri. |
cpus | Optional[float] | 0.25 | The number of CPU cores reserved per replica. Requires image_uri. |
memory | Optional[str] | '128m' | The amount of memory reserved per replica. Requires image_uri. |
gpus | Optional[int] | 0 | The number of GPU cores reserved per replica. Requires image_uri. |
await_deployment | Optional[bool] | True | If True, the call blocks until deployment completes. |
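The runtime-related parameters (namespace, port, replicas, cpus, memory, gpus) only take effect when image_uri is supplied. A minimal sketch of assembling those keyword arguments, where the image name and resource values are illustrative choices rather than Fiddler defaults:

```python
import pathlib

# Hypothetical image and resource settings; adjust for your registry and workload.
deployment_kwargs = {
    'artifact_path': pathlib.Path('model_dir'),
    'project_id': 'example_project',
    'model_id': 'example_model',
    'deployment_type': 'executor',           # expose the model's internals
    'image_uri': 'my-registry/my-model:v1',  # required for the runtime options below
    'namespace': 'default',
    'port': 5100,
    'replicas': 2,
    'cpus': 0.5,
    'memory': '256m',
    'await_deployment': True,
}

# The call itself requires an authenticated Fiddler client:
# client.upload_model_package(**deployment_kwargs)
```

Keeping the arguments in a dict makes it easy to reuse the same deployment configuration across models.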
```python
import pathlib

PROJECT_ID = 'example_project'
MODEL_ID = 'example_model'

artifact_path = pathlib.Path('model_dir')

client.upload_model_package(
    artifact_path=artifact_path,
    project_id=PROJECT_ID,
    model_id=MODEL_ID
)
```
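Since this method is deprecated, new code should call client.add_model_artifact() instead. The sketch below assumes the replacement takes the same project, model, and artifact-directory arguments; the parameter names shown are assumptions, so verify them against the current client reference.

```python
import pathlib

PROJECT_ID = 'example_project'
MODEL_ID = 'example_model'
artifact_dir = pathlib.Path('model_dir')

# Assumed signature; check the add_model_artifact() reference for exact
# parameter names before relying on this.
# client.add_model_artifact(
#     project_id=PROJECT_ID,
#     model_id=MODEL_ID,
#     model_dir=str(artifact_dir),
# )
```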