client.upload_model_package

Registers a model with Fiddler and uploads a model artifact to be used for explainability and fairness capabilities.

> 🚧 **Deprecated**
>
> This client method is being deprecated and will not be supported in future versions of the client. Please use client.add_model_artifact() going forward.

For more information, see Uploading a Model Artifact.
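As a minimal migration sketch, assuming add_model_artifact() takes the same project, model, and artifact-directory arguments (the model_dir parameter name is an assumption; check the signature for your client version):

```python
# Hypothetical migration from upload_model_package() to add_model_artifact().
# The model_dir parameter name is an assumption -- verify it against the
# add_model_artifact() signature in your client version.
client.add_model_artifact(
    project_id='example_project',
    model_id='example_model',
    model_dir='model_dir'  # directory containing the model files
)
```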

| Input Parameter | Type | Default | Description |
| :--- | :--- | :--- | :--- |
| project_id | str | None | The unique identifier for the project. |
| model_id | str | None | A unique identifier for the model. |
| artifact_path | pathlib.Path | None | A path to the directory containing all of the model files needed to run the model. |
| deployment_type | Optional [str] | 'predictor' | The type of deployment for the model. Can be one of 'predictor' (only a predict endpoint is exposed) or 'executor' (the model's internals are exposed). |
| image_uri | Optional [str] | None | A URI of the form '<registry>/<image-name>:<tag>'. If specified, the image will be used to create a new runtime to serve the model. |
| namespace | Optional [str] | 'default' | The Kubernetes namespace to use for the newly created runtime. image_uri must be specified. |
| port | Optional [int] | 5100 | The port to use for the newly created runtime. image_uri must be specified. |
| replicas | Optional [int] | 1 | The number of replicas running the model. image_uri must be specified. |
| cpus | Optional [float] | 0.25 | The number of CPU cores reserved per replica. image_uri must be specified. |
| memory | Optional [str] | '128m' | The amount of memory reserved per replica. image_uri must be specified. |
| gpus | Optional [int] | 0 | The number of GPU cores reserved per replica. image_uri must be specified. |
| await_deployment | Optional [bool] | True | If True, will block until deployment completes. |
```python
import pathlib

PROJECT_ID = 'example_project'
MODEL_ID = 'example_model'

# Directory containing the model artifact files
artifact_path = pathlib.Path('model_dir')

# client is an already-connected Fiddler client instance
client.upload_model_package(
    artifact_path=artifact_path,
    project_id=PROJECT_ID,
    model_id=MODEL_ID
)
```
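When image_uri is provided, the remaining runtime parameters control the deployment that serves the model. Below is a hedged sketch with illustrative values only; the image URI and resource settings are assumptions, not recommendations:

```python
import pathlib

# Sketch of a custom-runtime deployment. All values here are illustrative;
# the image URI is hypothetical and resource settings should be sized for
# your own model.
client.upload_model_package(
    artifact_path=pathlib.Path('model_dir'),
    project_id='example_project',
    model_id='example_model',
    deployment_type='executor',           # expose the model's internals
    image_uri='my-registry/my-image:v1',  # hypothetical runtime image
    namespace='default',                  # Kubernetes namespace for the runtime
    port=5100,
    replicas=2,                           # run two replicas of the model
    cpus=0.5,                             # CPU cores reserved per replica
    memory='256m',                        # memory reserved per replica
    gpus=0,
    await_deployment=True                 # block until deployment completes
)
```

Setting await_deployment=False returns immediately and lets the deployment finish in the background.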