fdl.DeploymentParams

Represents the deployment parameters for a model.

Supported from server version 23.1 and above with Model Deployment feature enabled.

| Input Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| image_uri | Optional[str] | md-base/python/machine-learning:1.0.1 | Reference to the Docker image used to create a new runtime to serve the model. Check the available images on the Model Deployment page. |
| replicas | Optional[int] | 1 | The number of replicas running the model. Minimum value: 1. Maximum value: 10. |
| memory | Optional[int] | 256 | The amount of memory (mebibytes) reserved per replica. Minimum value: 150. Maximum value: 16384 (16 GiB). |
| cpu | Optional[int] | 100 | The amount of CPU (milli CPUs) reserved per replica. Minimum value: 10. Maximum value: 4000 (4 vCPUs). |
```python
deployment_params = fdl.DeploymentParams(
    image_uri="md-base/python/machine-learning:1.1.0",
    cpu=250,
    memory=512,
    replicas=1,
)
```
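
The object is then passed to the client call that uploads and deploys the model artifact. The sketch below is a minimal example, assuming a client method such as add_model_artifact that accepts a deployment_params argument; the URL, org, project, and model names are illustrative, so check your client version's reference for the exact call.

```python
import fiddler as fdl

# Illustrative connection details -- replace with your own.
client = fdl.FiddlerApi(
    url="https://your-org.fiddler.ai",
    org_id="your_org",
    auth_token="YOUR_TOKEN",
)

deployment_params = fdl.DeploymentParams(
    image_uri="md-base/python/machine-learning:1.1.0",
    cpu=250,
    memory=512,
    replicas=1,
)

# Assumed call: pass the deployment parameters when uploading the model artifact.
client.add_model_artifact(
    project_id="example_project",   # illustrative project name
    model_id="example_model",       # illustrative model name
    model_dir="model_dir/",         # directory containing the model package
    deployment_params=deployment_params,
)
```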

📘

What parameters should I set for my model?

Setting the right parameters might not be straightforward, and Fiddler is here to help you.

The parameters might vary depending on the number of input features, the pre-processing steps, and the model itself.

These tables help you define the right parameters; a sizing sketch based on them follows the guide below.

1. Surrogate Models guide

   | Number of input features | Memory (mebibytes) | CPU (milli CPUs) |
   | --- | --- | --- |
   | < 10 | 250 (default) | 100 (default) |
   | < 20 | 400 | 300 |
   | < 50 | 600 | 400 |
   | < 100 | 850 | 900 |
   | < 200 | 1600 | 1200 |
   | < 300 | 2000 | 1200 |
   | < 400 | 2800 | 1300 |
   | < 500 | 2900 | 1500 |
2. User Uploaded guide

   When uploading your own model artifact, start from the table above and increase the memory value according to your model framework and complexity. Surrogate models use the LightGBM framework.

   For example, an NLP model with a TEXT input might need memory set to 1024 or higher and CPU set to 1000.
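
As a starting point, the surrogate-model table above can be encoded in a small helper. The function below is a hypothetical convenience (not part of the Fiddler client) that picks memory and CPU from the number of input features and returns a DeploymentParams object; increase the values for heavier user-uploaded models.

```python
import fiddler as fdl

# Hypothetical helper: encodes the surrogate-model sizing table above as
# (feature-count upper bound, memory in mebibytes, CPU in milli CPUs).
_SIZING_TABLE = [
    (10, 250, 100),
    (20, 400, 300),
    (50, 600, 400),
    (100, 850, 900),
    (200, 1600, 1200),
    (300, 2000, 1200),
    (400, 2800, 1300),
    (500, 2900, 1500),
]

def suggest_deployment_params(num_features: int) -> fdl.DeploymentParams:
    """Return DeploymentParams sized from the surrogate-model guide above."""
    for max_features, memory, cpu in _SIZING_TABLE:
        if num_features < max_features:
            return fdl.DeploymentParams(memory=memory, cpu=cpu)
    # Beyond 500 features, start from the largest row and tune from there.
    return fdl.DeploymentParams(memory=2900, cpu=1500)

# Example: a surrogate model with 120 input features -> memory=1600, cpu=1200.
deployment_params = suggest_deployment_params(120)

# Example from the text: an NLP model with a TEXT input may need more headroom.
nlp_deployment_params = fdl.DeploymentParams(memory=1024, cpu=1000)
```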

📘

Usage Reference

See the usage with:

Learn more about the Model Deployment feature set.