Working with MLFlow models

This section describes how to upload a model packaged with MLFlow to Fiddler. The following steps walk through the process.

Install MLFlow

You can follow the steps to install MLFlow from their official documentation.

Train a model

The next step is to train the model with MLFlow and make sure a model artifact is saved to the mlruns directory.

You save the model by calling log_model at the end of your training code.

For example, if you're training a sklearn model:

mlflow.sklearn.log_model(sk_model, "model")

The model will be saved to:

mlruns/0/<run-id>/artifacts/model/
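
For reference, a minimal training sketch is shown below. The dataset, model, and run handling here are illustrative assumptions; only the log_model call itself comes from the walkthrough above.

import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Illustrative data and model; substitute your own training code.
X, y = load_iris(return_X_y=True)
sk_model = RandomForestClassifier(n_estimators=100).fit(X, y)

with mlflow.start_run() as run:
    # Save the fitted model as an MLFlow artifact under "model".
    mlflow.sklearn.log_model(sk_model, "model")
    # Keep the run id; the docker build step below needs it.
    print("run id:", run.info.run_id)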

Build a docker image

We can now use the saved artifact to build a docker image.

mlflow models build-docker -m runs:/<run-id>/model

This will create a docker image named 'mlflow-pyfunc-servable'.

Tag this image so that it can be pushed to a docker registry accessible to Fiddler.

Example:

docker tag mlflow-pyfunc-servable:latest manojcheenath/fiddler_examples


docker push manojcheenath/fiddler_examples

Upload model to Fiddler

The model can now be uploaded to Fiddler by calling the register_model API.

deployment_options = DeploymentOptions(
    deployment_type='predictor',
    image_uri='manojcheenath/fiddler_examples:latest',
    port=8080,
)
result = fiddler_client.register_model(
    project_id,
    model_id,
    dataset_id,
    model_info,
    deployment_options,
)
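
The register_model call above assumes a fiddler_client connection and the project_id, model_id, dataset_id, and model_info inputs already exist. A rough sketch of preparing them is below; it assumes the 1.x fiddler-client Python package, and the URL, token, names, and target column are placeholders rather than values from this walkthrough, so adjust to your client version and environment.

import fiddler as fdl

# Placeholder connection details; replace with your own Fiddler deployment.
fiddler_client = fdl.FiddlerApi(
    url='https://your-org.fiddler.ai',
    org_id='your_org',
    auth_token='YOUR_TOKEN',
)

project_id = 'mlflow_demo'    # existing Fiddler project (placeholder name)
model_id = 'mlflow_model'     # name to register the model under (placeholder)
dataset_id = 'baseline_data'  # dataset already uploaded to the project (placeholder)

# Describe the model's inputs and target from the uploaded dataset's schema.
dataset_info = fiddler_client.get_dataset_info(project_id, dataset_id)
model_info = fdl.ModelInfo.from_dataset_info(
    dataset_info=dataset_info,
    target='target',  # placeholder target column name
)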

You can find the complete tutorials here.
