Preparing a Model Artifact

After you train a model, create a model artifact to save with your model in a model catalog. The model catalog provides centralized storage of model artifacts and lets you track model metadata.

A model artifact is a ZIP archive of the files necessary to deploy your model as a model deployment or to load it back into a notebook session.

We provide model catalog examples that include model artifacts for a variety of machine learning frameworks and model formats, including ONNX, Scikit-learn, Keras, PyTorch, LightGBM, and XGBoost. Get started by obtaining our model artifact template, which includes these files:

score.py: Contains your custom logic for loading serialized model objects to memory and defines an inference endpoint (predict()).

runtime.yaml: Provides instructions about which conda environment to use when deploying the model using a Data Science model deployment.

artifact-introspection-test/requirements.txt: Lists the third-party dependencies that you must install in your local environment before running the introspection tests.

artifact-introspection-test/: Provides an optional series of test definitions that you can run on your model artifact before saving it to the model catalog. These model introspection tests catch many of the most common errors made when preparing a model artifact.

The template also includes step-by-step instructions to prepare and save a model artifact to the model catalog. We highly recommend that you follow these steps.
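As a minimal sketch of what score.py might look like (the model file name, pickle serialization, and input shape are assumptions for illustration; adapt them to your framework), the file defines a load_model() function and a predict() inference endpoint:

```python
import os
import pickle

# Assumed file name for the serialized model object; adjust to your artifact.
MODEL_FILE_NAME = "model.pkl"


def load_model():
    """Load the serialized model from the artifact directory into memory."""
    model_dir = os.path.dirname(os.path.realpath(__file__))
    with open(os.path.join(model_dir, MODEL_FILE_NAME), "rb") as f:
        return pickle.load(f)


def predict(data, model=None):
    """Inference endpoint: return model predictions for the input payload."""
    if model is None:
        model = load_model()
    # We assume `data` is a list of feature rows the model can score directly.
    return {"prediction": list(model.predict(data))}
```

Keeping load_model() separate from predict() lets the introspection tests and the model deployment load the model once and reuse it across prediction calls.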

The model artifact directory structure should match this example:

.
|-- runtime.yaml
|-- score.py
|-- <other files, such as the serialized model object>

Any other Python modules imported in score.py, and any code used for inference, must be zipped at the same level as score.py or at any level below it. If any required files are present at folder levels above score.py, they are ignored, which could result in deployment failure.
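As an illustrative sketch (the directory and archive names are assumptions), you could build the ZIP archive so that score.py and runtime.yaml sit at the root of the archive, with all other paths stored relative to the artifact directory:

```python
import os
import zipfile


def zip_artifact(artifact_dir: str, zip_path: str) -> None:
    """Zip an artifact directory so score.py and runtime.yaml sit at the archive root."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _, files in os.walk(artifact_dir):
            for name in files:
                full = os.path.join(root, name)
                # Store paths relative to the artifact directory, never above it.
                zf.write(full, arcname=os.path.relpath(full, artifact_dir))
```

Writing arcname paths relative to the artifact directory guarantees that no file in the archive ends up at a folder level above score.py.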