Model Deployment to Oracle Functions

In this section we demonstrate how a data scientist can deploy a machine learning model as an Oracle Function. Oracle Functions is a fully managed, multi-tenant, highly scalable, on-demand, and serverless Functions-as-a-Service platform. It is built on enterprise-grade Oracle Cloud Infrastructure and powered by the Fn Project open source engine.

Preparing the Model Artifact with Function Files

The required artifacts for building a function from an ADS model can be generated by calling the prepare method with the parameter fn_artifact_files_included=True. The function name can be set with the fn_name parameter. All the artifacts are generated inside the model artifact folder.

The following files are generated:

    + func.yaml
    + requirements.txt
    + runtime.yaml
    + model.onnx
    + (data-sample.json)
    + (schema.json)

The model artifact contains the following files:

  • The func.py script contains fn specific handling of the input/output data.

  • The func.yaml file contains fn specific versioning information.

  • The requirements.txt file contains a list of required packages for the model. This is auto-generated by the ADSModel.prepare() method, but if you add any custom pre-processing, you should include the relevant packages here.

  • The runtime.yaml file that documents the current notebook environment.

  • The model.onnx is an Onnx model that is a serialized version of your ADSModel object, but without any ADS dependencies.

  • The score.py inference script is used to perform model inference. A user can modify predict() to add any custom logic or data transformations before or after the model estimator object inference endpoint is called.

  • The data-sample.json file is optionally generated and contains an example of the JSON formatting required by the handler in func.py. This is of course editable by the user.

  • The schema.json file is optionally generated, and contains metadata about the data input (feature types, column names, example output, etc.).

The requirements.txt file will contain the libraries required by the core estimator. The version numbers provided will be the ones that are compatible with the ADS environment.


Suppose the trained model object is lr_model. The prepare method can be called as follows:

import os

from ads.common.model import ADSModel

ADS_lr_model = ADSModel.from_estimator(lr_model)
model_artifact_fn = ADS_lr_model.prepare(os.path.join('/', 'home', 'datascience', 'modelfn'),
                        force_overwrite=True, X_sample=test.X, y_sample=test.y,
                        fn_artifact_files_included=True)

ADS will create, in the model artifact directory, the files that are needed for the deployment of the model to Oracle Functions. The files are described below:

File: func.yaml

The file func.yaml defines the runtime environment of the Function. The default memory size is 1024 MB; it is recommended to run the Function with at least 512 MB of memory. The runtime environment is Python 3.6, the same as the notebook environment.

entrypoint: /python/bin/fdk /function/func.py handler
memory: 1024
name: predictor
runtime: python3.6
schema_version: 20180708
source: /predictor
type: http
version: 0.0.1


File: func.py

The file func.py defines the handler function that is called at each invocation of the function. The model is loaded as a global variable, and then score.predict() is called inside the handler.


It is assumed that the data is in JSON format. Arrays should be stored under the input key. Please modify the handler function if the data is in a different structure.

import io
import json

from fdk import response
import sys
import score

model = score.load_model()

def handler(ctx, data: io.BytesIO=None):

    input = json.loads(data.getvalue())['input']
    prediction = score.predict(input, model)

    return response.Response(
        ctx, response_data=json.dumps(prediction),
        headers={"Content-Type": "application/json"})
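Before deploying, you can sanity-check the handler's JSON plumbing locally. The sketch below replaces the real model with a stub (fake_predict and handler_logic are hypothetical names used only for this local test, not part of the generated artifact):

```python
import json

# Hypothetical stand-in for score.predict: returns one prediction per input
# row, so the JSON round trip can be exercised without the ONNX model.
def fake_predict(input_rows, model=None):
    return {'prediction': [1.0 for _ in input_rows]}

def handler_logic(payload: bytes) -> str:
    # Mirrors the handler above: extract the 'input' key, predict, serialize.
    input_rows = json.loads(payload)['input']
    prediction = fake_predict(input_rows)
    return json.dumps(prediction)

body = json.dumps({'input': [[5.1, 3.5, 1.4, 0.2]]}).encode()
print(handler_logic(body))  # {"prediction": [1.0]}
```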


File: score.py

The inference script score.py is used for prediction by the scoring server when the schema is known.

import json
import os

import numpy as np
import onnxruntime as rt
import pandas as pd
from sklearn.preprocessing import LabelEncoder

model_name = 'model.onnx'
transformer_name = 'onnx_data_transformer.json'


def load_model(model_file_name=model_name):
    """
    Loads the model from the serialized format.

    Returns
    -------
    model: an onnxruntime session instance
    """
    model_dir = os.path.dirname(os.path.realpath(__file__))
    contents = os.listdir(model_dir)
    if model_file_name in contents:
        return rt.InferenceSession(os.path.join(model_dir, model_file_name))
    else:
        raise Exception('{0} is not found in model directory {1}'.format(
            model_file_name, model_dir))


def predict(data, model=load_model()):
    """
    Returns the prediction given the model and the data to predict.

    Parameters
    ----------
    model: Model session instance returned by the load_model API
    data: Data in the format expected by the onnxruntime API

    Returns
    -------
    predictions: Output from the scoring server
        Format: {'prediction': output from the model.predict method}
    """
    from io import StringIO
    from pandas import read_json, DataFrame

    X = read_json(StringIO(data)) if isinstance(data, str) else DataFrame.from_dict(data)
    model_dir = os.path.dirname(os.path.realpath(__file__))
    contents = os.listdir(model_dir)
    # Note: User may need to edit this
    if transformer_name in contents:
        onnx_data_transformer = ONNXTransformer()
        onnx_data_transformer.reload_serialized(os.path.join(model_dir, transformer_name))
        X, _ = onnx_data_transformer.transform(X)
        onnx_data_transformer = None

    onnx_transformed_rows = []
    for name, row in X.iterrows():
        onnx_transformed_rows.append(list(row))
    input_data = {'input': onnx_transformed_rows}

    pred =[], input_data)
    return {'prediction': pred[0].tolist()}
The generated requirements.txt can be inspected with:

cat /home/datascience/model-artifact/requirements.txt

Customizing the Generated Artifacts

The function is deployed as a REST endpoint. The default implementation assumes content type application/json and a JSON payload of the form:

    {"input": <list of lists of features>}

If the JSON format is changed, the following line in func.py must be modified:

    input = json.loads(data.getvalue())['input']

This line assumes that the input data does not need any processing and is in the format expected by the predict function of the estimator. If the data requires pre-processing before being passed to the predict method of the estimator, it can be added to score.py.
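For example, if clients send records as a list of objects keyed by feature name (a hypothetical alternative format; sepal_len and sepal_wid are placeholder feature names), the parsing step could be adapted along these lines:

```python
import json

# Hypothetical incoming payload: records keyed by feature name instead of
# the default list-of-lists under the 'input' key.
raw = b'{"records": [{"sepal_len": 5.1, "sepal_wid": 3.5}]}'

records = json.loads(raw)['records']

# Re-shape into the list-of-lists form that the predict function expects.
feature_order = ['sepal_len', 'sepal_wid']
input_data = [[record[name] for name in feature_order] for record in records]

print(input_data)  # [[5.1, 3.5]]
```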

The default behavior of score.predict is to return data in JSON format:

    {"prediction": <output from the predict API of the estimator>}

If you change the return type of score.predict from JSON to any other type, you will have to modify the following lines in the handler in func.py:

return response.Response(
        ctx, response_data=json.dumps(prediction),
        headers={"Content-Type": "application/json"})

Any libraries that need to be installed can be added to requirements.txt. By default, numpy, pandas, scipy, and fdk are included, along with the libraries required by the estimator. Any third-party libraries used in your custom code must also be added to requirements.txt.
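For illustration, a generated requirements.txt for a scikit-learn based model might look like the following (the exact package list and pinned versions depend on your estimator and notebook environment; the actual file pins versions compatible with the ADS environment):

```
fdk
numpy
pandas
scipy
scikit-learn
```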

Deploying with fn in Cloud Shell

Cloud Shell is a web-based terminal accessible from the Oracle Cloud Console. It is free to use, and the OCI CLI, the Fn CLI, and Docker are pre-installed. To access Cloud Shell, log into the Oracle Cloud Console and click the Cloud Shell icon in the top right corner. Cloud Shell comes with 5 GB of persistent storage for the home directory, so you can make changes to your home directory and then continue working on your project when you come back to Cloud Shell. You can read more about Cloud Shell in the Oracle Cloud Infrastructure documentation, and the Fn Project quick start guide for Python has more information about Functions.

Which Models Work?

Oracle Functions is compatible with models that have only open source dependencies; it is able to build the Docker image with, for example, scikit-learn, Keras, and TensorFlow.

Configuring the Tenancy to Work with Functions

This guide is meant to provide high-level details of how to deploy the model with Functions. Oracle Functions is a service-managed, serverless Functions-as-a-Service (FaaS) platform. The Configuring Your Tenancy for Function Development page has detailed steps on how to configure your tenancy to work with Functions. It is highly recommended that you complete this first before working through the following steps.

Download the Model Artifact

In Cloud Shell, create a folder into which you want to download the model artifact. Then go to that directory and run:

oci data-science model get-artifact-content --model-id <model_ocid> --file <zip-file-name>

<zip-file-name> is what you want to name the downloaded artifact file, and <model_ocid> is the model OCID value that you can find in the Oracle Cloud Infrastructure console or by listing models in ADS. For details on listing models in the model catalog, see section Model Catalog.

The downloaded artifact file is a zip file, which you need to unzip:

unzip <zip-file-name>

Set up fn CLI in Cloud Shell

(1) Create an application

Log into the Oracle Cloud Infrastructure console, and in the tenancy and region where you want to host your function, go to Developer Services/Functions. Create an Application and select the correct VCN and subnet that you created for your Functions in the previous Configuring the Tenancy to Work with Functions step. See the Oracle Cloud Infrastructure documentation for more information on setting up the VCN and subnet for Functions.

You can verify your setup by listing Function applications in your compartment in Cloud Shell. You should be able to see the application just created.

fn list apps

(2) Select Cloud Shell Setup

You will see the list of Applications that you have created on the Functions/Applications page in the Oracle Cloud Infrastructure console. Click the name of the Application you want to use. Then, click the Resources/Getting Started tab in the left panel and choose the Cloud Shell Setup option. After you select this option, you will see step-by-step instructions on how to set up the fn CLI, along with commands that you can copy and paste into your Cloud Shell terminal.

(3) Use the context for your region

In your Cloud Shell terminal, you can list the contexts available to you using the command:

fn list contexts

The output will be similar to

CURRENT NAME          PROVIDER   API URL                                         REGISTRY
        default       oracle-cs
    *   us-ashburn-1  oracle-cs

The * indicates the current context. To select a different context for your region, run:

fn use context <oci-region-name>

<oci-region-name> is the name of the region where your Function will be deployed. For example, us-ashburn-1.

In the Cloud Shell setup instructions, your <oci-region-name> is already populated in the command. You can simply copy and paste the command into your terminal and run it.

(4) Update the context with the compartment id you want to use to deploy your Function.

fn update context oracle.compartment-id <compartment-ocid>

<compartment-ocid> is the OCID of the compartment to which you will deploy your Function later.

The <compartment-ocid> of the compartment you are currently using will be populated in the instruction.

(5) Update the context with the location of the Oracle Cloud Infrastructure Registry you want to use

fn update context registry <region-key><object-storage-namespace>/<repo-name>

Your <object-storage-namespace> and <region-key> are populated in the command in the step-by-step instruction.

<repo-name> is the repository name you want to push your image to. You can specify the name of the repo you want to use. If the repo has not yet been created, it will be created for you after executing the command.

<object-storage-namespace> is the auto-generated Object Storage namespace string of the tenancy where your repositories are created (as shown on the Oracle Cloud Infrastructure Tenancy Information page).

<region-key> is the key of the Oracle Cloud Infrastructure Registry region where your repositories are created. For example, the <region-key> for us-ashburn-1 is iad, so a full registry value might look like

(6) Generate an AuthToken to enable login to Oracle Cloud Infrastructure Registry

You need to create an auth token to enable login to the Oracle Cloud Infrastructure Registry (OCIR). This can be done quickly in the Oracle Cloud Infrastructure console, and the link to create the token is included in the step-by-step instructions. See the Oracle Cloud Infrastructure documentation for additional information about auth tokens.

(7) Log into OCIR using the Auth Token as your password

docker login -u '<object-storage-namespace>/<user-name>' <region-key>

The <object-storage-namespace>, <user-name> and <region-key> are already populated in the instruction command.

When prompted for a password, use the auth token you generated.

Deploy Function image

In Cloud Shell, go to the directory where the model artifact was unzipped and then change to the fn-model sub-directory. You will use the fn deploy command, which builds your image, pushes it to the repo in OCIR, and deploys the Function.

Run the following command to deploy the function:

fn --verbose deploy --app <my-app>

<my-app> is the name of the application created in the last step.

Deployment Verification

You can check whether the function was packaged successfully with the app using the fn inspect command:

fn inspect function <my-app> <my-function>

<my-function> is the name of the Function just deployed. This is specified in the func.yaml file in the fn-model sub-directory.

You should see an output similar to the following.

    {
        "annotations": {
            "": "<ocid>/actions/invoke",
            "": "<ocid>",
            "": "<ocid>"
        },
        "app_id": "<ocid>",
        "created_at": "2020-04-22T21:36:53.775Z",
        "id": "<ocid>",
        "idle_timeout": 30,
        "image": "<tenancy-name>/<my-app>/predictor:0.0.2",
        "memory": 1024,
        "name": "predictor",
        "timeout": 30,
        "updated_at": "2020-04-22T21:36:53.775Z"
    }


Note that the REST endpoint is provided in the annotation, and each endpoint is unique.

Invoke Function

You can pass a JSON payload to your Function using a simple cat command (remember to use the input convention consistent with func.py, as described in Customizing the Generated Artifacts).

For example, below we pass a list of feature vectors stored in the file payload.json to <my-function>. Do not forget to specify the content type, which is application/json in most cases:

cat payload.json | fn invoke <my-app> <my-function> --content-type application/json

Sample payload.json:

    {
        "input": [ ... ]
    }

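A payload.json in this format can also be generated programmatically; a minimal sketch (the feature values are hypothetical placeholders):

```python
import json

# Hypothetical feature rows; replace with your own test vectors.
features = [[5.1, 3.5, 1.4, 0.2],
            [6.2, 3.4, 5.4, 2.3]]

# The default handler expects the rows under the 'input' key.
with open('payload.json', 'w') as f:
    json.dump({'input': features}, f)
```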
Sample output:

    {"prediction": [1.0]}

Using Oracle Functions as a Backend Source of the Oracle Cloud Infrastructure API Gateway

Set up Tenancy for API Gateway Development

Before using the API Gateway service to create API gateways and deploy APIs on them, you have to complete the following steps to set up your tenancy for API gateway development.

  • Create groups and users to use API Gateway

  • Create compartments to own network resources and API Gateway resources in the tenancy

  • Create a VCN to use with API Gateway

  • Create policies to control access to network and API Gateway-related resources

More detailed information can be found in Preparing for API Gateway and Configuring Your Tenancy for API Gateway Development.

Creating an API Gateway

From the Oracle Cloud Infrastructure console under Developer Services/API Gateway, click Create Gateway. Enter the name of the compartment you want to use for the API Gateway and select the VCN and subnet.

Deploying Your Function as a Backend to API Gateway

  1. From Developer Services/API Gateway in the Oracle Cloud Infrastructure Console, select the compartment containing your API Gateway. Click on the API gateway you want to use. Select Resources/Deployments from left panel, and then click Create Deployment.

  2. In Basic Information of Create Deployment, specify the Name, Path Prefix, and Compartment for deploying your API. Then click Next.

  3. In Routes setup, specify HTTP METHOD and endpoint root path, and also select Oracle Functions as TYPE. Then select your Function Application and the Function. Click Next and then Create.

  4. Copy your API Gateway deployment URL from the Oracle Cloud Infrastructure console page and append to it the function endpoint path you defined in the previous step (for example, /predict).

  5. You can make a POST call in the Cloud Shell terminal and pass in the payload.

    curl -k -X POST <API-deployment-endpoint>/<path> -d @payload.json --header "Content-type:application/json"

    Sample output:

    {"prediction": [1.0]}
  6. You can track metrics such as the number of requests and latency of the API Gateway under the Resources/Metrics tab on the left panel of Oracle Cloud Infrastructure console page.