Bring Your Own Container

You can build and use a custom container (Bring Your Own Container, or BYOC) when you create a job and its job runs.

The maximum size of a container image that you can use with jobs is 25 GB. Larger container images slow down job provisioning because the image must be pulled from the Container Registry. We recommend that you use the smallest container image possible.

To run the job, you create your own Dockerfile and build an image. Start with a simple Dockerfile that uses the Python slim image. The Dockerfile is designed so that you can make both local and remote builds. Use the local build when you're testing against local code; during local development, you don't need to build a new image for every code change.

Use the remote option when you consider the code complete and want to run it as a job, as in this Dockerfile example:

ARG type

FROM python:3.8-slim AS base


RUN python -m pip install \
        parse

FROM base AS run-type-local
# nothing to see here

FROM base AS run-type-remote
CMD ["python", ""]

FROM run-type-${type} AS final

Following is a sample file:

import datetime
import os
import time

import oci


# Switch between local API key authentication and the job's resource principal.
class Job:
    def __init__(self):
        rp_version = os.environ.get("OCI_RESOURCE_PRINCIPAL_VERSION", "UNDEFINED")
        if not rp_version or rp_version == "UNDEFINED":
            # RUN LOCAL TEST
            self.signer = oci.config.from_file("~/.oci/config", "BIGDATA")
        else:
            # RUN AS JOB
            self.signer = oci.auth.signers.get_resource_principals_signer()


job = Job()

print(
    "Start logging for job run: {}".format(
        os.environ.get("JOB_RUN_OCID", "LOCAL")
    )
)
print("Current timestamp in UTC: {}".format(str(datetime.datetime.utcnow())))

print("Delay 5s")
time.sleep(5)

print("... another stdout")

print("Print all environment variables and values")
for item, value in os.environ.items():
    print("{}: {}".format(item, value))

print("Docker Job Done.")
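The environment-variable check in the sample decides whether the script authenticates with a local API key or with the job run's resource principal. That switch can be isolated into a small, testable helper; this is a sketch, and `detect_run_mode` is a hypothetical name, not part of the OCI SDK:

```python
import os


def detect_run_mode(env=None):
    """Return "job" when running as a job run (a resource principal is
    available in the environment), or "local" on a development machine."""
    env = os.environ if env is None else env
    rp_version = env.get("OCI_RESOURCE_PRINCIPAL_VERSION", "UNDEFINED")
    return "local" if (not rp_version or rp_version == "UNDEFINED") else "job"


print(detect_run_mode({}))                                         # local
print(detect_run_mode({"OCI_RESOURCE_PRINCIPAL_VERSION": "2.2"}))  # job
```

Keeping the check pure (it takes the environment as a parameter) lets you exercise both branches in a unit test without setting real environment variables.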

We've provided several examples to help you get started.

Before you can push and pull images to and from Oracle Cloud Infrastructure Registry (also known as Container Registry), you must have an Oracle Cloud Infrastructure authorization token. You only see the auth token string when you create it, so be sure to copy the auth token to a secure location immediately.

  1. In the upper right corner of the Console, open the Profile menu and then click User Settings to view the details.
  2. On the Auth Tokens page, click Generate Token.
  3. Enter a friendly description for the auth token. Avoid entering confidential information.
  4. Click Generate Token. The new auth token is displayed.
  5. Copy the auth token immediately to a secure location where you can retrieve it later. You won't see the auth token again in the Console.
  6. Close the Generate Token dialog.
  7. Open a terminal window on your local machine.
  8. Sign in to the Container Registry so that you can build, run, test, tag, and push the container image.
    docker login -u '<tenant-namespace>/<username>' <region>
    Run locally

    Builds the Dockerfile without including the code, for quick runs, debugging, and so on:

    docker build --build-arg type=local -t byoc .
    Run as a job

    Builds the Dockerfile with the code in it, ready to run as a job:

    docker build --build-arg type=remote -t byoc .
    Local testing with the code location mounted

    The API signing key is mounted from your local machine into the container user's home directory at ~/.oci. If you use a different user in the docker image, you have to change the /home/datascience/.oci path. For example, if you use a root account, change the path to /root/.oci.

    docker run --rm -v $HOME/.oci:/home/datascience/.oci -v $PWD:/app byoc python /app/
    Local testing with the code in the container

    Even when the container is built using the remote option to store the code in the image, for a local run the OCI API auth token is still required.

    docker run --rm -v $HOME/.oci:/home/datascience/.oci -v $PWD:/app byoc
  9. Enter the container and browse the code:
    docker run -it byoc sh
  10. Tag your local container image:
    docker login -u '<tenant-namespace>/<username>' <region>
    docker tag byoc:latest <region>/<tenancy-name>/byoc:1.0
  11. Push your container image:
    docker push <region><tenancy>/byoc:1.0
  12. Ensure that jobs has a policy for the resource principal that allows it to read repositories in Container Registry from the compartment where you stored the image.
    allow dynamic-group {<your-dynamic-group-name>} to read repos in compartment {<your-compartment-name>}
  13. (Optional) Sign the container image. This is only required if you're using image signature verification.
  14. (Optional) Ensure that jobs has a policy for the resource principal to let it use the vault service from the compartment where the vault keys for the image signature are stored. The policy is needed only for image signature verification.
    Allow dynamic-group <dynamic-group-name> to use vaults in compartment <compartment-name>
    Allow dynamic-group <dynamic-group-name> to use keys in compartment <compartment-name>
    Allow dynamic-group <dynamic-group-name> to use secret-family in compartment <compartment-name>
  15. Choose one of the following options to create a job:
    • BYOC version 2: Create a job using the Environment configuration window pop-up. See step 9 in Creating a Job.
    • BYOC version 1: Create a job using the job environment variable, pointing to the location of the container image in your OCIR to run it as a job:

      BYOC version 1 is now deprecated and isn't the recommended method.
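The local-testing `docker run` invocations in step 8 differ only in whether the code directory is mounted and where the API key lands inside the container. A minimal sketch of assembling that command line in Python (the `docker_run_cmd` helper and its defaults are hypothetical, for illustration only):

```python
import os


def docker_run_cmd(image="byoc", user_home="/home/datascience", code_dir=None):
    """Build the argument list for a local test run of the BYOC image.

    user_home must match the user inside the image (for example "/root"
    when the image runs as root), because the local API key is mounted
    into that user's ~/.oci directory.
    """
    oci_key_dir = os.path.join(os.path.expanduser("~"), ".oci")
    cmd = [
        "docker", "run", "--rm",
        # mount the local OCI API key into the container user's home
        "-v", "{}:{}/.oci".format(oci_key_dir, user_home),
    ]
    if code_dir is not None:
        # mount the code for a local build that doesn't bake the code in
        cmd += ["-v", "{}:/app".format(code_dir)]
    cmd.append(image)
    return cmd


print(" ".join(docker_run_cmd(code_dir="/work/src")))
```

The returned list can be passed directly to `subprocess.run`, which avoids shell-quoting problems with paths that contain spaces.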