11.5 About Compute Resource

The term Compute Resource refers to a service, such as a database or any other backend service, to which an interpreter connects.

Note:

You must have the Administrator role to access the Compute Resources page.

The Compute Resources page displays the list of compute resources along with each resource's name, type, comments, and when it was last updated. To view the details of a Compute Resource, click its name. The connection details are displayed on the Oracle Resource page.

11.5.1 Oracle Resource

The Oracle Resource page displays the details of the compute resource that you selected on the Compute Resources page. On this page, you can configure the memory settings (in Gigabytes) that the Python interpreter uses for the selected compute resource.

Note:

You must have Administrator privilege to configure the memory settings.

To manage memory settings for the interpreter:
  1. Name—Displays the name of the selected resource.
  2. Comment—Displays comments, if any.
  3. Memory—You can configure the memory settings (in Gigabytes) for the interpreter in this field. The interpreter supports the Markdown, Python, SQL, Script, and R languages. The allowed ranges, listed below, are also summarized in the sketch that follows this list.
    • For the resource databasename_gpu, the memory settings (in Gigabytes) must be between 8 and 200. The memory setting for gpu configures the amount of host RAM that the interpreter container can use. The GPU VRAM is not configurable; the container has access to all available GPU memory, which is 24 GB for NVIDIA A10 Tensor Core GPUs.
    • For the resource databasename_high, the memory settings (in Gigabytes) must be between 8 and 96.
    • For the resource databasename_medium, the memory settings (in Gigabytes) must be between 4 and 8.
    • For the resource databasename_low, the memory settings (in Gigabytes) must be between 2 and 4.

    Note:

    The Memory setting is applicable only for the Python interpreter.
  4. Connection Type—Displays the database connection type of the resource.
  5. Network Alias—Displays the alias of the network connection.
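
The allowed memory ranges listed above can be captured in a small validation helper. The following is a minimal, hypothetical sketch in Python; the dictionary, function name, and resource-name parsing are assumptions for illustration and are not part of any Oracle or OML interface. It only encodes the documented minimum and maximum values per resource.

  # Hypothetical helper: encodes the documented memory ranges (in GB) for each
  # compute resource level; not part of any Oracle or OML API.
  MEMORY_RANGES_GB = {
      "gpu": (8, 200),
      "high": (8, 96),
      "medium": (4, 8),
      "low": (2, 4),
  }

  def validate_memory_setting(resource_name: str, memory_gb: int) -> None:
      """Raise ValueError if memory_gb falls outside the documented range
      for a resource named like 'databasename_<level>'."""
      level = resource_name.rsplit("_", 1)[-1].lower()
      if level not in MEMORY_RANGES_GB:
          raise ValueError(f"Unknown resource level: {level}")
      low, high = MEMORY_RANGES_GB[level]
      if not low <= memory_gb <= high:
          raise ValueError(
              f"{resource_name}: memory must be between {low} and {high} GB, got {memory_gb}"
          )

  # Example: 200 GB is valid for databasename_gpu but not for databasename_high.
  validate_memory_setting("databasename_gpu", 200)    # passes
  # validate_memory_setting("databasename_high", 200) # would raise ValueError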

11.5.1.1 Resource Services and Notebooks

This topic lists the number of notebooks that you can run concurrently per Autonomous Database instance for each Resource service.

The Resource Services and Number of Notebooks table lists the Compute Resources assigned for running notebooks at the different Resource Service levels: GPU, High, Medium, and Low. The GPU compute capability applies only to the Python interpreter.

Table 11-2 Resource Services and Number of Notebooks

Resource Service: GPU

  OCPUs (Oracle CPUs), ECPUs, and GPUs: 1 NVIDIA A10 Tensor Core GPU
  Memory: 8 GB (DDR4) by default; extensible up to 200 GB
  Number of Concurrent Notebooks and UDFs: The number of concurrent notebooks you can run is determined by:
    • The GPU resources of the region where your ADB instance is deployed, and
    • The number of GPU resources available at the time you run the notebooks

  If GPU resources are not available when requested, you receive an error message; try again later.

  Note:

  The GPU setting includes a HIGH setting on the database server side.

  Note:

  GPU resources are available only on paid Oracle Autonomous Database Serverless. GPU resources are not available if fewer than 16 ECPUs are allocated for OML.

Resource Service: High

  OCPUs (Oracle CPUs), ECPUs, and GPUs: Up to 8 OCPUs
  Memory: 8 GB (up to 16 GB)
  Number of Concurrent Notebooks and UDFs: Up to 3

Resource Service: Medium

  OCPUs (Oracle CPUs), ECPUs, and GPUs: Up to 4 OCPUs
  Memory: 4 GB (up to 8 GB)
  Number of Concurrent Notebooks and UDFs: Up to 1.25 × (number of OCPUs)

  Note:

  The maximum number of concurrent notebooks is calculated by the formula 1.25 × (number of OCPUs) provisioned for the corresponding Autonomous Database instance. OCPU stands for Oracle CPU. For example, if a database is provisioned with 4 OCPUs, then the maximum number of concurrent notebooks at the Medium level is 5 (1.25 × 4). See the sketch following this table.

Resource Service: Low

  OCPUs (Oracle CPUs), ECPUs, and GPUs: 1 OCPU
  Memory: 2 GB (up to 4 GB)
  Number of Concurrent Notebooks and UDFs: Up to 100

Resource Service: TP

  This service is available for the Oracle Autonomous Transaction Processing (ATP) database.

  OCPUs (Oracle CPUs), ECPUs, and GPUs: User specified
  Memory: 2 GB
  Number of Concurrent Notebooks and UDFs: Up to 60

Resource Service: TPURGENT

  This service is available for the Oracle Autonomous Transaction Processing (ATP) database.

  OCPUs (Oracle CPUs), ECPUs, and GPUs: User specified
  Memory: 2 GB
  Number of Concurrent Notebooks and UDFs: Up to 60

Resource Service: ECPU setting

  OML apps on ADB Serverless have ECPU specifications separate from the database.

  OCPUs (Oracle CPUs), ECPUs, and GPUs: User specified
  Memory:
    • Low—2 GB
    • Medium—4 GB
    • High—8 GB
    • GPU—8 GB (default); can be extended up to 200 GB by the Admin

  This allocation is based on the assumption that one VM is allocated for the PDB.

  Number of Concurrent Notebooks and UDFs: All processes share the CPU resources. Running of UDFs is situation-specific.
    • If you are performing data processing at the compute level, you may require more memory depending on your data size. (The Admin can allocate more memory in Oracle Resource.)
    • If the Low resource level is sufficient, you may be able to run approximately 60 UDFs concurrently.
    • If the High resource level is required, you may be able to run approximately 16 UDFs concurrently.
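
The Medium-level concurrency ceiling from the table can be shown with a few lines of arithmetic. This is an illustrative sketch only; rounding a fractional result down is an assumption here, since the table documents only the 1.25 × OCPUs formula.

  import math

  def max_concurrent_notebooks_medium(ocpus: int) -> int:
      """Documented ceiling for the Medium service level:
      up to 1.25 x the number of OCPUs provisioned for the instance.
      Rounding down fractional results is an assumption, not documented."""
      return math.floor(1.25 * ocpus)

  # Example from the table: a database provisioned with 4 OCPUs can run
  # up to 5 concurrent notebooks at the Medium level (1.25 x 4 = 5).
  print(max_concurrent_notebooks_medium(4))  # 5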

For more information on database services and concurrency, see Database Service Names for Autonomous Database.
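
As background for the service names referenced above, the short sketch below shows how a client might connect to one of these database services by its service name (for example, a HIGH-level service). It assumes the python-oracledb driver and an Autonomous Database wallet; the user, passwords, DSN alias, and paths are placeholders, and the sketch is illustrative only rather than part of OML Notebooks.

  # Illustrative only: connecting to an Autonomous Database service whose name
  # encodes the service level (for example, "<databasename>_high"). All
  # credentials and paths below are placeholders, not values from this document.
  import oracledb

  connection = oracledb.connect(
      user="OML_USER",
      password="example_password",
      dsn="mydb_high",                  # TNS alias for the HIGH service
      config_dir="/path/to/wallet",     # directory containing tnsnames.ora
      wallet_location="/path/to/wallet",
      wallet_password="example_wallet_password",
  )

  with connection.cursor() as cursor:
      cursor.execute("SELECT 1 FROM dual")
      print(cursor.fetchone())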