Billing Autonomous AI Database on Dedicated Exadata Infrastructure

Oracle Autonomous AI Database on Dedicated Exadata Infrastructure uses specific algorithms to allocate and bill for usage of the compute used by Autonomous AI Databases. Understanding these algorithms can help you determine how best to create and configure your Autonomous AI Databases to meet performance goals in the most cost-effective fashion.

CPU Billing Details

Oracle Autonomous AI Database on Dedicated Exadata Infrastructure computes CPU billing as follows:

  1. CPU usage for each Autonomous AI Database is measured each second in units of whole ECPU or OCPU.

    a. A stopped Autonomous AI Database uses zero ECPU or OCPU. When an Autonomous AI Database is stopped, you are not billed.

    b. A running Autonomous AI Database uses its allocated number of ECPUs or OCPUs plus any additional ECPUs or OCPUs due to auto-scaling. When an Autonomous AI Database is running, you are billed for the number of CPUs currently allocated to the database, whether specified at initial creation or later by a manual scaling operation. Additionally, if auto-scaling is enabled for the database, you are billed for each second for any additional CPUs the database is using as the result of being automatically scaled up.

    Note: Creating AVMC and ACD resources does not initiate billing. So, even though you assign a total CPU count to an AVMC and each ACD consumes 8 ECPUs or 2 OCPUs per node when created, these CPUs are not billed. Only after you provision an Autonomous AI Database in an AVMC and an underlying ACD, and that database is actively running, are the CPUs it uses billed. As a result, you can create ACDs within AVMCs to organize and group your databases by line of business, functional area, or some other scheme without incurring costs.

    c. When you create an Autonomous AI Database, by default Oracle reserves additional CPUs to ensure that the database can run with at least 50% capacity even in case of any node failures. You can change the percentage of CPUs reserved across nodes to 0% or 25% while provisioning an ACD. See Node failover reservation in Create an Autonomous Container Database for instructions. These additional CPUs are not included in the billing.

    Note: Autonomous AI Database on Dedicated Exadata Infrastructure on Oracle Database@AWS supports the ECPU compute model only.

  2. The per-second measurements are averaged across each hour interval for each Autonomous AI Database.

  3. The per-hour averages for the Autonomous AI Databases are added together to determine the CPU usage per hour across the entire Autonomous VM Cluster resource.
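The three steps above can be sketched in Python. This is an illustrative model with hypothetical data, not Oracle's billing implementation: per-second whole-ECPU samples per database (step 1) are averaged over an hour (step 2), and the per-database hourly averages are summed to the AVMC level (step 3).

```python
def hourly_average(samples_per_second):
    """Step 2: average one hour of per-second ECPU samples for one database."""
    return sum(samples_per_second) / len(samples_per_second)

def avmc_hourly_usage(per_db_samples):
    """Step 3: sum each database's hourly average to get AVMC-level usage."""
    return sum(hourly_average(s) for s in per_db_samples.values())

# Step 1 (hypothetical samples): database "a" runs at 4 ECPUs for the whole
# hour; database "b" is stopped (0 ECPUs) for the first half-hour, then
# runs at 2 ECPUs for the second half-hour.
samples = {
    "a": [4] * 3600,
    "b": [0] * 1800 + [2] * 1800,
}
print(avmc_hourly_usage(samples))  # 4.0 + 1.0 = 5.0 ECPUs for the hour
```

Note how the stopped half-hour for database "b" lowers its hourly average, matching the rule that a stopped database uses zero ECPUs.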

Autonomous AI Database on Dedicated Exadata Infrastructure database compute costs are aggregated and reported at the AVMC level, covering all active Autonomous AI Databases across all ACDs in the AVMC. OCI Cost Analysis can provide the AVMC’s usage and cost.

To estimate the cost per Autonomous AI Database, total the ECPUs allocated across the Autonomous AI Databases and allocate the cost based on each database's share of total CPU consumption. For example:

Suppose the AVMC reports 1500 ECPUs billed for a billing period and three Autonomous AI Databases are active: Database A with 10 ECPUs, Database B with 20 ECPUs, and Database C with 30 ECPUs. The total allocation is 60 ECPUs, so the cost split is 10/60 (about 16.7%) for Database A, 20/60 (about 33.3%) for Database B, and 30/60 (50%) for Database C; that is, 250, 500, and 750 of the billed ECPUs, respectively.

This assumes fixed CPU sizes with no auto-scaling, and that all three Autonomous AI Databases were running for the entire billing period. For greater accuracy, use the ECPUs Allocated metric to capture actual ECPU usage per Autonomous AI Database.
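The proportional split in the example can be sketched as follows; the database names and ECPU counts come from the example above, and the function name is illustrative:

```python
def split_cost(total_billed_ecpus, db_ecpus):
    """Allocate AVMC-level billed ECPUs to databases by their CPU share."""
    total = sum(db_ecpus.values())
    return {name: total_billed_ecpus * ecpus / total
            for name, ecpus in db_ecpus.items()}

# 1500 ECPUs billed at the AVMC level, split across three databases.
split = split_cost(1500, {"A": 10, "B": 20, "C": 30})
print(split)  # {'A': 250.0, 'B': 500.0, 'C': 750.0}
```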

Elastic Pool Billing

An elastic pool allows you to consolidate your Autonomous AI Database instances in terms of their compute resource billing.

You can think of an elastic pool like a mobile phone service “family plan,” except this applies to your Autonomous AI Database instances. Instead of paying individually for each database, the databases are grouped into a pool in which one instance, the leader, is charged for the compute usage associated with the entire pool. See Consolidate Autonomous AI Database Instances Using Elastic Pools for complete details about elastic resource pools.

Elastic resource pool usage:

Using an elastic pool, you can provision up to four times the number of ECPUs over your selected pool size, and database instances in the pool can be provisioned with as little as 1 ECPU each. Outside of an elastic pool, the minimum is 2 ECPUs per database instance. For example, with a pool size of 128 ECPUs, you can provision 512 Autonomous AI Database instances (when each instance has 1 ECPU). In this example, you are billed for the pool size of 128 ECPUs while having access to 512 Autonomous AI Database instances. In contrast, if you individually provision 512 Autonomous AI Database instances without an elastic pool, you must allocate a minimum of 2 ECPUs to each instance and would pay for 1024 ECPUs. Using an elastic pool therefore provides up to 87% compute cost savings.
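The savings figure in the example above follows from simple arithmetic, sketched here with the numbers from the text:

```python
# Cost comparison from the example: 512 instances at the 1-ECPU
# elastic-pool minimum versus the 2-ECPU minimum outside a pool.
POOL_SIZE = 128             # ECPUs billed for the pool
INSTANCES = 512             # up to 4x the pool size, at 1 ECPU each
MIN_ECPUS_OUTSIDE_POOL = 2  # per-instance minimum without a pool

individual_cost = INSTANCES * MIN_ECPUS_OUTSIDE_POOL  # 1024 ECPUs
pool_cost = POOL_SIZE                                 # 128 ECPUs
savings = 1 - pool_cost / individual_cost
print(f"{savings:.1%}")  # 87.5%, the "up to 87%" savings cited above
```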

After creating an elastic pool, the total ECPU usage for a given hour is charged to the Autonomous AI Database instance that is the pool leader. Except for the pool leader, individual Autonomous AI Database instances that are pool members are not charged for ECPU usage while they are members of an elastic pool.
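This attribution rule can be sketched as follows. The instance names are hypothetical, and this is a simplified model of who gets charged, not of how the billed amount itself is computed:

```python
def charge_pool_hour(leader, hourly_usage_by_member):
    """Attribute one hour of elastic-pool ECPU usage: members are charged
    nothing; the pool's total usage is charged to the pool leader."""
    charges = {name: 0 for name in hourly_usage_by_member}
    charges[leader] = sum(hourly_usage_by_member.values())
    return charges

usage = {"leader_db": 4, "member_1": 2, "member_2": 2}
print(charge_pool_hour("leader_db", usage))
# {'leader_db': 8, 'member_1': 0, 'member_2': 0}
```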

For more details, see How to Achieve up to 87% Compute Cost Savings with Elastic Resource Pools on an Autonomous AI Database.

Elastic Pool Billing when a Pool is Created or Terminated

When an elastic pool is created or terminated, the pool leader is billed for the entire hour for the elastic pool. In addition, individual instances that are added to or removed from the pool are billed for any compute usage that occurs while the instance is not in the elastic pool (in this case, billing applies to the individual Autonomous AI Database instance).

Elastic Pool Billing when a Pool Member or Leader Leaves the Pool

Billing for an Autonomous AI Database instance that leaves an elastic pool returns to individual instance billing, based on the compute resources that the individual instance uses.