About Exadata Cloud at Customer Instances

Exadata System Configuration

Oracle Database Exadata Cloud at Customer is offered in the following starter system configurations:

  • Base System: Containing two compute nodes and three Exadata Storage Servers.

    Previously known as an Eighth Rack, a Base System is an entry-level configuration that contains Exadata Storage Servers with significantly less storage capacity and compute nodes with significantly less memory and processing power than other configurations.

  • Quarter Rack: Containing two compute nodes and three Exadata Storage Servers.

  • Half Rack: Containing four compute nodes and six Exadata Storage Servers.

  • Full Rack: Containing eight compute nodes and 12 Exadata Storage Servers.

Exadata Cloud at Customer system configurations are all based on Oracle Exadata X6 or Oracle Exadata X7 systems.

Each starter system configuration is equipped with a fixed amount of memory, storage, and network resources. However, you can choose how many compute node (database server) CPU cores are enabled. This choice enables you to scale an Exadata Cloud at Customer configuration to meet workload demands and only pay for the processing power that you require. Each database server must contain the same number of enabled CPU cores.
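
The following minimal sketch illustrates this rule. The shape data is taken from the X7 specification table below; the function name and validation logic are illustrative only and are not part of any Oracle tooling.

```python
# Illustrative sketch: validate a requested number of enabled CPU cores against
# the rule that every database server (compute node) must have the same number
# of enabled cores. Shape figures are from the X7 table in this section; the
# function itself is hypothetical.

X7_SHAPES = {
    # shape: (number of compute nodes, total maximum enabled CPU cores)
    "Base System":  (2, 44),
    "Quarter Rack": (2, 92),
    "Half Rack":    (4, 184),
    "Full Rack":    (8, 368),
}

def enabled_cores_per_node(shape: str, total_enabled: int) -> int:
    """Return the per-node core count for a requested total, or raise if invalid."""
    nodes, max_total = X7_SHAPES[shape]
    if total_enabled % nodes != 0:
        raise ValueError(f"{total_enabled} cores cannot be split evenly across {nodes} nodes")
    if not 0 < total_enabled <= max_total:
        raise ValueError(f"total enabled cores must be between 1 and {max_total} for a {shape}")
    return total_enabled // nodes

print(enabled_cores_per_node("Quarter Rack", 48))  # -> 24 cores per node
```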

Starter System Specifications

The following table outlines the technical specifications for each Exadata Cloud at Customer system configuration based on Oracle Exadata X6 hardware.

| Specification | Eighth Rack | Quarter Rack | Half Rack | Full Rack |
|---|---|---|---|---|
| Number of Compute Nodes | 2 | 2 | 4 | 8 |
| — Total Maximum Number of Enabled CPU Cores | 68 | 84 | 168 | 336 |
| — Total RAM Capacity | 480 GB | 1440 GB | 2880 GB | 5760 GB |
| Number of Exadata Storage Servers | 3 | 3 | 6 | 12 |
| — Total Raw Flash Storage Capacity | 19.2 TB | 38.4 TB | 76.8 TB | 153.6 TB |
| — Total Raw Disk Storage Capacity | 144 TB | 288 TB | 576 TB | 1152 TB |
| — Total Usable Storage Capacity | 42 TB | 84 TB | 168 TB | 336 TB |

The following table outlines the technical specifications for each Exadata Cloud at Customer system configuration based on Oracle Exadata X7 hardware.

| Specification | Base System | Quarter Rack | Half Rack | Full Rack |
|---|---|---|---|---|
| Number of Compute Nodes | 2 | 2 | 4 | 8 |
| — Total Maximum Number of Enabled CPU Cores | 44 | 92 | 184 | 368 |
| — Total RAM Capacity | 480 GB | 1440 GB | 2880 GB | 5760 GB |
| Number of Exadata Storage Servers | 3 | 3 | 6 | 12 |
| — Total Raw Flash Storage Capacity | 19.2 TB | 76.8 TB | 153.6 TB | 307.2 TB |
| — Total Raw Disk Storage Capacity | 144 TB | 360 TB | 720 TB | 1440 TB |
| — Total Usable Storage Capacity | 42.7 TB | 106.9 TB | 213.8 TB | 427.6 TB |

Elastic System Configuration

With Exadata Cloud at Customer, you can optionally add compute nodes or Exadata Storage Servers to create custom configurations. This option enables much greater flexibility so that you can scale your Exadata Cloud at Customer configuration to accommodate your workload requirements.

This capability, known as elastic scaling, is subject to the following conditions:

  • For elastic scaling of Exadata Storage Servers:
    • Each Exadata Cloud at Customer system configuration can have an absolute maximum of 12 Exadata Storage Servers.
    • The Exadata Cloud at Customer system configuration must be based on Oracle Exadata X7 hardware or Oracle Exadata X6 hardware.
    • The Exadata Cloud at Customer service instance must be enabled to support multiple virtual machine (VM) clusters.
  • For elastic scaling of Exadata compute nodes:
    • Each Exadata Cloud at Customer system configuration can have an absolute maximum of eight compute nodes. However, the practical maximum is lower if you do not have enough free IP addresses for the additional compute nodes.

      Specifically, if your system is configured so that each VM cluster client network subnet is defined using a /28 CIDR block (N.N.N.N/28) and each VM cluster backup network subnet is defined using a /29 CIDR block, then your immediate expansion potential is limited to five compute nodes. In such cases, expansion to more than five compute nodes requires the redeployment of Exadata Cloud at Customer, which includes deleting and re-creating all of the VM clusters and database deployments on the system. (The subnet arithmetic behind this limit is sketched in the example after this list.)

    • The Exadata Cloud at Customer system configuration must be based on Oracle Exadata X7 hardware. You cannot add compute nodes to a starter system configuration based on Oracle Exadata X6 hardware.
    • The Exadata Cloud at Customer service instance must be enabled to support multiple virtual machine (VM) clusters.
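
The five-node limit mentioned above follows from simple subnet arithmetic. The sketch below reproduces it under assumptions that are not stated in this document: three reserved addresses per subnet (network, broadcast, and gateway), three SCAN addresses on the client network, two client addresses per node (host IP and VIP), and one backup address per node.

```python
# Sketch of the subnet arithmetic behind the five-node expansion limit.
# The per-node and per-cluster address requirements below are assumptions
# (typical Oracle Grid Infrastructure defaults), not values from this document.

def usable_hosts(prefix_len: int, reserved: int = 3) -> int:
    """Addresses in the block minus network, broadcast, and gateway reservations."""
    return 2 ** (32 - prefix_len) - reserved

def max_compute_nodes(client_prefix: int, backup_prefix: int,
                      scan_ips: int = 3, client_ips_per_node: int = 2,
                      backup_ips_per_node: int = 1) -> int:
    client_capacity = (usable_hosts(client_prefix) - scan_ips) // client_ips_per_node
    backup_capacity = usable_hosts(backup_prefix) // backup_ips_per_node
    return min(client_capacity, backup_capacity)

print(max_compute_nodes(28, 29))  # -> 5, matching the limit described above
```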

The following table outlines the key additional resources provided by each compute node that is added using elastic scaling.

| Specification | Exadata Base System Compute Node X7 | Exadata Compute Node X7 |
|---|---|---|
| Maximum Number of Additional CPU Cores | 22 | 46 |
| Additional RAM Capacity | 240 GB | 720 GB |

The following table outlines the key additional resources provided by each Exadata Storage Server that is added using elastic scaling.

| Specification | Exadata Storage Server X6 | Exadata Base System Storage Server X7 | Exadata Storage Server X7 |
|---|---|---|---|
| Additional Raw Flash Storage Capacity | 12.8 TB | 6.4 TB | 25.6 TB |
| Additional Raw Disk Storage Capacity | 96 TB | 48 TB | 120 TB |
| Additional Usable Storage Capacity | 28 TB | 14 TB | 35.6 TB |
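
As a worked example of elastic scaling, the sketch below adds Exadata Storage Server X7 capacity to an X7 Quarter Rack using the figures from the tables in this section; the function and its names are hypothetical.

```python
# Illustrative arithmetic only: cumulative usable storage after elastic scaling.
# All figures come from the tables in this section.

X7_QUARTER_RACK_USABLE_TB = 106.9    # starter configuration (3 storage servers)
X7_STORAGE_SERVER_USABLE_TB = 35.6   # per additional Exadata Storage Server X7
MAX_STORAGE_SERVERS = 12

def usable_after_expansion(added_servers: int) -> float:
    """Total usable storage (TB) for an X7 Quarter Rack plus added X7 storage servers."""
    if 3 + added_servers > MAX_STORAGE_SERVERS:
        raise ValueError("a system cannot exceed 12 Exadata Storage Servers")
    return X7_QUARTER_RACK_USABLE_TB + added_servers * X7_STORAGE_SERVER_USABLE_TB

print(round(usable_after_expansion(3), 1))  # -> 213.7 TB, roughly an X7 Half Rack
```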

Exadata Storage Configuration

As part of configuring each Oracle Database Exadata Cloud at Customer instance, the storage space inside the Exadata Storage Servers is configured for use by Oracle Automatic Storage Management (ASM). By default, the following ASM disk groups are created:

  • The DATA disk group is primarily intended for the storage of Oracle Database data files.

  • The RECO disk group is primarily used for storing the Fast Recovery Area (FRA), which is an area of storage where Oracle Database can create and manage various files related to backup and recovery, such as RMAN backups and archived redo log files.

In addition, you can optionally create the SPARSE disk group. The SPARSE disk group is required to support Exadata Cloud at Customer snapshots. Exadata snapshots enable space-efficient clones of Oracle databases that can be created and destroyed very quickly and easily. Snapshot clones are often used for development, testing, or other purposes that require a transient database.

For Exadata Cloud at Customer instances that are based on Oracle Exadata X6 hardware, there are additional system disk groups that support various operational purposes. The DBFS disk group is primarily used to store the shared Oracle Clusterware files (Oracle Cluster Registry and voting disks), while the ACFS disk groups underpin shared file systems that are used to store software binaries (and patches) and files associated with the cloud-specific tooling that resides on your Exadata Cloud at Customer compute nodes. You must not remove or disable any of the system disk groups or related ACFS file systems. You should not store your own data, including Oracle Database data files or backups, inside the system disk groups or related ACFS file systems. Compared to the other disk groups, the system disk groups are so small that they are typically ignored when discussing the overall storage capacity.

For Exadata Cloud at Customer instances that are based on Oracle Exadata X7 hardware, there are no additional system disk groups. On such instances, a small amount of space is allocated from the DATA disk group to support the shared file systems that are used to store software binaries (and patches) and files associated with the cloud-specific tooling. You should not store your own data, including Oracle Database data files or backups, inside the system-related ACFS file systems.

Although the disk groups are commonly referred to as DATA, RECO, and so on, the ASM disk group names contain a short identifier string that is associated with your Exadata Database Machine environment. For example, the identifier could be C2, in which case the DATA disk group would be named DATAC2, the RECO disk group would be named RECOC2, and so on.

As an input to the configuration process, you must make decisions that determine how storage space in the Exadata Storage Servers is allocated to the ASM disk groups:

  • Database backups on Exadata Storage — select this configuration option if you intend to perform database backups to the Exadata storage within your Exadata Cloud at Customer environment. If you select this option, more space is allocated to the RECO disk group, which is used to store backups on Exadata storage. If you do not select this option, more space is allocated to the DATA disk group, which enables you to store more information in your databases.

    Note:

    Take care when setting this option. Depending on your situation, you may have limited options for adjusting the space allocation after the storage is configured.
  • Create sparse disk group? — select this configuration option if you intend to use snapshot functionality within your Exadata Cloud at Customer environment. If you select this option, the SPARSE disk group is created, which enables you to use Exadata Cloud at Customer snapshot functionality. If you do not select this option, the SPARSE disk group is not created and Exadata Cloud at Customer snapshot functionality will not be available on any database deployments that are created in the environment.

    Note:

    Take care when setting this option. You cannot later enable Exadata Cloud at Customer snapshot functionality if you do not select the option to create the SPARSE disk group.

The following table outlines the proportional allocation of storage amongst the DATA, RECO, and SPARSE disk groups for each possible configuration:

| Configuration settings | DATA disk group | RECO disk group | SPARSE disk group |
|---|---|---|---|
| Database backups on Exadata Storage: No; Create sparse disk group?: No | 80% | 20% | 0% (the SPARSE disk group is not created) |
| Database backups on Exadata Storage: Yes; Create sparse disk group?: No | 40% | 60% | 0% (the SPARSE disk group is not created) |
| Database backups on Exadata Storage: No; Create sparse disk group?: Yes | 60% | 20% | 20% |
| Database backups on Exadata Storage: Yes; Create sparse disk group?: Yes | 35% | 50% | 15% |
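
The sketch below applies the percentages from the table above to a given amount of usable storage; the function is illustrative, and the example capacity is the X7 Quarter Rack figure from earlier in this section.

```python
# Illustrative sketch of the proportional DATA/RECO/SPARSE allocations above.
# The percentages are from this document; the function itself is hypothetical.

ALLOCATIONS = {
    # (backups on Exadata storage, create sparse disk group): (DATA, RECO, SPARSE) in percent
    (False, False): (80, 20, 0),
    (True,  False): (40, 60, 0),
    (False, True):  (60, 20, 20),
    (True,  True):  (35, 50, 15),
}

def disk_group_sizes(usable_tb: float, backups: bool, sparse: bool) -> dict:
    data_pct, reco_pct, sparse_pct = ALLOCATIONS[(backups, sparse)]
    return {
        "DATA":   usable_tb * data_pct / 100,
        "RECO":   usable_tb * reco_pct / 100,
        "SPARSE": usable_tb * sparse_pct / 100,  # 0 means the disk group is not created
    }

# X7 Quarter Rack (106.9 TB usable) with backups on Exadata storage and no sparse disk group
print(disk_group_sizes(106.9, backups=True, sparse=False))
# -> roughly {'DATA': 42.8, 'RECO': 64.1, 'SPARSE': 0.0} (values in TB)
```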

Multiple VM Clusters

If your Exadata system environment is enabled to support multiple virtual machine (VM) clusters, then you can define up to 8 clusters and specify how the overall Exadata system resources are allocated to them.

In a configuration with multiple VM clusters, each VM cluster is allocated a portion of the overall Exadata system resources. By default, the resource allocation is dedicated to the VM cluster with no over-provisioning or resource sharing. On the compute nodes, a separate VM is defined for each VM cluster, and each VM is allocated a dedicated portion of the available compute node memory and local disk resources. Compute node CPU resources can be dedicated to each VM cluster, or you may choose to implement CPU oversubscription to increase the utilization of the compute node CPU resources. Each VM cluster is also allocated a dedicated portion of the overall Exadata storage.

Network isolation between VM clusters is also implemented. For each VM cluster, the client network and backup network each use a dedicated IP subnet.

VM cluster networks are isolated from each other at layer 2 (Ethernet). Isolation is implemented within the Oracle Cloud at Customer network switches, which ensures that network traffic (including data) for each VM cluster is separated from other VM clusters. This additional capability provides enhanced security for databases running on VM clusters. If you require a direct network connection between VM clusters, for example to enable database links between databases in different VM clusters, then you must submit a service request to Oracle to enable the connection.

Configuration of multiple VM clusters is only possible on environments where this capability is requested when creating an Exadata Cloud at Customer instance. See Creating an Exadata Cloud at Customer Instance. If you are not prompted to configure a shape for the first VM cluster when you create an Exadata Cloud at Customer instance, then the Exadata system supporting the service instance is not equipped to support multiple VM clusters.

On Exadata Cloud at Customer environments where support for multiple VM clusters is enabled, you must specify the following attributes to configure the resources that are allocated to each VM cluster:

  • Cluster Name — specifies the name that is used to identify the cluster.

  • Database backups on Exadata Storage — this option configures the Exadata storage to enable local database backups on Exadata storage.

  • Create sparse disk group? — this option creates a disk group that is based on sparse grid disks. You must select this option to enable Exadata Cloud at Customer snapshots. Exadata snapshots enable space-efficient clones of Oracle databases that can be created and destroyed very quickly and easily.

  • Database backups on ZDLRA — this option enables database backups on Oracle Zero Data Loss Recovery Appliance (ZDLRA) storage. If you do not select this option then you will not be able to select ZDLRA as a backup location when you configure a database deployment on the cluster.

  • Exadata Storage (TB) — specifies the total amount of Exadata storage (in TB) that is allocated to the VM cluster. This storage is allocated evenly from all of the Exadata Storage Servers. You must specify a value greater than 3 TB and up to the amount of remaining unallocated Exadata storage space.

  • CPU Cores per Node — specifies the number of CPU cores that are allocated to each active node in the VM cluster. You must specify a value greater than 2 and up to the number of remaining unallocated CPU cores.

  • Memory (GB) per Node — specifies the amount of memory (in GB) that is allocated to each active node in the VM cluster. You must specify a value greater than 30 GB and up to the amount of remaining unallocated memory, and you should factor in any plans for more VM clusters.

    Note:

    Take care when specifying the memory allocation because:
    • After the VM cluster is created, you cannot decrease the memory allocation; however, you may increase the memory allocation by using unallocated memory.
    • You cannot create another VM cluster unless at least 30 GB of unallocated memory remains. If insufficient memory remains, you must delete an existing VM cluster before you can create another one.
  • Local Storage (GB) — specifies the amount of local disk storage (in GB) that is allocated to each active node in the VM cluster. You must specify a value greater than 60 GB and up to the amount of remaining unallocated local storage space, and you should factor in any plans for more VM clusters.

    Note:

    Take care when specifying the local disk storage because:
    • In addition to the storage specified in this attribute, each VM cluster requires 137 GB of local disk storage to support software images for the VM cluster. Consequently, the minimum amount of local disk storage consumed by a VM cluster is 197 GB (137 GB + 60 GB).
    • For Exadata Cloud at Customer configurations based on Oracle Exadata X7 systems, the total amount of local disk storage that can be allocated to VM clusters is 1237 GB. For Exadata Cloud at Customer configurations based on Oracle Exadata X6 systems, the total amount of local disk storage that can be allocated to VM clusters is 483 GB by default, or up to 1237 GB on systems with upgraded local disk storage.
    • After the VM cluster is created, you cannot modify the amount of local storage.
    • If all of the local disk storage is allocated, or if there is not at least 197 GB of remaining unallocated local disk storage, then you cannot create another VM cluster. In that case, you must delete an existing VM cluster before you can create another one. (A simple capacity check is sketched after this list.)
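
The following sketch combines the minimums listed above into a simple capacity check; the function and its parameter names are hypothetical and are not part of the cloud tooling.

```python
# Hypothetical pre-flight check for creating another VM cluster, based on the
# minimum allocations and the fixed 137 GB local-storage overhead described above.

MIN_MEMORY_GB = 30
MIN_LOCAL_STORAGE_GB = 60
LOCAL_STORAGE_OVERHEAD_GB = 137   # software images required by each VM cluster
MIN_EXADATA_STORAGE_TB = 3
MIN_CPU_CORES_PER_NODE = 2

def can_create_vm_cluster(unallocated_memory_gb: float,
                          unallocated_local_storage_gb: float,
                          unallocated_exadata_storage_tb: float,
                          unallocated_cpu_cores_per_node: int) -> bool:
    """Return True if the remaining unallocated resources can host another VM cluster."""
    return (unallocated_memory_gb >= MIN_MEMORY_GB
            and unallocated_local_storage_gb >= MIN_LOCAL_STORAGE_GB + LOCAL_STORAGE_OVERHEAD_GB
            and unallocated_exadata_storage_tb >= MIN_EXADATA_STORAGE_TB
            and unallocated_cpu_cores_per_node >= MIN_CPU_CORES_PER_NODE)

# Example: an X7 system offers 1237 GB of local storage for VM clusters in total;
# after one minimal cluster (60 GB + 137 GB overhead), 1040 GB remains unallocated.
print(can_create_vm_cluster(512, 1040, 50, 8))  # -> True
```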

When you create a VM cluster, you must also specify the compute nodes that are part of the cluster. A cluster must contain at least one active node, and may contain any number of active nodes up to the capacity of the Exadata Cloud at Customer instance. See Creating a VM Cluster. You can also add compute nodes to an existing VM cluster or remove compute nodes from an existing VM cluster. See Modifying an Existing VM Cluster.

CPU Oversubscription

CPU oversubscription is a feature that works in conjunction with multiple virtual machine (VM) clusters to increase the overall utilization of your compute node CPU resources.

CPU oversubscription enables you to allocate more virtual compute node CPU cores to your VM clusters than the number of physical CPU cores that are enabled in the service instance. With CPU oversubscription enabled, the total number of virtual CPU cores that are available for allocation to the VM clusters is two times the number of enabled physical CPU cores, and the CPU core allocation for each individual VM cluster is limited to the number of enabled physical CPU cores.

When you provision an Exadata Cloud at Customer instance you can choose to enable CPU oversubscription. See Creating an Exadata Cloud at Customer Instance. You can also enable CPU oversubscription on an existing Exadata Cloud at Customer instance. See Modifying the Compute Node Processing Power. However, note that you cannot disable CPU oversubscription after it is enabled.

By using CPU oversubscription, you can better utilize compute node CPU resources when some VM clusters are busy but others are not. However, CPU oversubscription forces physical CPU resources to be time-shared during busy periods.

For example, consider an Exadata Cloud at Customer instance with 24 CPU cores enabled:

  • Without CPU oversubscription, you can only allocate up to 24 CPU cores to all of the VM clusters running on the instance.

    Now consider if the Exadata Cloud at Customer instance supports 2 VM clusters, with each cluster allocated 12 CPU cores. In this case:

    • When one cluster is busy and the other cluster is idle, the busy cluster can still use only its allocation of 12 CPU cores, and the remaining CPU resources sit idle.

    • When both clusters are busy, then they can use their dedicated CPU allocations with no impact on each other.

  • With CPU oversubscription enabled, you can allocate up to 48 virtual CPU cores across all of the VM clusters running on the instance. In this case, each individual VM cluster is limited to 24 virtual CPU cores.

    Now consider if the Exadata Cloud at Customer instance supports 2 VM clusters, with each cluster allocated 24 virtual CPU cores. In this case:

    • If one cluster is busy and the other cluster is idle, then the busy cluster can take advantage of its larger virtual CPU allocation and use all 24 available physical CPU cores.

    • If both clusters are busy, then both must share the 24 physical CPU cores.
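
The arithmetic in this example can be summarized in a short sketch; the function name is illustrative only.

```python
# Minimal sketch of the CPU oversubscription limits described above.

def virtual_core_limits(enabled_physical_cores: int, oversubscribed: bool):
    """Return (total virtual cores allocatable across all VM clusters, per-cluster cap)."""
    if oversubscribed:
        return 2 * enabled_physical_cores, enabled_physical_cores
    return enabled_physical_cores, enabled_physical_cores

print(virtual_core_limits(24, oversubscribed=False))  # -> (24, 24)
print(virtual_core_limits(24, oversubscribed=True))   # -> (48, 24)
```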