Managing VM Clusters on Exadata Cloud@Customer
Learn to manage virtual machine (VM) clusters on Exadata Cloud@Customer.
- About Managing VM Clusters on Exadata Cloud@Customer
  The VM cluster provides a link between your Exadata Cloud@Customer infrastructure and Oracle Database.
- Required IAM Policy for Managing VM Clusters
  Review the identity and access management (IAM) policy for managing virtual machine (VM) clusters on Oracle Exadata Cloud@Customer systems.
- Prerequisites for VM Clusters on Exadata Cloud@Customer
  To connect to a VM cluster compute node, you use an SSH public key.
- Using the Console for VM Clusters on Exadata Cloud@Customer
  Learn how to use the Console to create, edit, validate, download a configuration file for, and terminate your VM cluster networks, and to manage your VM clusters on Oracle Exadata Cloud@Customer.
- Using the API for VM Clusters on Exadata Cloud@Customer
  Review the list of API calls to manage your Exadata Cloud@Customer VM cluster networks and VM clusters.
- Introduction to Scale Up or Scale Down Operations
  With the Multiple VMs per Exadata system (MultiVM) feature release, you can scale up or scale down your VM cluster resources.
About Managing VM Clusters on Exadata Cloud@Customer
The VM cluster provides a link between your Exadata Cloud@Customer infrastructure and Oracle Database.
Before you can create any databases on your Exadata Cloud@Customer infrastructure, you must create a VM cluster network, and you must associate it with a VM cluster. Each Exadata Cloud@Customer infrastructure deployment can support eight VM cluster networks and associated VM clusters.
The VM cluster network specifies network resources, such as IP addresses and host names, that reside in your corporate data center and are allocated to Exadata Cloud@Customer. The VM cluster network includes definitions for the Exadata client network and the Exadata backup network. The client network and backup network contain the network interfaces that you use to connect to the VM cluster compute nodes, and ultimately the databases that reside on those compute nodes.
The VM cluster provides a link between your Exadata Cloud@Customer infrastructure and the Oracle Databases you deploy. The VM cluster contains an installation of Oracle Clusterware, which supports databases in the cluster. In the VM cluster definition, you also specify the number of enabled CPU cores, which determines the amount of CPU resources that are available to your databases.
Avoid entering confidential information when assigning descriptions, tags, or friendly names to your cloud resources through the Oracle Cloud Infrastructure Console, API, or CLI.
Parent topic: Managing VM Clusters on Exadata Cloud@Customer
Required IAM Policy for Managing VM Clusters
Review the identity and access management (IAM) policy for managing virtual machine (VM) clusters on Oracle Exadata Cloud@Customer systems.
A policy is an IAM document that specifies who has what type of access to your resources. It is used in different ways: to mean an individual statement written in the policy language; to mean a collection of statements in a single, named "policy" document (which has an Oracle Cloud ID (OCID) assigned to it); and to mean the overall body of policies your organization uses to control access to resources.
A compartment is a collection of related resources that can be accessed only by certain groups that have been given permission by an administrator in your organization.
To use Oracle Cloud Infrastructure, you must be given the required type of access in a policy written by an administrator, whether you're using the Console, or the REST API with a software development kit (SDK), a command-line interface (CLI), or some other tool. If you try to perform an action, and receive a message that you don’t have permission, or are unauthorized, then confirm with your administrator the type of access you've been granted, and which compartment you should work in.
For administrators: The policy in "Let database admins manage DB systems" lets the specified group do everything with databases, and related database resources.
If you're new to policies, then see "Getting Started with Policies" and "Common Policies". If you want to dig deeper into writing policies for databases, then see "Details for the Database Service".
Prerequisites for VM Clusters on Exadata Cloud@Customer
To connect to a VM cluster compute node, you use an SSH public key. Provide the public key, in OpenSSH format, when you create the VM cluster; for example:
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAA....lo/gKMLVM2xzc1xJr/Hc26biw3TXWGEakrK1OQ== rsa-key-20160304
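If you do not already have a key pair, you can generate one with ssh-keygen and then supply the contents of the .pub file when the public key is requested. This is a minimal sketch; the 2048-bit RSA key type, the file path, and the comment shown are example choices, not requirements of the service.

$ ssh-keygen -t rsa -b 2048 -C "rsa-key-20160304" -f ~/.ssh/exacc_vm_cluster
$ cat ~/.ssh/exacc_vm_cluster.pub

Keep the private key file on the workstation you will connect from, and protect it with a passphrase.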
Parent topic: Managing VM Clusters on Exadata Cloud@Customer
Using the Console for VM Clusters on Exadata Cloud@Customer
Learn how to use the Console to create, edit, validate, download a configuration file for, and terminate your VM cluster networks, and to manage your VM clusters on Oracle Exadata Cloud@Customer.
- Using the Console to Create a VM Cluster Network
  To create your VM cluster network with the Console, be prepared to provide values for the fields required for configuring the infrastructure.
- Using the Console to Edit a VM Cluster Network
  You can only edit a VM cluster network that is not associated with a VM cluster.
- Using the Console to Download a File Containing the VM Cluster Network Configuration Details
  To provide VM cluster network information to your network administrator, you can download and supply a file containing the network configuration.
- Using the Console to Validate a VM Cluster Network
  You can only validate a VM cluster network if its current state is Requires Validation, and if the underlying Exadata infrastructure is activated.
- Using the Console to Terminate a VM Cluster Network
  Before you can terminate a VM cluster network, you must first terminate the associated VM cluster, if one exists, and all the databases it contains.
- Using the Console to Create a VM Cluster
  To create your VM cluster, be prepared to provide values for the fields required for configuring the infrastructure.
- Using the Console to Scale the Resources on a VM Cluster
  Starting in Exadata Cloud@Customer Gen2, you can scale up or down multiple resources at the same time. You can also scale up or down resources one at a time.
- Using the Console to Stop, Start, or Reboot a VM Cluster Compute Node
  Use the Console or API calls to stop, start, or reboot a compute node.
- Using the Console to Check the Status of a VM Cluster Compute Node
  Review the health status of a VM cluster compute node.
- Using the Console to Update the License Type on a VM Cluster
  To modify licensing, be prepared to provide values for the fields required for modifying the licensing information.
- Using the Console to Move a VM Cluster to Another Compartment
  To change the compartment that contains your VM cluster on Exadata Cloud@Customer, use this procedure.
- Using the Console to Terminate a VM Cluster
  Before you can terminate a VM cluster, you must first terminate the databases that it contains.
Parent topic: Managing VM Clusters on Exadata Cloud@Customer
Using the Console to Create a VM Cluster Network
To create your VM cluster network with the Console, be prepared to provide values for the fields required for configuring the infrastructure.
Using the Console to Edit a VM Cluster Network
You can only edit a VM cluster network that is not associated with a VM cluster.
Using the Console to Download a File Containing the VM Cluster Network Configuration Details
To provide VM cluster network information to your network administrator, you can download and supply a file containing the network configuration.
Using the Console to Validate a VM Cluster Network
You can only validate a VM cluster network if its current state is Requires Validation, and if the underlying Exadata infrastructure is activated.
Using the Console to Terminate a VM Cluster Network
Before you can terminate a VM cluster network, you must first terminate the associated VM cluster, if one exists, and all the databases it contains.
Terminating a VM cluster network removes it from the Cloud Control Plane.
Using the Console to Create a VM Cluster
To create your VM cluster, be prepared to provide values for the fields required for configuring the infrastructure.
To create a VM cluster, ensure that you have:
- Active Exadata infrastructure available to host the VM cluster.
- A validated VM cluster network available for the VM cluster to use.
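If you have the OCI CLI configured, a quick way to check both prerequisites is to list the Exadata infrastructure and its VM cluster networks and review their lifecycle states (the infrastructure should be active and the network validated). This is a sketch only; the OCIDs are placeholders, and you should confirm the exact parameter names with the CLI's --help output for your version.

$ oci db exadata-infrastructure list --compartment-id <compartment_ocid> \
    --query 'data[].{name:"display-name",state:"lifecycle-state"}' --output table
$ oci db vm-cluster-network list --compartment-id <compartment_ocid> \
    --exadata-infrastructure-id <exadata_infrastructure_ocid> \
    --query 'data[].{name:"display-name",state:"lifecycle-state"}' --output table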
Using the Console to Scale the Resources on a VM Cluster
Starting in Exadata Cloud@Customer Gen2, you can scale up or down multiple resources at the same time. You can also scale up or down resources one at a time.
- Use Case 1: If you have allocated all of the resources to one virtual machine and you want to create multiple virtual machines, then no resources are available to allocate to the new virtual machines. Scale down the existing resources as needed before you create any additional virtual machines.
- Use Case 2: If you want to allocate different resources based on the workload, then scale down or scale up accordingly. For example, you may want to run nightly batch jobs for reporting/ETL and scale the VM down once the job is over.
How long does it take to scale down these resources?
- OCPU
- Memory
- Local storage
- Exadata storage
Each individual operation can take approximately 15 minutes, and the operations run in series if you scale down multiple resources at once, for example, scaling down both memory and local storage from the Console. In general, local storage and memory scale-down operations take more time than the other two.
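For scripted or repeated scaling, the same operations can be driven through the UpdateVmCluster API. The following OCI CLI sketch scales the enabled CPU core count; the OCID and core count are placeholders, and you should confirm the parameter names with oci db vm-cluster update --help before relying on them.

$ oci db vm-cluster update --vm-cluster-id <vm_cluster_ocid> --cpu-core-count 16
$ oci db vm-cluster get --vm-cluster-id <vm_cluster_ocid> \
    --query 'data."lifecycle-state"' --raw-output

Wait for the VM cluster to return to the AVAILABLE state before you issue another scale operation, because the operations run in series.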
Using the Console to Stop, Start, or Reboot a VM Cluster Compute Node
Use the console or API calls to stop, start, or reboot a compute node.
Using the Console to Check the Status of a VM Cluster Compute Node
Review the health status of a VM cluster compute node.
Using the Console to Update the License Type on a VM Cluster
To modify licensing, be prepared to provide values for the fields required for modifying the licensing information.
Using the Console to Move a VM Cluster to Another Compartment
To change the compartment that contains your VM cluster on Exadata Cloud@Customer, use this procedure.
When you move a VM cluster, the compartment change is also applied to the compute nodes and databases that are associated with the VM cluster. However, the compartment change does not affect any other associated resources, such as the Exadata infrastructure, which remains in its current compartment.
Using the API for VM Clusters on Exadata Cloud@Customer
Review the list of API calls to manage your Exadata Cloud@Customer VM cluster networks and VM clusters.
For information about using the API and signing requests, see "REST APIs" and "Security Credentials". For information about SDKs, see "Software Development Kits and Command Line Interface".
Use these API operations to manage Exadata Cloud@Customer VM cluster networks and VM clusters:
GenerateRecommendedVmClusterNetwork
CreateVmClusterNetwork
DeleteVmClusterNetwork
GetVmClusterNetwork
ListVmClusterNetworks
UpdateVmClusterNetwork
ValidateVmClusterNetwork
CreateVmCluster
DeleteVmCluster
GetVmCluster
ListVmClusters
UpdateVmCluster
For the complete list of APIs, see "Database Service API".
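For example, with the OCI CLI configured, the ListVmClusters, GetVmCluster, and GetVmClusterNetwork operations map to commands like the following. This is a sketch; the OCIDs are placeholders and the exact sub-command and parameter names should be confirmed against your CLI version.

$ oci db vm-cluster list --compartment-id <compartment_ocid> --output table
$ oci db vm-cluster get --vm-cluster-id <vm_cluster_ocid>
$ oci db vm-cluster-network get --exadata-infrastructure-id <exadata_infrastructure_ocid> \
    --vm-cluster-network-id <vm_cluster_network_ocid>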
Related Topics
- REST APIs
- Security Credentials
- Software Development Kits and Command Line Interface
- GenerateRecommendedVmClusterNetwork
- CreateVmClusterNetwork
- DeleteVmClusterNetwork
- GetVmClusterNetwork
- ListVmClusterNetworks
- UpdateVmClusterNetwork
- ValidateVmClusterNetwork
- CreateVmCluster
- DeleteVmCluster
- GetVmCluster
- ListVmClusters
- UpdateVmCluster
- Database Service API
Parent topic: Managing VM Clusters on Exadata Cloud@Customer
Introduction to Scale Up or Scale Down Operations
With the Multiple VMs per Exadata system (MultiVM) feature release, you can scale up or scale down your VM cluster resources.
- Scaling Up or Scaling Down the VM Cluster Resources
- Calculating the Minimum Required Memory
- Calculating the ASM Storage
- Estimating How Much Local Storage You Can Provision to Your VMs
- Scaling Local Storage Down
Parent topic: Managing VM Clusters on Exadata Cloud@Customer
Scaling Up or Scaling Down the VM Cluster Resources
You can scale up or scale down the memory, local disk size (/u02), ASM storage, and CPUs. Scaling these resources up or down requires thorough auditing of existing usage and capacity management by the customer DB administrator. Review the existing usage to avoid failures during or after a scale-down operation. While scaling up, consider how much of these resources will be left for the next VM cluster you are planning to create. Exadata Cloud@Customer cloud tooling calculates the current usage of memory, local disk, and ASM storage in the VM cluster, adds headroom to it, and arrives at a "minimum" value below which you cannot scale down; you must specify a value above this minimum.

For memory and /u02 scale up or scale down operations, if the difference between the current value and the new value is less than 2%, then no change is made to that VM. This is because a memory change involves rebooting the VM, and a /u02 change involves bringing down the Oracle Grid Infrastructure stack and unmounting /u02. Production customers will not resize for such a small increase or decrease, so such requests are a no-op.
Parent topic: Introduction to Scale Up or Scale Down Operations
Calculating the Minimum Required Memory
Cloud tooling provides dbaasapi to identify the minimum required memory. As the root user, run dbaasapi and pass a JSON file with sample content as shown below. The only parameter that you need to update in input.json is new_mem_size, which is the new memory size to which you want the VM cluster to be resized.
# cat input.json
{
"object": "db",
"action": "get",
"operation": "precheck_memory_resize",
"params": {
"dbname": "grid",
"new_mem_size" : "30 gb",
"infofile": "/tmp/result.json"
},
"outputfile": "/tmp/info.out",
"FLAGS": ""
}
# dbaasapi -i input.json
# cat /tmp/result.json
{
"is_new_mem_sz_allowed" : 0,
"min_req_mem" : 167
}
The result indicates that 30 GB is not sufficient: the minimum required memory is 167 GB, which is the lowest value you can scale down to. To be safe, choose a value greater than 167 GB, because there could be fluctuations of that order between this calculation and the next reshape attempt.
Parent topic: Introduction to Scale Up or Scale Down Operations
Calculating the ASM Storage
Use the following formula to calculate the minimum required ASM storage:
- For each disk group, for example, DATA and RECO, note the total size and free size by running the asmcmd lsdg command on any domU of the VM cluster.
- Calculate the used size as (Total size - Free size) / 3 for each disk group. The division by 3 is used because the disk groups are triple mirrored.
- The DATA:RECO ratio is:
  - 80:20 if the Local Backups option was NOT selected in the user interface.
  - 40:60 if the Local Backups option was selected in the user interface.
- Ensure that the new total size as given in the user interface passes the following conditions:
  - Used size for DATA * 1.15 <= (New Total size * DATA %)
  - Used size for RECO * 1.15 <= (New Total size * RECO %)
Example 4-1 Calculating the ASM Storage
- Run the asmcmd lsdg command in the domU:
  - Without SPARSE:
    [root@scaqak01dv0305 ~]# /u01/app/19.0.0.0/grid/bin/asmcmd lsdg
    State    Type  Rebal  Sector  Logical_Sector  Block  AU       Total_MB   Free_MB    Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
    MOUNTED  HIGH  N      512     512             4096   4194304  12591936   10426224   1399104          3009040         0              Y             DATAC5/
    MOUNTED  HIGH  N      512     512             4096   4194304  3135456    3036336    348384           895984          0              N             RECOC5/
  - With SPARSE:
    [root@scaqak01dv0305 ~]# /u01/app/19.0.0.0/grid/bin/asmcmd lsdg
    State    Type  Rebal  Sector  Logical_Sector  Block  AU       Total_MB   Free_MB    Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
    MOUNTED  HIGH  N      512     512             4096   4194304  12591936   10426224   1399104          3009040         0              Y             DATAC5/
    MOUNTED  HIGH  N      512     512             4096   4194304  3135456    3036336    348384           895984          0              N             RECOC5/
    MOUNTED  HIGH  N      512     512             4096   4194304  31354560   31354500   3483840          8959840         0              N             SPRC5/

  Note: The listed values of all attributes for the SPARSE diskgroup (SPRC5) present the virtual size. In Exadata DB Systems and Exadata Cloud@Customer, we use the ratio of 1:10 for physicalSize:virtualSize. Hence, for all purposes of our calculation, we must use 1/10th of the values displayed above for those attributes in the case of SPARSE.
- Used size for a disk group = (Total_MB - Free_MB) / 3
  - Without SPARSE:
    Used size for DATAC5 = (12591936 - 10426224) / 3 = 704.98 GB
    Used size for RECOC5 = (3135456 - 3036336) / 3 = 32.26 GB
  - With SPARSE:
    Used size for DATAC5 = (12591936 - 10426224) / 3 ~= 704.98 GB
    Used size for RECOC5 = (3135456 - 3036336) / 3 ~= 32.26 GB
    Used size for SPRC5 = (1/10 * (31354560 - 31354500)) / 3 ~= 0 GB
- Storage distribution among disk groups
  - Without SPARSE:
    DATA:RECO ratio is 80:20 in this example.
  - With SPARSE:
    DATA:RECO:SPARSE ratio is 60:20:20 in this example.
- The new requested size should pass the following conditions:
  - Without SPARSE: (For example, 5 TB in the user interface.)
    5 TB = 5120 GB; 5120 * .8 = 4096 GB; 5120 * .2 = 1024 GB
    For DATA: (704.98 * 1.15) <= 4096 GB
    For RECO: (32.26 * 1.15) <= 1024 GB
  - With SPARSE: (For example, 8 TB in the user interface.)
    8 TB = 8192 GB; 8192 * .6 = 4915 GB; 8192 * .2 = 1638 GB; 8192 * .2 = 1638 GB
    For DATA: (704.98 * 1.15) <= 4915 GB
    For RECO: (32.26 * 1.15) <= 1638 GB
    For SPARSE: (0 * 1.15) <= 1638 GB

This example resize will go through. If the new size does not meet the above conditions, then the resize will fail the precheck.
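The same checks can be scripted on a domU. The following is a minimal sketch for a non-SPARSE configuration with the 80:20 DATA:RECO split; the grid home path and disk group names are the example values used above and may differ on your system.

#!/bin/bash
# Precheck a proposed ASM resize against the 1.15 headroom rule (non-SPARSE, 80:20 split).
NEW_TOTAL_GB=5120                         # proposed new total ASM size, for example 5 TB
ASMCMD=/u01/app/19.0.0.0/grid/bin/asmcmd  # example grid home path from above

check_dg() {   # arguments: disk group name, share of the new total (for example 0.80)
  local dg=$1 share=$2 total_mb free_mb
  # Columns 8 and 9 of "asmcmd lsdg" are Total_MB and Free_MB; the last column is the name.
  read -r total_mb free_mb < <($ASMCMD lsdg | awk -v dg="$dg/" '$NF == dg {print $8, $9}')
  awk -v t="$total_mb" -v f="$free_mb" -v n="$NEW_TOTAL_GB" -v s="$share" -v dg="$dg" 'BEGIN {
    used = (t - f) / 3 / 1024     # used size in GB, divided by 3 for triple mirroring
    allowed = n * s               # this disk group share of the proposed new total, in GB
    verdict = (used * 1.15 <= allowed) ? "OK" : "FAILS precheck"
    printf "%s: used %.2f GB, x1.15 = %.2f GB, allowed %.2f GB -> %s\n", dg, used, used * 1.15, allowed, verdict
  }'
}

check_dg DATAC5 0.80
check_dg RECOC5 0.20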
Parent topic: Introduction to Scale Up or Scale Down Operations
Estimating How Much Local Storage You Can Provision to Your VMs
X8-2 and X7-2 Systems
You specify how much space is provisioned from local storage to each VM. This space is mounted at location /u02, and is used primarily for Oracle Database homes. The amount of local storage available will vary with the number of virtual machines running on each physical node, as each VM requires a fixed amount of storage (137 GB) for the root file systems, GI homes, and diagnostic log space. Refer to the table below to see the maximum amount of space available to provision to local storage (/u02) across all VMs.

Total space available to all VMs on an ExaCC X7 database node is 1237 GB. Total space available to all VMs on an ExaCC X8 database node is 1037 GB.
Table 4-1 Space allocated to VMs
#VMs | Space Consumed by VM Image or GI (GB) | X8-2 Space for ALL /u02 (GB) | X7-2 Space for ALL /u02 (GB) |
---|---|---|---|
1 | 137 | 900 | 1100 |
2 | 274 | 763 | 963 |
3 | 411 | 626 | 826 |
4 | 548 | 489 | 689 |
5 | 685 | 352 | 552 |
6 | 822 | N/A | 415 |
For an X8-2, to get the maximum space available for the nth VM, take the number in the table above and subtract anything previously allocated for /u02 to the other VMs. So if you allocated 60 GB to VM1, 70 GB to VM2, 80 GB to VM3, and 60 GB to VM4 (total 270 GB) in an X8-2, the maximum available for VM5 would be 352 - 270 = 82 GB.

In ExaCC Gen 2, we require a minimum of 60 GB per /u02, so with that minimum size there is a maximum of 5 VMs in X8-2 and 6 VMs in X7-2.
X8M-2 Systems
The maximum number of VMs for an X8M-2 will be 8, regardless of whether there is local disk space or other resources available.
For an X8M-2 system, the fixed consumption per VM is 160 GB.
Total space available to all VMs on an ExaCC X8M database node is 2500 GB. Although there is 2500 GB per database node, with a single VM you can allocate a maximum of 900 GB of local storage. Similarly, for the second VM, there is 1800 GB of local storage available given the maximum limit of 900 GB per VM. With the third VM, the amount of space available is 2500 - (160 GB * 3) = 2020 GB, and so on for 4 and more VMs.
Table 4-2 Space allocated to VMs
#VMs | Space Consumed by VM Image or GI (GB) | X8M-2 Quarter/Half/Full Rack Space for All /u02 (GB)* |
---|---|---|
1 | 160 | 900 |
2 | 320 | 1800 |
3 | 480 | 2020 |
4 | 640 | 1860 |
5 | 800 | 1700 |
6 | 960 | 1540 |
7 | 1120 | 1380 |
8 | 1280 | 1220 |
*Max 900 GB per VM
For an X8M-2, to get the maximum space available for the nth VM, take the number in the table above and subtract anything previously allocated for /u02 to the other VMs. So, for a quarter or larger rack, if you allocated 60 GB to VM1, 70 GB to VM2, 80 GB to VM3, and 60 GB to VM4 (total 270 GB) in an X8M-2, the maximum available for VM5 would be 1700 - 270 = 1430 GB. However, the per-VM maximum is 900 GB, so that takes precedence and limits VM5 to 900 GB.
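A small sketch of that arithmetic, using the X8M-2 figures above (2500 GB per database node, a fixed 160 GB consumed per VM, and the 900 GB per-VM cap); the allocation values are the illustrative ones from the example.

#!/bin/bash
# Maximum /u02 size available to the next VM on an X8M-2 database node.
NODE_TOTAL_GB=2500       # total local space per X8M-2 database node
FIXED_PER_VM_GB=160      # fixed consumption (root file systems, GI home, diagnostics) per VM
PER_VM_CAP_GB=900        # maximum /u02 allowed for any single VM
ALLOCATED=(60 70 80 60)  # /u02 already allocated to VM1 through VM4 in the example

vm_count=$(( ${#ALLOCATED[@]} + 1 ))   # the VM being added (VM5 here)
already=0
for gb in "${ALLOCATED[@]}"; do already=$(( already + gb )); done
pool=$(( NODE_TOTAL_GB - FIXED_PER_VM_GB * vm_count - already ))
max=$(( pool < PER_VM_CAP_GB ? pool : PER_VM_CAP_GB ))
echo "Maximum /u02 for VM${vm_count}: ${max} GB"   # prints 900 GB for this example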
Parent topic: Introduction to Scale Up or Scale Down Operations
Scaling Local Storage Down
Scale Down Local Space Operation Guidelines
The scale-down operation expects you to input the local space value that you want each node to scale down to.
- Resource Limit Based on Recommended Minimums
  The scale-down operation must meet the 60 GB recommended minimum size requirement for local storage.
- Resource Limit Based on Current Utilization
  The scale-down operation must leave a 15% buffer on top of the highest local space utilization across all nodes in the cluster.
The lowest local space per node allowed is the higher of the above two limits.
Run the df -kh command on each node to find the node with the highest local storage usage. You can also use a utility like cssh to issue the same command on all hosts in a cluster by typing it just once.
The lowest value of local storage each node can be scaled down to is 1.15 * (the highest value of local space used among all nodes).
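A minimal sketch of that calculation for one node follows; run it on every node (or push it out with a tool such as cssh) and use the largest result as the cluster-wide floor. It assumes /u02 is a dedicated mount point and that df does not wrap its output line.

#!/bin/bash
# Lowest /u02 size (in GB) this node could be scaled down to.
RECOMMENDED_MIN_GB=60
# "df -k /u02" reports used space in 1 KB blocks in the third column of the data line.
used_gb=$(df -k /u02 | awk 'NR == 2 {printf "%.0f", $3 / 1024 / 1024}')
floor_gb=$(awk -v u="$used_gb" 'BEGIN {printf "%.0f", u * 1.15}')
if (( floor_gb < RECOMMENDED_MIN_GB )); then floor_gb=$RECOMMENDED_MIN_GB; fi
echo "Lowest /u02 size this node can scale down to: ${floor_gb} GB"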
Parent topic: Introduction to Scale Up or Scale Down Operations