2 Hardware Administration
This chapter provides instructions for an administrator to verify the appliance hardware configuration, collect detailed information about the hardware components, and perform standard actions such as starting and stopping a component or provisioning a compute node.
Displaying Rack Component Details
In the Service Enclave, administrators can obtain details about the appliance and its installed components. This can be done using either the Service Web UI or the Service CLI. The two interfaces display the results in a slightly different way.
Viewing Appliance Details
The administrator can retrieve certain appliance properties, which may be required when communicating with Oracle, for troubleshooting purposes, or to configure or verify settings.
Using the Service Web UI
-
In the PCA Config navigation menu, click Appliance Details.
The detail page contains system properties such as realm, region and domain. The information is read-only, except for the name.
-
To change the rack name and add an optional description, click the Edit button.
The System Details window appears. Enter a Rack Name and Description. Click Save Changes.
The Service CLI provides additional information about hardware discovery and synchronization. Any faults are displayed at the end of the command output.
Using the Service CLI
-
Display system parameters and global status with a single command:
show PcaSystem
PCA-ADMIN> show PcaSystem
Command: show PcaSystem
Status: Success
Time: 2021-08-19 11:20:13,937 UTC
Data:
  Id = 934732b6-9f08-4f44-a4fc-fddcdb9967e4
  Type = PcaSystem
  System Config State = Complete
  Initial Hardware Discovery Time = 2021-07-31 00:37:49,763 UTC
  Initial Hardware Discovery Status = Resync Success
  Initial Hardware Discovery Details = Error retrieving hardware data from the hardware layer.
  Resync Hardware Time = 2021-08-10 14:32:13,020 UTC
  Resync Hardware Status = Success
  Resync Hardware Details = Resync succeeded.
  System Name = oraclepca
  Domain Name = my.example.com
  Availability Domain = ad1
  Realm = 1742XC3024
  Region = oraclepca
  ASR Reminder = true
  Name = pca
  Work State = Normal
  FaultIds 1 = id:55f8de1e-ab25-4fc6-b6f4-a9ddd283605b type:Fault name:PcaSystemInitialHwDiscoveryStatusStatusFault(pca)
  FaultIds 2 = id:5c532489-6dad-45e1-a065-6c7649514ce1 type:Fault name:PcaSystemReSyncHwStatusStatusFault(pca)
-
Use the edit PcaSystem command to change these parameters:
-
description
-
name
-
ASR reminder (whether or not to display the Oracle Auto Service Request configuration screen when an administrator logs in to the Service Web UI)
Note that the system name and domain name cannot be modified after the initial setup of the appliance.
PCA-ADMIN> edit PcaSystem name=myPca description="My Private Cloud" domainName=my.example.com systemName=mycloud asrReminder=False
Command: edit PcaSystem name=myPca description="My Private Cloud" domainName=my.example.com systemName=mycloud asrReminder=False
Status: Success
Time: 2021-08-19 11:58:50,442 UTC
JobId: 80cd1fb2-9328-42a0-89e2-7f3196246a28
Use the job ID to check the status of your edit command.
PCA-ADMIN> show Job id=80cd1fb2-9328-42a0-89e2-7f3196246a28
-
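The Service CLI reports results as Key = Value pairs, so job transcripts such as the one above are easy to post-process. The following sketch is an illustration only, not part of the appliance tooling; the sample transcript is abridged. It parses a show Job transcript and checks whether the job finished successfully:

```python
import re

def parse_job_output(text: str) -> dict:
    """Collect 'Key = Value' pairs from a Service CLI 'show Job' transcript."""
    fields = {}
    for line in text.splitlines():
        # Keys are words (optionally with spaces, e.g. 'Run State');
        # lines like 'PCA-ADMIN> ...' or 'Command: ...' do not match.
        m = re.match(r"\s*([A-Za-z][\w ]*?)\s*=\s*(.+?)\s*$", line)
        if m:
            fields[m.group(1)] = m.group(2)
    return fields

def job_succeeded(text: str) -> bool:
    """True when the job reports Done = true and Run State = Succeeded."""
    fields = parse_job_output(text)
    return fields.get("Done") == "true" and fields.get("Run State") == "Succeeded"

sample = """PCA-ADMIN> show Job id=80cd1fb2-9328-42a0-89e2-7f3196246a28
Command: show Job id=80cd1fb2-9328-42a0-89e2-7f3196246a28
Status: Success
Data:
  Done = true
  Name = MODIFY_TYPE
  Run State = Succeeded"""
```

The same helper works for any of the job-status checks shown later in this chapter, because they all use the same Key = Value layout.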
Using the Rack Units List
The Rack Units list provides an overview of installed hardware components, and lets you drill down into more detailed component information.
Using the Service Web UI
-
In the PCA Config navigation menu, click Rack Units.
The Rack Units table displays all hardware components installed in the rack and detected by the appliance software. For each component you see its host name, component type, global status information, and the rack unit number where the component is installed.
-
To view more detailed information about a component, click its host name in the table.
The detail pages for switches, storage controllers and management nodes are read-only. For compute nodes there are controls available to execute specific administrator tasks. For more information, see Performing Compute Node Operations.
The Service CLI allows you to list rack units by component type or category. It also includes an option to display information about the rack as a component.
Using the Service CLI
-
To display a list of all rack units, use the list RackUnit command.
PCA-ADMIN> list RackUnit
Command: list RackUnit
Status: Success
Time: 2021-08-19 12:23:55,300 UTC
Data:
  id                                     objtype           name
  --                                     -------           ----
  29f68a0e-4744-4a92-9545-7c48fa365d0a   ComputeNode       pcacn001
  7a0236f4-b00e-461d-93a0-b22673a18d9c   ComputeNode       pcacn003
  dc8ae567-b07f-48e0-89bd-e57069c20010   ComputeNode       pcacn002
  6fb5ed14-b242-4dd5-842c-532d1c94d43f   LeafSwitch        pcaswlf01
  279fe518-0dff-40cb-aa3a-fa0966adc946   LeafSwitch        pcaswlf02
  a13b5b83-0240-4014-b533-ef4a822e2a4b   ManagementNode    pcamn01
  c24f0d26-8c22-4b2b-b8f5-be98cb25c06e   ManagementNode    pcamn03
  c4e6bcc8-1e4c-44d5-8ca4-0ef9cd04d396   ManagementNode    pcamn02
  23c35224-d01e-4185-9ec6-22b538f5a5e1   ManagementSwitch  pcaswmn01
  8c4ecc55-7ac5-4704-bbd2-1023acf7c468   SpineSwitch       pcaswsp01
  231276bd-be1f-454f-923f-ffc09f68c294   SpineSwitch       pcaswsp02
  379690d6-4097-4637-9564-28ae890a20d2   ZFSAppliance      pcasn02
  ca637f6f-5269-48be-81b9-ceda76a90daf   ZFSAppliance      pcasn01
-
To display only rack units of a specific type, use one of these commands instead:
-
list ManagementNode: displays a list of management nodes
-
list LeafSwitch: displays a list of leaf switches
-
list SpineSwitch: displays a list of spine switches
-
list ManagementSwitch: displays a list of 1Gbit management switches
-
list ZFSAppliance: displays a list of ZFS storage controllers
-
list ComputeNode: displays a list of compute nodes
-
list Rack: displays a list of racks that are part of the environment
Example:
PCA-ADMIN> list ManagementNode
Command: list ManagementNode
Status: Success
Time: 2021-08-19 12:34:09,429 UTC
Data:
  id                                     name
  --                                     ----
  a13b5b83-0240-4014-b533-ef4a822e2a4b   pcamn01
  c24f0d26-8c22-4b2b-b8f5-be98cb25c06e   pcamn03
  c4e6bcc8-1e4c-44d5-8ca4-0ef9cd04d396   pcamn02
-
-
To view more detailed information about a component, use the show command with the component type and its name or ID.
Syntax (entered on a single line):
show ComputeNode|LeafSwitch|ManagementNode|ManagementSwitch|Rack|RackUnit|SpineSwitch|ZFSAppliance id=<component_id> OR name=<component_name>
Examples:
PCA-ADMIN> show SpineSwitch id=8c4ecc55-7ac5-4704-bbd2-1023acf7c468
Command: show SpineSwitch id=8c4ecc55-7ac5-4704-bbd2-1023acf7c468
Status: Success
Time: 2021-08-19 12:50:39,570 UTC
Data:
  Id = 8c4ecc55-7ac5-4704-bbd2-1023acf7c468
  Type = SpineSwitch
  HW Id = FDO24290PQC
  MAC Address = 3c:13:cc:bd:3a:7c
  Ip Address = 100.96.2.20
  Hostname = pcaswsp01
  Firmware Version = 9.3(2)
  Serial Number = FDO24290PQC
  State = OK
  Rack Elevation = 22
  Validation State = Validated
  RackId = id:dba2962d-c477-4a32-bdff-a3a256bf7972 type:Rack name:PCA X9-2 Base1
  Name = pcaswsp01
  Work State = Normal
PCA-ADMIN> show RackUnit name=pcamn02
Command: show RackUnit name=pcamn02
Status: Success
Time: 2021-08-19 12:48:51,852 UTC
Data:
  Id = c4e6bcc8-1e4c-44d5-8ca4-0ef9cd04d396
  Type = ManagementNode
  HW Id = 1749XC302R
  MAC Address = 00:10:e0:da:cb:7c
  Ip Address = 100.96.2.34
  Hostname = pcamn02
  Firmware Version = 3.0.1
  Serial Number = 1749XC302R
  State = running
  Rack Elevation = 6
  Validation State = Validated
  RackId = id:dba2962d-c477-4a32-bdff-a3a256bf7972 type:Rack name:PCA X9-2 Base1
  Name = pcamn02
  Work State = Normal
Changing Passwords for Hardware Components
You can change the password for a compute node, leaf switch, management node, management switch, spine switch, or ZFS appliance component using the Service CLI. You can also change the ILOM password for a compute node or a management node.
Important:
The following password rules apply:
- Passwords for compute nodes, leaf switches, management nodes, management switches, or spine switches must contain 8-20 characters.
- Passwords for ZFS appliance or ILOMs must contain 8-16 characters.
- All passwords must contain at least 1 uppercase letter, 1 lowercase letter, 1 digit, and 1 punctuation character.
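Before running a password change, it can be useful to verify a candidate password locally. The following sketch is an illustration only; it implements just the rules listed above, and assumes Python's string.punctuation approximates the accepted punctuation set:

```python
import string

def valid_component_password(pw: str, zfs_or_ilom: bool = False) -> bool:
    """Check a candidate password against the documented rules:
    8-20 characters (8-16 for ZFS appliance or ILOM), with at least one
    uppercase letter, one lowercase letter, one digit, and one
    punctuation character."""
    max_len = 16 if zfs_or_ilom else 20
    if not (8 <= len(pw) <= max_len):
        return False
    return (any(c.isupper() for c in pw)
            and any(c.islower() for c in pw)
            and any(c.isdigit() for c in pw)
            and any(c in string.punctuation for c in pw))
```

For example, a 17-character password passes for a switch or node but fails for a ZFS appliance or ILOM, which accept at most 16 characters.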
Using the Service CLI
To view the components for which you can change passwords, use the changepassword ? or the changeilompassword ? command.
PCA-ADMIN> changepassword ?
  ComputeNode
  LeafSwitch
  ManagementNode
  ManagementSwitch
  SpineSwitch
  ZFSAppliance
PCA-ADMIN> changeilomPassword ?
  ComputeNode
  ManagementNode
To change the password for a hardware component, use the changepassword command.
Syntax (entered on a single line):
changepassword ComputeNode|LeafSwitch|ManagementNode|ManagementSwitch|SpineSwitch|ZFSAppliance id=<component_id> OR name=<component_name> password=<new_password> confirmPassword=<repeat_new_password>
Example:
PCA-ADMIN> changePassword id=21ad5b60-d30d-4a95-b39f-5bf152005f0f password=************* confirmPassword=*************
Status: Success
Time: 2022-08-16 17:13:22,674 UTC
JobId: fe772781-d0af-47cc-af87-2059f8e70b63
To change the ILOM password for a compute node or management node, use the changeilompassword command.
Syntax (entered on a single line):
changeilompassword ComputeNode|ManagementNode id=<component_id> OR name=<component_name> password=<new_password> confirmPassword=<repeat_new_password>
Example:
PCA-ADMIN> changeilomPassword id=21ad5b60-d30d-4a95-b39f-5bf152005f0f password=************* confirmPassword=*************
Status: Success
Time: 2022-08-16 17:13:22,674 UTC
JobId: fe772781-d0af-47cc-af87-2059f8e70b63
Checking Component Health
You can get a quick health check for compute nodes, management nodes, or the ZFS appliance using the Service CLI. The getcomputeIlomHealth, getmgmtIlomHealth, and getzfsIlomHealth commands return data from ILOM that shows, for example, whether the component health is OK, service is required, or faults need to be addressed.
Using the Service CLI
To get basic health information from ILOM for compute nodes, management nodes, or the ZFS appliance, use the following commands:
Compute nodes
PCA-ADMIN> getcomputeIlomHealth
Status: Success
Time: 2022-08-16 11:24:42,961 EDT
Data:
  Health Nodes 1 - macaddr = a8:69:8c:05:e8:c7
  Health Nodes 1 - health = OK
  Health Nodes 1 - time checked = 22-07-21T20:06:34
  Health Nodes 2 - macaddr = a8:69:8c:05:e8:73
  Health Nodes 2 - health = OK
  Health Nodes 2 - time checked = 22-07-21T20:06:34
  Health Nodes 3 - macaddr = 00:10:e0:fe:82:1b
  Health Nodes 3 - health = OK
  Health Nodes 3 - time checked = 22-07-21T20:06:34
Management nodes
PCA-ADMIN> getmgmtIlomHealth
Status: Success
Time: 2022-08-16 11:25:19,486 EDT
Data:
  Health Nodes 1 - macaddr = A8:69:8C:05:EC:C7
  Health Nodes 1 - health = OK
  Health Nodes 1 - time checked = 22-07-15T18:50:50
  Health Nodes 2 - macaddr = A8:69:8C:05:EA:AB
  Health Nodes 2 - health = OK
  Health Nodes 2 - time checked = 22-07-15T18:50:50
  Health Nodes 3 - macaddr = A8:69:8C:06:0F:A3
  Health Nodes 3 - health = Service Required
  Health Nodes 3 - time checked = 22-07-15T18:50:50
  Health Nodes 3 - node Faults 1 - messageId = SPENV-8000-A7
  Health Nodes 3 - node Faults 1 - fault type = fault
  Health Nodes 3 - node Faults 1 - classId = fault.chassis.device.fan.fail
  Health Nodes 3 - node Faults 1 - uuid = c6986589-07b5-ceb0-edfc-a8535eb2f442/115ed970-a382-668c-a50a-9e854dc8479f
  Health Nodes 3 - node Faults 1 - time reported = 2022-07-14T22:24:36+0000
  Health Nodes 3 - node Faults 1 - severity = Major
  Health Nodes 3 - node Faults 1 - description = Fan module has a fan that is rotating too slowly.
  Health Nodes 3 - node Faults 1 - action = Please refer to the associat ...
ZFS appliance
PCA-ADMIN> getzfsIlomHealth
Status: Success
Time: 2022-08-16 11:26:02,470 EDT
Data:
  Health Nodes 1 - macaddr = A8:69:8C:14:BA:C7
  Health Nodes 1 - health = Service Required
  Health Nodes 1 - time checked = 22-07-21T20:07:33
  ...
Performing Compute Node Operations
From the Rack Units list of the Service Web UI, an administrator can execute certain operations on hardware components. These operations can be accessed from the Actions menu, which is the button with three vertical dots on the right hand side of each table row. In practice, only the View Details and Copy ID operations are available for all component types.
When compute nodes are in the discovery state or coming up, their status is 'Failed' until the hardware process transitions them to 'Ready to Provision'. This process typically takes under five minutes. If the failed state persists, use the Service CLI command list ComputeNode to determine the provisioning state of the compute nodes and take appropriate action.
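The list ComputeNode output can be post-processed to spot nodes stuck outside the expected state. The following sketch is illustrative only and assumes the four-column table layout shown in this chapter; the sample output is abridged:

```python
def provisioning_states(output: str) -> dict:
    """Map compute node name -> provisioningState from 'list ComputeNode'
    output. Assumes the columns: id, name, provisioningState,
    provisioningType."""
    states = {}
    lines = output.splitlines()
    for i, line in enumerate(lines):
        # The header separator row consists only of dashes.
        if line.split() and set("".join(line.split())) == {"-"}:
            for row in lines[i + 1:]:
                parts = row.split()
                if len(parts) >= 4:
                    # provisioningState may span words ('Ready to Provision')
                    states[parts[1]] = " ".join(parts[2:-1])
            break
    return states

sample = """PCA-ADMIN> list ComputeNode
Command: list ComputeNode
Status: Success
Data:
  id                                     name       provisioningState    provisioningType
  --                                     ----       -----------------    ----------------
  29f68a0e-4744-4a92-9545-7c48fa365d0a   pcacn001   Ready to Provision   Unspecified
  dc8ae567-b07f-48e0-89bd-e57069c20010   pcacn002   Provisioned          KVM"""
```

From this map it is straightforward to flag any node whose state is not 'Ready to Provision' or 'Provisioned'.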
For compute nodes, several other operations are available, either from the Actions menu or from the compute node detail page. Those operations are described in detail in this section, including the equivalent steps in the Service CLI.
Provisioning a Compute Node
Before a compute node can be used to host your compute instances, it must be provisioned by an administrator. The appliance software detects the compute nodes that are installed in the rack and cabled to the switches; these nodes appear in the Rack Units list as Ready to Provision. You can provision them from the Service Web UI or the Service CLI.
Using the Service Web UI
-
In the navigation menu, click Rack Units.
-
In the Rack Units table, click the host name of the compute node you want to provision.
The compute node detail page appears.
-
In the top-right corner of the page, click Controls and select the Provision command.
Using the Service CLI
-
Display the list of compute nodes.
Copy the ID of the compute node you want to provision.
PCA-ADMIN> list ComputeNode
Command: list ComputeNode
Status: Success
Time: 2021-08-20 08:53:56,681 UTC
Data:
  id                                     name       provisioningState    provisioningType
  --                                     ----       -----------------    ----------------
  29f68a0e-4744-4a92-9545-7c48fa365d0a   pcacn001   Ready to Provision   Unspecified
  7a0236f4-b00e-461d-93a0-b22673a18d9c   pcacn003   Ready to Provision   Unspecified
  dc8ae567-b07f-48e0-89bd-e57069c20010   pcacn002   Ready to Provision   Unspecified
-
Provision the compute node with this command:
PCA-ADMIN> provision id=7a0236f4-b00e-461d-93a0-b22673a18d9c
Command: provision id=7a0236f4-b00e-461d-93a0-b22673a18d9c
Status: Success
Time: 2021-08-20 11:35:40,152 UTC
JobId: ea93cac4-4430-4663-aafd-d70701593fb2
Use the job ID to check the status of your provision command.
PCA-ADMIN> show Job id=ea93cac4-4430-4663-aafd-d70701593fb2
[...]
  Done = true
  Name = MODIFY_TYPE
  Run State = Succeeded
-
Repeat the provision command for any other compute nodes you want to provision at this time.
-
Confirm that the compute nodes have been provisioned.
PCA-ADMIN> list ComputeNode
Command: list ComputeNode
Status: Success
Time: 2021-08-20 11:38:29,509 UTC
Data:
  id                                     name       provisioningState   provisioningType
  --                                     ----       -----------------   ----------------
  29f68a0e-4744-4a92-9545-7c48fa365d0a   pcacn001   Provisioned         KVM
  7a0236f4-b00e-461d-93a0-b22673a18d9c   pcacn003   Provisioned         KVM
  dc8ae567-b07f-48e0-89bd-e57069c20010   pcacn002   Provisioned         KVM
Providing Platform Images
Platform images are provided during Private Cloud Appliance installation, and new platform images might be provided during appliance upgrade or patching operations.
During installation, upgrade, and patching, new platform images are placed on the management node in /nfs/shared_storage/oci_compute_images. The image import command described in Importing Platform Images makes the images available to Compute Enclave users.
During upgrade and patching, new versions of an image do not replace existing versions on the management node. If more than three versions of an image are available on the management node, only the newest three versions are shown when images are listed in the Compute Enclave. Older platform images are still available to users by specifying the image OCID.
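The "newest three versions" rule can be illustrated with a short sketch. This is an illustration of the listing behavior described above, not appliance code; it assumes version strings like 2022.08.29_0, which sort chronologically as plain text:

```python
def visible_versions(versions, keep=3):
    """Return the newest `keep` versions, newest first.
    Assumes version strings sort chronologically as text
    (e.g. '2022.08.29_0' > '2022.05.10_0')."""
    return sorted(versions, reverse=True)[:keep]
```

With four available versions, only the newest three would be listed; the oldest remains reachable by specifying its image OCID directly.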
Importing Platform Images
Run the importPlatformImages command to make all images that are in /nfs/shared_storage/oci_compute_images on the management node also available in all compartments in all tenancies.
Best practice is to run the importPlatformImages command after each system upgrade and patch in case any new images were delivered.
PCA-ADMIN> importPlatformImages
Command: importPlatformImages
Status: Running
Time: 2022-11-10 17:35:20,345 UTC
JobId: f21b9d86-ccf2-4bd3-bab9-04dc3adb2966
Use the JobId to get more detailed information about the job. In the following example, no new images have been delivered:
PCA-ADMIN> show job id=f21b9d86-ccf2-4bd3-bab9-04dc3adb2966
Command: show job id=f21b9d86-ccf2-4bd3-bab9-04dc3adb2966
Status: Success
Time: 2022-11-10 17:35:36,023 UTC
Data:
  Id = f21b9d86-ccf2-4bd3-bab9-04dc3adb2966
  Type = Job
  Done = true
  Name = OPERATION
  Progress Message = There are no new platform image files to import
  Run State = Succeeded
  Transcript = 2022-11-10 17:35:20.339 : Created job OPERATION
  Username = admin
Listing Platform Images
Use the listplatformImages command to list all platform images that have been imported from the management node. If you performed an upgrade but did not yet run importPlatformImages, listplatformImages might not show all images that are on the management node.
PCA-ADMIN> listplatformImages
Command: listplatformImages
Status: Success
Time: 2022-11-04 03:28:26,286 UTC
Data:
  id                        displayName                                lifecycleState
  --                        -----------                                --------------
  ocid1.image.unique_ID_1   uln-pca-Oracle-Linux-7.9-2022.08.29_0...   AVAILABLE
  ocid1.image.unique_ID_2   uln-pca-Oracle-Linux-8-2022.08.29_0.oci    AVAILABLE
  ocid1.image.unique_ID_3   uln-pca-Oracle-Solaris-11.4.35-2021.0...   AVAILABLE
Compute Enclave users see the same lifecycleState that listplatformImages shows. Shortly after running importPlatformImages, both listplatformImages and the Compute Enclave might show new images with lifecycleState IMPORTING. When the importPlatformImages job is complete, both listplatformImages and the Compute Enclave show the images as AVAILABLE.
If you delete a platform image as shown in Deleting Platform Images, both listplatformImages and the Compute Enclave show the image as DELETING or DELETED.
Deleting Platform Images
Use the following command to delete the specified platform image. The image shows as DELETING and then DELETED in listplatformImages output and in the Compute Enclave, and eventually is not listed at all. However, the image file is not deleted from the management node, and running the importPlatformImages command re-imports the image so that the image is again available in all compartments.
PCA-ADMIN> deleteplatformImage imageId=ocid1.image.unique_ID_3
Command: deleteplatformImage imageId=ocid1.image.unique_ID_3
Status: Running
Time: 2022-11-04 03:30:27,891 UTC
JobId: 401567c3-3662-46bb-89d2-b7ad1541fa2d

PCA-ADMIN> listplatformImages
Command: listplatformImages
Status: Success
Time: 2022-11-04 03:30:43,159 UTC
Data:
  id                        displayName                                lifecycleState
  --                        -----------                                --------------
  ocid1.image.unique_ID_1   uln-pca-Oracle-Linux-7.9-2022.08.29_0...   AVAILABLE
  ocid1.image.unique_ID_2   uln-pca-Oracle-Linux-8-2022.08.29_0.oci    AVAILABLE
  ocid1.image.unique_ID_3   uln-pca-Oracle-Solaris-11.4.35-2021.0...   DELETED
Disabling Compute Node Provisioning
Several compute node operations can only be performed on condition that provisioning has been disabled. This section explains how to impose and release a provisioning lock.
Using the Service Web UI
-
In the navigation menu, click Rack Units.
-
In the Rack Units table, click the host name of the compute node you want to make changes to.
The compute node detail page appears.
-
In the top-right corner of the page, click Controls and select the Provisioning Lock command.
When the confirmation window appears, click Lock to proceed.
After successful completion, the Compute Node Information tab shows Provisioning Locked = Yes.
-
To release the provisioning lock, click Controls and select the Provisioning Unlock command.
When the confirmation window appears, click Unlock to proceed.
After successful completion, the Compute Node Information tab shows Provisioning Locked = No.
Using the Service CLI
-
Display the list of compute nodes.
Copy the ID of the compute node for which you want to disable provisioning operations.
PCA-ADMIN> list ComputeNode
Command: list ComputeNode
Status: Success
Time: 2021-08-23 09:25:56,307 UTC
Data:
  id                                     name       provisioningState   provisioningType
  --                                     ----       -----------------   ----------------
  3e62bf25-a26c-407e-ab8b-df01a4ad98b6   pcacn002   Provisioned         KVM
  f7b8356b-052f-4911-babb-447e6ab9c78d   pcacn003   Provisioned         KVM
  4e06ebdf-faed-484e-996d-d77af786f123   pcacn001   Provisioned         KVM
-
Set a provisioning lock on the compute node.
PCA-ADMIN> provisioningLock id=f7b8356b-052f-4911-babb-447e6ab9c78d
Command: provisioningLock id=f7b8356b-052f-4911-babb-447e6ab9c78d
Status: Success
Time: 2021-08-23 09:29:46,568 UTC
JobId: 6ee78c8a-e227-4d31-a770-9b9c96085f3f
Use the job ID to check the status of your command.
PCA-ADMIN> show Job id=6ee78c8a-e227-4d31-a770-9b9c96085f3f
Command: show Job id=6ee78c8a-e227-4d31-a770-9b9c96085f3f
[...]
  Done = true
  Name = MODIFY_TYPE
  Run State = Succeeded
-
When the job has completed, confirm that the compute node is under provisioning lock.
PCA-ADMIN> show ComputeNode id=f7b8356b-052f-4911-babb-447e6ab9c78d
[...]
  Provisioning State = Provisioned
[...]
  Provisioning Locked = true
  Maintenance Locked = false
All provisioning operations are now disabled until the lock is released.
-
To release the provisioning lock, use this command:
PCA-ADMIN> provisioningUnlock id=f7b8356b-052f-4911-babb-447e6ab9c78d
Command: provisioningUnlock id=f7b8356b-052f-4911-babb-447e6ab9c78d
Status: Success
Time: 2021-08-23 09:44:58,531 UTC
JobId: 523892e8-c2d4-403c-9620-2f3e94015b46
Use the job ID to check the status of your command.
PCA-ADMIN> show Job id=523892e8-c2d4-403c-9620-2f3e94015b46
[...]
  Done = true
  Name = MODIFY_TYPE
  Run State = Succeeded
-
When the job has completed, confirm that the provisioning lock has been released.
PCA-ADMIN> show ComputeNode id=f7b8356b-052f-4911-babb-447e6ab9c78d
[...]
  Provisioning State = Provisioned
[...]
  Provisioning Locked = false
  Maintenance Locked = false
Locking a Compute Node for Maintenance
For maintenance operations, compute nodes must be placed in maintenance mode. This section explains how to impose and release a maintenance lock. Before you can lock a compute node for maintenance, you must disable provisioning.
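The required ordering (provisioning lock first, then maintenance lock) can be modeled as below. This is a toy illustration of the documented workflow, not appliance code; the check that a provisioning unlock is refused while a maintenance lock is still held is an assumption of this sketch, chosen to mirror the recommended release order:

```python
class ComputeNodeLocks:
    """Toy model of the documented lock ordering. The real lock state
    lives in the appliance; this only illustrates the workflow."""

    def __init__(self):
        self.provisioning_locked = False
        self.maintenance_locked = False

    def provisioning_lock(self):
        self.provisioning_locked = True

    def maintenance_lock(self):
        # Documented precondition: provisioning must be disabled first.
        if not self.provisioning_locked:
            raise RuntimeError("disable provisioning before a maintenance lock")
        self.maintenance_locked = True

    def maintenance_unlock(self):
        self.maintenance_locked = False

    def provisioning_unlock(self):
        # Assumed in this sketch: release the maintenance lock first.
        if self.maintenance_locked:
            raise RuntimeError("release the maintenance lock first")
        self.provisioning_locked = False
```

Attempting maintenance_lock() on a node without a provisioning lock raises an error, mirroring the precondition stated above.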
Using the Service Web UI
-
Make sure that provisioning has been disabled on the compute node.
-
In the navigation menu, click Rack Units.
-
In the Rack Units table, click the host name of the compute node that requires maintenance.
The compute node detail page appears.
-
In the top-right corner of the page, click Controls and select the Maintenance Lock command.
When the confirmation window appears, click Lock to proceed.
After successful completion, the Compute Node Information tab shows Maintenance Locked = Yes.
-
To release the maintenance lock, click Controls and select the Maintenance Unlock command.
When the confirmation window appears, click Unlock to proceed.
After successful completion, the Compute Node Information tab shows Maintenance Locked = No.
Using the Service CLI
-
Display the list of compute nodes.
Copy the ID of the compute node that requires maintenance.
PCA-ADMIN> list ComputeNode
Command: list ComputeNode
Status: Success
Time: 2021-08-23 09:25:56,307 UTC
Data:
  id                                     name       provisioningState   provisioningType
  --                                     ----       -----------------   ----------------
  3e62bf25-a26c-407e-ab8b-df01a4ad98b6   pcacn002   Provisioned         KVM
  f7b8356b-052f-4911-babb-447e6ab9c78d   pcacn003   Provisioned         KVM
  4e06ebdf-faed-484e-996d-d77af786f123   pcacn001   Provisioned         KVM
-
Make sure that provisioning has been disabled on the compute node.
-
Lock the compute node for maintenance.
PCA-ADMIN> maintenanceLock id=f7b8356b-052f-4911-babb-447e6ab9c78d
Command: maintenanceLock id=f7b8356b-052f-4911-babb-447e6ab9c78d
Status: Success
Time: 2021-08-23 09:56:05,443 UTC
JobId: e46f6603-2af2-4df4-a0db-b15156491f88
Use the job ID to check the status of your command.
PCA-ADMIN> show Job id=e46f6603-2af2-4df4-a0db-b15156491f88
[...]
  Done = true
  Name = MODIFY_TYPE
  Run State = Succeeded
-
When the job has completed, confirm that the compute node has been locked for maintenance.
PCA-ADMIN> show ComputeNode id=f7b8356b-052f-4911-babb-447e6ab9c78d
[...]
  Provisioning State = Provisioned
[...]
  Provisioning Locked = true
  Maintenance Locked = true
The compute node is now ready for maintenance.
-
To release the maintenance lock, use this command:
PCA-ADMIN> maintenanceUnlock id=f7b8356b-052f-4911-babb-447e6ab9c78d
Command: maintenanceUnlock id=f7b8356b-052f-4911-babb-447e6ab9c78d
Status: Success
Time: 2021-08-23 10:00:53,902 UTC
JobId: 625af20e-4b49-4201-879f-41d4405314c7
Use the job ID to check the status of your command.
PCA-ADMIN> show Job id=625af20e-4b49-4201-879f-41d4405314c7
[...]
  Done = true
  Name = MODIFY_TYPE
  Run State = Succeeded
-
When the job has completed, confirm that the provisioning lock has been released.
PCA-ADMIN> show ComputeNode id=f7b8356b-052f-4911-babb-447e6ab9c78d
[...]
  Provisioning State = Provisioned
[...]
  Provisioning Locked = true
  Maintenance Locked = false
Migrating Instances from a Compute Node
Some compute node operations, such as some maintenance operations, can only be performed if the compute node has no running compute instances. As an administrator, you can migrate all running instances away from a compute node, also known as evacuating the compute node. Instances are live migrated to other compute nodes in the same fault domain.
Important:
If some instances cannot be accommodated on other compute nodes in the current fault domain, those instances are not migrated and continue running on the compute node that you are trying to evacuate. The administrator can see a list of those instances and the reason they could not be migrated.
If some instances cannot be migrated, you can request that instance owners take actions in the Compute Enclave such as moving some instances from this fault domain to a different fault domain, reconfiguring instances to use fewer resources, stopping instances that are not needed currently, or terminating any instances that are no longer needed. To check fault domain and compute node resources, see Viewing Disk Space Usage on the ZFS Storage Appliance.
Another alternative is to specify the force option on the migrate command. When you set the force option, any instances that could not be migrated are stopped.
Using the Service Web UI
-
Disable provisioning on the compute node.
-
In the navigation menu, click Rack Units.
-
In the Rack Units table, click the host name of the compute node that you want to evacuate.
The compute node details page appears.
-
In the top-right corner of the compute node details page, click Controls and select the Migrate All Vms command. Optionally set the Force option.
The Compute service migrates the running instances to other compute nodes. See the Important note at the beginning of this section.
Using the Service CLI
-
Display the list of compute nodes.
Copy the ID of the compute node that you want to evacuate.
PCA-ADMIN> list ComputeNode
Command: list ComputeNode
Status: Success
Time: 2021-08-23 09:25:56,307 UTC
Data:
  id                                     name       provisioningState   provisioningType
  --                                     ----       -----------------   ----------------
  3e62bf25-a26c-407e-ab8b-df01a4ad98b6   pcacn002   Provisioned         KVM
  f7b8356b-052f-4911-babb-447e6ab9c78d   pcacn003   Provisioned         KVM
  4e06ebdf-faed-484e-996d-d77af786f123   pcacn001   Provisioned         KVM
-
Disable provisioning on the compute node.
-
Use the migrateVm command to migrate all running compute instances off the compute node.
PCA-ADMIN> migrateVm id=7a0236f4-b00e-461d-93a0-b22673a18d9c
Command: migrateVm id=7a0236f4-b00e-461d-93a0-b22673a18d9c
Status: Running
Time: 2021-08-20 10:37:05,781 UTC
JobId: 6f1e94bc-7d5b-4002-ada9-7d4b504a2599
To stop any instances that fail to migrate, set the force option:
PCA-ADMIN> migrateVm id=cn_id force=true
Use the job ID to check the status of your command.
PCA-ADMIN> show Job id=6f1e94bc-7d5b-4002-ada9-7d4b504a2599
[...]
  Done = true
  Name = MODIFY_TYPE
  Run State = Succeeded
The Compute service migrates the running instances to other compute nodes. See the Important note at the beginning of this section.
Starting, Resetting or Stopping a Compute Node
The Service Enclave allows administrators to send start, reboot and shutdown signals to the compute nodes.
Using the Service Web UI
-
Make sure that the compute node is locked for maintenance.
-
In the navigation menu, click Rack Units.
-
In the Rack Units table, locate the compute node you want to start, reset or stop.
-
Click the Action menu (three vertical dots) and select the appropriate action: Start, Reset, or Stop.
-
When the confirmation window appears, click the appropriate action button to proceed.
A pop-up window appears for a few seconds to confirm that the compute node is starting, stopping, or restarting.
-
When the compute node is up and running again, release the maintenance and provisioning locks.
Using the Service CLI
-
Display the list of compute nodes.
Copy the ID of the compute node that you want to start, reset or stop.
PCA-ADMIN> list ComputeNode
Command: list ComputeNode
Status: Success
Time: 2021-08-23 09:25:56,307 UTC
Data:
  id                                     name       provisioningState   provisioningType
  --                                     ----       -----------------   ----------------
  3e62bf25-a26c-407e-ab8b-df01a4ad98b6   pcacn002   Provisioned         KVM
  f7b8356b-052f-4911-babb-447e6ab9c78d   pcacn003   Provisioned         KVM
  4e06ebdf-faed-484e-996d-d77af786f123   pcacn001   Provisioned         KVM
-
Make sure that the compute node is locked for maintenance.
-
Start, reset or stop the compute node using the corresponding command:
PCA-ADMIN> start ComputeNode id=f7b8356b-052f-4911-babb-447e6ab9c78d
Command: start ComputeNode id=f7b8356b-052f-4911-babb-447e6ab9c78d
Status: Success
Time: 2021-08-23 09:26:06,446 UTC
Data: Success

PCA-ADMIN> reset id=f7b8356b-052f-4911-babb-447e6ab9c78d
Command: reset id=f7b8356b-052f-4911-babb-447e6ab9c78d
Status: Success
Time: 2021-08-23 09:27:06,434 UTC
Data: Success

PCA-ADMIN> stop ComputeNode id=f7b8356b-052f-4911-babb-447e6ab9c78d
Command: stop ComputeNode id=f7b8356b-052f-4911-babb-447e6ab9c78d
Status: Success
Time: 2021-08-23 09:31:38,271 UTC
Data: Success
-
When the compute node is up and running again, release the maintenance and provisioning locks.
Deprovisioning a Compute Node
If you need to take a compute node out of service, for example to replace a defective one, you must deprovision it first, so that its data is removed cleanly from the system databases.
Using the Service Web UI
-
In the navigation menu, click Rack Units.
-
In the Rack Units table, click the host name of the compute node you want to deprovision.
The compute node detail page appears.
-
In the top-right corner of the page, click Controls and select the Provisioning Lock command.
When the confirmation window appears, click Lock to proceed.
After successful completion, the Compute Node Information tab shows Provisioning Locked = Yes.
-
Make sure that no more compute instances are running on the compute node.
Click Controls and select the Migrate All Vms command. The system migrates the instances to other compute nodes.
-
To deprovision the compute node, click Controls and select the Deprovision command.
When the confirmation window appears, click Deprovision to proceed.
After successful completion, the Compute Node Information tab shows Provisioning State = Ready to Provision.
Using the Service CLI
-
Display the list of compute nodes.
Copy the ID of the compute node you want to deprovision.
PCA-ADMIN> list ComputeNode
Command: list ComputeNode
Status: Success
Time: 2021-08-20 08:53:56,681 UTC
Data:
  id                                     name       provisioningState   provisioningType
  --                                     ----       -----------------   ----------------
  29f68a0e-4744-4a92-9545-7c48fa365d0a   pcacn001   Provisioned         KVM
  7a0236f4-b00e-461d-93a0-b22673a18d9c   pcacn003   Provisioned         KVM
  dc8ae567-b07f-48e0-89bd-e57069c20010   pcacn002   Provisioned         KVM
-
Set a provisioning lock on the compute node.
PCA-ADMIN> provisioningLock id=7a0236f4-b00e-461d-93a0-b22673a18d9c
Command: provisioningLock id=7a0236f4-b00e-461d-93a0-b22673a18d9c
Status: Success
Time: 2021-08-20 10:30:00,320 UTC
JobId: ed4a4646-6d73-41f9-9cb0-73ea35e0d766
Use the job ID to check the status of your command.
PCA-ADMIN> show Job id=ed4a4646-6d73-41f9-9cb0-73ea35e0d766
[...]
  Done = true
  Name = MODIFY_TYPE
  Run State = Succeeded
-
Confirm that the compute node is under provisioning lock.
PCA-ADMIN> show ComputeNode id=7a0236f4-b00e-461d-93a0-b22673a18d9c
[...]
  Provisioning Locked = true
-
Migrate all running compute instances off the compute node you want to deprovision.
PCA-ADMIN> migrateVm id=7a0236f4-b00e-461d-93a0-b22673a18d9c
Command: migrateVm id=7a0236f4-b00e-461d-93a0-b22673a18d9c
Status: Running
Time: 2021-08-20 10:37:05,781 UTC
JobId: 6f1e94bc-7d5b-4002-ada9-7d4b504a2599
Use the job ID to check the status of your command.
PCA-ADMIN> show Job id=6f1e94bc-7d5b-4002-ada9-7d4b504a2599
Command: show Job id=6f1e94bc-7d5b-4002-ada9-7d4b504a2599
Status: Success
Time: 2021-08-20 10:39:59,025 UTC
Data:
  [...]
  Done = true
  Name = MODIFY_TYPE
  Run State = Succeeded
-
Deprovision the compute node with this command:
PCA-ADMIN> deprovision id=7a0236f4-b00e-461d-93a0-b22673a18d9c
Command: deprovision id=7a0236f4-b00e-461d-93a0-b22673a18d9c
Status: Success
Time: 2021-08-20 11:30:43,793 UTC
JobId: 9868fdac-ddb6-4260-9ce1-c018cf2ddc8d
Use the job ID to check the status of your deprovision command.
PCA-ADMIN> show Job id=9868fdac-ddb6-4260-9ce1-c018cf2ddc8d
[...]
  Done = true
  Name = MODIFY_TYPE
  Run State = Succeeded
-
Confirm that the compute node has been deprovisioned.
PCA-ADMIN> list ComputeNode
Command: list ComputeNode
Status: Success
Time: 2021-08-20 08:53:56,681 UTC
Data:
  id                                    name      provisioningState   provisioningType
  --                                    ----      -----------------   ----------------
  29f68a0e-4744-4a92-9545-7c48fa365d0a  pcacn001  Provisioned         KVM
  7a0236f4-b00e-461d-93a0-b22673a18d9c  pcacn003  Ready to Provision  Unspecified
  dc8ae567-b07f-48e0-89bd-e57069c20010  pcacn002  Provisioned         KVM
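Several steps in this procedure follow the same pattern: run a command, capture the JobId, then run show Job until Done = true and Run State = Succeeded. That loop can be sketched as follows; the run_cli callable is a hypothetical helper that executes a Service CLI command and returns its text output, and the parser simply reads the "Key = Value" lines shown in the examples above.

```python
import time

def parse_cli_output(text):
    """Parse 'Key = Value' lines from Service CLI output into a dict."""
    fields = {}
    for line in text.splitlines():
        if " = " in line:
            key, _, value = line.partition(" = ")
            fields[key.strip()] = value.strip()
    return fields

def wait_for_job(run_cli, job_id, interval=5, timeout=600):
    """Poll 'show Job' until Done = true; raise if the job fails or times out.

    run_cli is a hypothetical callable (not part of the Service CLI) that
    executes a command and returns its output as a string.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        fields = parse_cli_output(run_cli(f"show Job id={job_id}"))
        if fields.get("Done") == "true":
            if fields.get("Run State") != "Succeeded":
                raise RuntimeError(f"Job {job_id} ended in {fields.get('Run State')}")
            return fields
        time.sleep(interval)
    raise TimeoutError(f"Job {job_id} did not finish within {timeout}s")
```

The same helper can back the lock, migrate, and deprovision steps, since each returns a JobId in the same format.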
Configuring the Active Directory Domain for File Storage
The file storage service in Oracle Private Cloud Appliance enables users of Microsoft Windows instances to map a network drive, or mount a network share. Both the NFS and SMB protocols are supported, but SMB requires that the Microsoft Windows instances and the Private Cloud Appliance belong to the same Active Directory domain. This section provides instructions to set up the Active Directory domain in the Service Enclave.
Using the Service Web UI
-
Verify that DNS is configured on the appliance.
-
In the navigation menu, click Network Environment.
-
In the Network Environment Information detail page, select the DNS Servers tab and make sure that DNS servers are configured.
DNS is required because, during domain configuration, the system searches for a matching SRV record in order to locate the controllers of the Active Directory domain.
-
-
In the navigation menu, click Active Directory Domain.
-
Verify that no Active Directory domain is currently configured. The configuration details should show "Status = disabled" and "Domain = Not Available".
-
Click Edit to change the Active Directory domain configuration.
-
In the Active Directory Domain Setting window, enter these parameters:
-
the name of the Active Directory domain the appliance is meant to join
-
a user name and password that enable the appliance to join the domain
-
optionally, an organizational unit
-
-
Click Submit to apply the new configuration.
-
Verify that the Active Directory domain is configured correctly. The configuration details should show "Status = online" and the newly configured domain name should appear in the Domain field.
-
To remove the ZFS Storage Appliance from the Active Directory domain again, you must use the Service CLI, as documented below. Refer to the final step in the Service CLI instructions.
Using the Service CLI
-
Gather the information that you need to run the command:
-
the name of the Active Directory domain the appliance is meant to join
-
an account (user name and password) with authorization to join the Active Directory domain
-
-
Verify that DNS is configured on the appliance. During domain configuration, the system searches for a matching SRV record in order to locate the controllers of the Active Directory domain.
PCA-ADMIN> show NetworkConfig
Command: show NetworkConfig
Status: Success
Time: 2021-12-17 12:20:51,238 UTC
Data:
  Uplink Port Speed = 100
  Uplink Port Count = 2
  Uplink Vlan Mtu = 9216
  [...]
  DNS Address1 = 192.0.2.201
  DNS Address2 = 192.0.2.202
  DNS Address3 = 10.25.0.101
  Management Node1 Hostname = mypca-mn1
  Management Node2 Hostname = mypca-mn2
  Management Node3 Hostname = mypca-mn3
  [...]
  Network Config Lifecycle State = ACTIVE
-
Verify that no Active Directory domain is currently configured.
PCA-ADMIN> show ZFSAdDomain
Command: show ZFSAdDomain
Status: Success
Time: 2021-12-17 12:17:42,734 UTC
Data:
  Status = disabled
  Mode = workgroup
  Service href = /api/service/v2/services/ad
  Domain href = /api/service/v2/services/ad/domain
  Workgroup href = /api/service/v2/services/ad/workgroup
  PasswordSet = false
  Preexist = false
  Workgroup = WORKGROUP
-
Configure the Active Directory domain by entering the name of the domain, and a user name and password that enable the appliance to join the domain.
PCA-ADMIN> configZFSAdDomain domain=ad.example.com user=Administrator password=************
Command: configZFSAdDomain domain=ad.example.com user=Administrator password=*****
Status: Success
Time: 2021-12-17 12:24:25,333 UTC
JobId: 7e6abf2d-9f6a-4c32-8f18-5142f6eda3c5
-
Use the job ID to check the status of your command.
When the job has completed successfully, verify the Active Directory zone configuration and status.
PCA-ADMIN> show ZFSAdDomain
Command: show ZFSAdDomain
Status: Success
Time: 2021-12-17 12:35:04,944 UTC
Data:
  Status = online
  Mode = domain
  Service href = /api/service/v2/services/ad
  Domain href = /api/service/v2/services/ad/domain
  Workgroup href = /api/service/v2/services/ad/workgroup
  PasswordSet = false
  Preexist = false
-
To remove the ZFS Storage Appliance from the Active Directory domain again, set its configuration back to workgroup mode.
PCA-ADMIN> configZFSAdWorkgroup workgroupName=WORKGROUP
Command: configZFSAdWorkgroup workgroupName=WORKGROUP
Status: Success
Time: 2022-08-31 07:47:38,916 UTC
JobId: 1329e43a-3ed6-4588-b90b-a45506271df8

PCA-ADMIN> show zfsAdDomain
Command: show zfsAdDomain
Status: Success
Time: 2022-08-31 07:48:07,837 UTC
Data:
  Status = disabled
  Mode = workgroup
  Service href = /api/service/v2/services/ad
  Domain href = /api/service/v2/services/ad/domain
  Workgroup href = /api/service/v2/services/ad/workgroup
  PasswordSet = false
  Preexist = false
  Workgroup = WORKGROUP
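The verification steps in this procedure both come down to reading key/value fields out of show ZFSAdDomain output. A minimal sketch, assuming the output shape shown above; the parsing helper is illustrative and not part of the Service CLI:

```python
def ad_domain_status(show_output):
    """Return (mode, status) from 'show ZFSAdDomain' output text."""
    fields = {}
    for line in show_output.splitlines():
        if " = " in line:
            key, _, value = line.partition(" = ")
            fields[key.strip()] = value.strip()
    return fields.get("Mode"), fields.get("Status")

def is_joined(show_output):
    """True when the ZFS Storage Appliance is in an AD domain and online."""
    mode, status = ad_domain_status(show_output)
    return mode == "domain" and status == "online"
```

Before configuration the expected result is workgroup/disabled (not joined); after a successful configZFSAdDomain run it is domain/online.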
Reconfiguring the Network Environment
From the Network Environment list of the Service Web UI, an administrator can edit the network environment information provided during initial system setup. Carefully plan any changes you make in this area: these parameters define the connections between your data center and the Private Cloud Appliance, and changing them can disrupt system operations.
Editing Routing Information
Caution:
Changing the routing information of your dynamic or static network topology is not supported.
Editing Management Node Information
This section explains how to edit IP and hostname information for your management nodes.
Caution:
Changing management node parameters can cause system disruption.
Using the Service Web UI
-
In the navigation menu, click Network Environment.
-
In the Network Environment Information page, click the Management Nodes tab.
The Management Nodes details appear.
-
In the top-right corner of the page, click Edit.
-
Click Next to navigate to the page you want to edit, then update the appropriate fields.
For field descriptions, see the Initial Installation Checklist section in the Oracle Private Cloud Appliance Installation Guide.
-
Click Save Changes.
Using the Service CLI
-
Display the current network configuration information using the
show NetworkConfig
command.

PCA-ADMIN> show NetworkConfig
Command: show NetworkConfig
Status: Success
Time: 2021-09-28 17:31:33,990 UTC
Data:
  Uplink Port Speed = 100
  Uplink Port Count = 2
  Uplink Vlan Mtu = 9216
  Spine1 Ip = 10.n.n.12
  Spine2 Ip = 10.n.n.13
  Uplink Netmask = 255.255.255.0
  Management VIP Hostname = ukpca01mn
  Management VIP 100g = 10.n.n.8
  NTP Server(s) = 100.n.n.254
  Uplink Port Fec = auto
  Public Ip range/list = 10.n.n.2/32,10.n.n.3/32,10.n.n.4/32,10.n.n.5/32,10.n.n.6/32,10.n.n.7/32
  DNS Address1 = 206.n.n.1
  DNS Address2 = 206.n.n.2
  DNS Address3 = 10.n.n.197
  Management Node1 Hostname = ukpca01-mn1
  Management Node2 Hostname = ukpca01-mn2
  Management Node3 Hostname = ukpca01-mn3
  100g Management Node1 Ip = 10.n.n.9
  100g Management Node2 Ip = 10.n.n.10
  100g Management Node3 Ip = 10.n.n.11
  Object Storage Ip = 10.n.n.1
  Enable Admin Network = false
  Static Routing = true
  Spine VIP = 10.n.n.14
  Uplink Gateway = 10.n.n.1
  Uplink VLAN = 799
  Uplink Hsrp Group = 61
  BGP Authentication = false
-
Use the
edit NetworkConfig
command to change any of these management node parameters:-
Management Node 1 IP
-
Management Node 1 Hostname
-
Management Node 2 IP
-
Management Node 2 Hostname
-
Management Node 3 IP
-
Management Node 3 Hostname
-
Management Node VIP
-
Management Node VIP Hostname
PCA-ADMIN> edit NetworkConfig mgmt01Ip100g=172.n.n.190 mgmt02Ip100g=172.n.n.191
Command: edit NetworkConfig mgmt01Ip100g=172.n.n.190 mgmt02Ip100g=172.n.n.191
Status: Success
Time: 2021-09-27 14:25:00,603 UTC
JobId: 52f5177d-402a-4a52-98fe-1cff9c1f26be
PCA-ADMIN>
-
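The CLI parameter names differ from the display labels in the list above; the sample session sets the 100g management node IPs with mgmt01Ip100g and mgmt02Ip100g. A sketch of assembling such an edit command from a dict of parameters; any key not shown in the sample session (for example a hypothetical mgmt03Ip100g) is an assumption:

```python
def build_edit_command(params):
    """Assemble an 'edit NetworkConfig' Service CLI command string.

    params maps CLI parameter names to values. Only mgmt01Ip100g and
    mgmt02Ip100g appear in the sample session; other keys used with
    this helper are assumptions that must be checked against the CLI.
    """
    if not params:
        raise ValueError("no parameters to edit")
    pairs = " ".join(f"{key}={value}" for key, value in params.items())
    return f"edit NetworkConfig {pairs}"
```

Building the command string up front makes it easy to review the exact parameter set before pasting it into an interactive PCA-ADMIN session.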
Editing Data Center Uplink Information
This section explains how to edit uplink information for your configuration.
Caution:
Reconfiguring the Private Cloud Appliance connection to the data center causes an interruption of all network connectivity to and from the appliance. No network traffic is possible while the physical connections are reconfigured. All connections are automatically restored when the configuration update is complete.
Using the Service Web UI
-
In the navigation menu, click Network Environment.
-
In the Network Environment Information page, click the Uplink tab.
The Uplink details appear.
-
In the top-right corner of the page, click Edit.
-
Click Next to navigate to the page you want to edit, then update the appropriate fields.
For field descriptions, see the Initial Installation Checklist section in the Oracle Private Cloud Appliance Installation Guide.
-
Click Save Changes.
Using the Service CLI
-
Display the current network configuration information using the
show NetworkConfig
command.

PCA-ADMIN> show NetworkConfig
Command: show NetworkConfig
Status: Success
Time: 2021-09-28 17:31:33,990 UTC
Data:
  Uplink Port Speed = 100
  Uplink Port Count = 2
  Uplink Vlan Mtu = 9216
  Spine1 Ip = 10.n.n.12
  Spine2 Ip = 10.n.n.13
  Uplink Netmask = 255.255.255.0
  Management VIP Hostname = ukpca01mn
  Management VIP 100g = 10.n.n.8
  NTP Server(s) = 100.n.n.254
  Uplink Port Fec = auto
  Public Ip range/list = 10.n.n.2/32,10.n.n.3/32,10.n.n.4/32,10.n.n.5/32,10.n.n.6/32,10.n.n.7/32
  DNS Address1 = 206.n.n.1
  DNS Address2 = 206.n.n.2
  DNS Address3 = 10.n.n.197
  Management Node1 Hostname = ukpca01-mn1
  Management Node2 Hostname = ukpca01-mn2
  Management Node3 Hostname = ukpca01-mn3
  100g Management Node1 Ip = 10.n.n.9
  100g Management Node2 Ip = 10.n.n.10
  100g Management Node3 Ip = 10.n.n.11
  Object Storage Ip = 10.n.n.1
  Enable Admin Network = false
  Static Routing = true
  Spine VIP = 10.n.n.14
  Uplink Gateway = 10.n.n.1
  Uplink VLAN = 799
  Uplink Hsrp Group = 61
  BGP Authentication = false
-
Use the
edit NetworkConfig
command to change any of these data center uplink parameters:-
Uplink Port Speed
-
Uplink Port Count
-
Uplink VLAN MTU
-
Uplink Port FEC
PCA-ADMIN> edit NetworkConfig uplinkPortCount=2
Command: edit NetworkConfig uplinkPortCount=2
Time: 2021-09-27 14:27:00,605 UTC
JobId: 42f5137f-122a-4a52-98fe-1cfv9c1f26ve
PCA-ADMIN>
-
Updating NTP Server Information
This section explains how to edit or add NTP server IP addresses.
Using the Service Web UI
-
In the navigation menu, click Network Environment.
-
In the Network Environment Information page, click the NTP tab.
The NTP details appear.
-
In the top-right corner of the page, click Edit.
-
Click Next to navigate to the page you want to edit, then update the appropriate fields.
For field descriptions, see the Initial Installation Checklist section in the Oracle Private Cloud Appliance Installation Guide.
-
Click Save Changes.
Using the Service CLI
-
Display the current network configuration information using the
show NetworkConfig
command.

PCA-ADMIN> show NetworkConfig
Command: show NetworkConfig
Status: Success
Time: 2021-09-28 17:31:33,990 UTC
Data:
  Uplink Port Speed = 100
  Uplink Port Count = 2
  Uplink Vlan Mtu = 9216
  Spine1 Ip = 10.n.n.12
  Spine2 Ip = 10.n.n.13
  Uplink Netmask = 255.255.255.0
  Management VIP Hostname = ukpca01mn
  Management VIP 100g = 10.n.n.8
  NTP Server(s) = 100.n.n.254
  Uplink Port Fec = auto
  Public Ip range/list = 10.n.n.2/32,10.n.n.3/32,10.n.n.4/32,10.n.n.5/32,10.n.n.6/32,10.n.n.7/32
  DNS Address1 = 206.n.n.1
  DNS Address2 = 206.n.n.2
  DNS Address3 = 10.n.n.197
  Management Node1 Hostname = ukpca01-mn1
  Management Node2 Hostname = ukpca01-mn2
  Management Node3 Hostname = ukpca01-mn3
  100g Management Node1 Ip = 10.n.n.9
  100g Management Node2 Ip = 10.n.n.10
  100g Management Node3 Ip = 10.n.n.11
  Object Storage Ip = 10.n.n.1
  Enable Admin Network = false
  Static Routing = true
  Spine VIP = 10.n.n.14
  Uplink Gateway = 10.n.n.1
  Uplink VLAN = 799
  Uplink Hsrp Group = 61
  BGP Authentication = false
-
Use the
edit NetworkConfig
command to change the NTP servers. Enter multiple IP addresses in a comma-separated list:

PCA-ADMIN> edit NetworkConfig ntpIps=100.n.n.254,100.n.n.253
Command: edit NetworkConfig ntpIps=100.n.n.254,100.n.n.253
Time: 2021-09-27 14:31:00,605 UTC
JobId: 42f5137f-122a-4a52-98fe-1cfv9c1f26ve
PCA-ADMIN>
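Because ntpIps takes a comma-separated list with no spaces, a malformed entry fails the whole edit. A small pre-check, using only the Python standard library, that the list is well formed before it is pasted into the CLI; this is a convenience sketch, not part of the appliance software:

```python
import ipaddress

def format_ntp_list(servers):
    """Validate NTP server IPs and join them in the comma-separated,
    space-free form that 'edit NetworkConfig ntpIps=...' expects."""
    validated = []
    for server in servers:
        # ip_address raises ValueError for anything that is not a valid IP
        validated.append(str(ipaddress.ip_address(server.strip())))
    return ",".join(validated)
```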
Editing Administration Network Information
If you use the optional Administration Network, you can update the parameters using these procedures.
Caution:
If you are not currently using a separate Administration Network, the Network Environment Information page in the Service Web UI will not display an Admin Network tab or any of the related configuration parameters. You must first enable the Administration Network.
Once an Administration Network is configured, it cannot be disabled again.
Using the Service Web UI
Scenario 1: Administration Network Disabled
If you need to enable and configure a separate Administration Network, proceed as follows:
-
In the navigation menu, click Network Environment.
-
In the top-right corner of the page, click Edit.
-
In the wizard, navigate to the Admin Network tab and set Admin Networking to Enable.
-
Enter all the required parameters in the respective fields on the form.
For field descriptions, see the Initial Installation Checklist section in the Oracle Private Cloud Appliance Installation Guide.
-
Click Save Changes.
Scenario 2: Administration Network Enabled
If you already configured a separate Administration Network and need to edit its settings, proceed as follows:
-
In the navigation menu, click Network Environment.
-
In the Network Environment Information page, click the Admin Network tab.
The Admin Network details appear.
-
In the top-right corner of the page, click Edit.
-
Click Next to navigate to the page you want to edit, then update the appropriate fields.
For field descriptions, see the Initial Installation Checklist section in the Oracle Private Cloud Appliance Installation Guide.
-
Click Save Changes.
Using the Service CLI
Caution:
If you are not currently using a separate Administration Network, the Service CLI output will not display any Admin Network parameters. You must first enable the Administration Network.
-
Display the current network configuration information using the
show NetworkConfig
command.

PCA-ADMIN> show NetworkConfig
Command: show NetworkConfig
Status: Success
Time: 2022-10-11 07:13:12,186 UTC
Data:
  Uplink Port Speed = 100
  Uplink Port Count = 4
  Uplink Vlan Mtu = 9216
  Spine1 Ip = 10.10.10.97,10.10.10.101
  Spine2 Ip = 10.10.10.99,10.10.10.103
  Uplink Netmask = 255.255.255.254,255.255.255.254
  Management VIP Hostname = mypca
  Management VIP = 10.10.10.107
  NTP Server(s) = 10.80.211.105,10.211.17.1,10.68.48.1
  Uplink Port Fec = auto
  Public Ip range/list = 10.10.10.114/31,10.10.10.116/31,10.10.10.118/31,10.10.10.120/31,10.10.10.122/31,10.10.10.124/31,10.10.10.126/32
  Management Node1 Hostname = pcamn01
  Management Node2 Hostname = pcamn02
  Management Node3 Hostname = pcamn03
  Management Node1 Ip = 10.10.10.108
  Management Node2 Ip = 10.10.10.109
  Management Node3 Ip = 10.10.10.110
  Object Storage Ip = 10.10.10.113
  Enable Admin Network = true
  Admin Port Speed = 100
  Admin Port Count = 1
  Admin Vlan Mtu = 9216
  Admin Port Fec = auto
  Admin VLAN = 3915
  Admin Spine1 Ip = 10.25.0.111
  Admin Spine2 Ip = 10.25.0.112
  Admin Spine VIP = 10.25.0.110
  Admin Netmask = 255.255.255.0
  Admin Hsrp Group = 152
  Static Routing = false
  Uplink VLAN = 3911
  Peer1 Asn = 50000
  Peer1 Ip = 10.10.10.96,10.10.10.98
  Oracle Asn = 136025
  Bgp Topology = mesh
  Peer2 Asn = 50000
  Peer2 Ip = 10.10.10.100,10.10.10.102
  BGP Authentication = false
  BGP KeepAlive Timer = 60
  BGP Holddown Timer = 180
  Network Config Lifecycle State = ACTIVE
  admin DNS Address1 = 10.25.0.1
  admin Management Node1 Hostname = pcamn01admin.example.com
  admin Management Node2 Hostname = pcamn02admin.example.com
  admin Management Node3 Hostname = pcamn03admin.example.com
  admin Management Node1 Ip = 10.25.0.101
  admin Management Node2 Ip = 10.25.0.102
  admin Management Node3 Ip = 10.25.0.103
  admin Management VIP Hostname = mypcaadmin.example.com
  admin Management VIP = 10.25.0.100
-
Use the
edit NetworkConfig
command to change any of these administration network parameters:
Tip:
Enter
edit networkConfig ?
to display the parameters available for editing.-
Admin Network enable
-
Management node cluster Admin VIP and host name
-
Management node 1-3 Admin IP and host name
-
Admin DNS IP 1-3
-
Admin Port count, speed, FEC
-
Admin CIDR
-
Admin VLAN and MTU
-
Admin Gateway IP
-
Admin Netmask
-
Spine 1+2 Admin IP
-
Spine Admin VIP
PCA-ADMIN> edit NetworkConfig adminPortSpeed=25
Command: edit NetworkConfig adminPortSpeed=25
Time: 2022-10-11 08:01:00,605 UTC
JobId: 62f8137f-772a-4a52-98f4-1cfv9c1f24te

PCA-ADMIN> edit NetworkConfig adminCidr=10.25.0.1/24
Command: edit NetworkConfig adminCidr=10.25.0.1/24
Status: Success
Time: 2022-10-11 08:10:02,640 UTC
JobId: 861381ae-cc63-44a2-a66e-8e095e4a99f9
-
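Note that the adminCidr value in the sample session, 10.25.0.1/24, has host bits set, so a strict CIDR parser would reject it. A sketch that normalizes such a value and reports any admin-network addresses that fall outside it; the address values here are taken from the sample output above, and the helper itself is illustrative:

```python
import ipaddress

def check_admin_cidr(cidr, addresses):
    """Normalize an adminCidr value (host bits allowed) and report any
    admin-network addresses that fall outside the resulting network."""
    network = ipaddress.ip_network(cidr, strict=False)
    outside = [a for a in addresses if ipaddress.ip_address(a) not in network]
    return network, outside
```

For example, checking the admin VIP, spine IPs, and management node IPs from the output above against 10.25.0.1/24 should report nothing outside the normalized 10.25.0.0/24 network.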
Updating DNS Information
This section explains how to edit or add DNS IP addresses.
Using the Service Web UI
-
In the navigation menu, click Network Environment.
-
In the Network Environment Information page, click the DNS tab.
The DNS details appear.
-
In the top-right corner of the page, click Edit.
-
Click Next to navigate to the page you want to edit, then update the appropriate fields.
For field descriptions, see the Initial Installation Checklist section in the Oracle Private Cloud Appliance Installation Guide.
-
Click Save Changes.
Using the Service CLI
-
Display the current network configuration information using the
show NetworkConfig
command.

PCA-ADMIN> show NetworkConfig
Command: show NetworkConfig
Status: Success
Time: 2021-09-28 17:31:33,990 UTC
Data:
  Uplink Port Speed = 100
  Uplink Port Count = 2
  Uplink Vlan Mtu = 9216
  Spine1 Ip = 10.n.n.12
  Spine2 Ip = 10.n.n.13
  Uplink Netmask = 255.255.255.0
  Management VIP Hostname = ukpca01mn
  Management VIP 100g = 10.n.n.8
  NTP Server(s) = 100.n.n.254
  Uplink Port Fec = auto
  Public Ip range/list = 10.n.n.2/32,10.n.n.3/32,10.n.n.4/32,10.n.n.5/32,10.n.n.6/32,10.n.n.7/32
  DNS Address1 = 206.n.n.1
  DNS Address2 = 206.n.n.2
  DNS Address3 = 10.n.n.197
  Management Node1 Hostname = ukpca01-mn1
  Management Node2 Hostname = ukpca01-mn2
  Management Node3 Hostname = ukpca01-mn3
  100g Management Node1 Ip = 10.n.n.9
  100g Management Node2 Ip = 10.n.n.10
  100g Management Node3 Ip = 10.n.n.11
  Object Storage Ip = 10.n.n.1
  Enable Admin Network = false
  Static Routing = true
  Spine VIP = 10.n.n.14
  Uplink Gateway = 10.n.n.1
  Uplink VLAN = 799
  Uplink Hsrp Group = 61
  BGP Authentication = false
-
Use the
edit NetworkConfig
command to change the DNS IP addresses:-
DNS IP1
-
DNS IP2
-
DNS IP3
PCA-ADMIN> edit NetworkConfig DnsIp2=206.n.n.2
Command: edit NetworkConfig DnsIp2=206.n.n.2
Time: 2021-09-27 14:21:00,605 UTC
JobId: 42f5137f-122a-4a52-98fe-1cfv9c1f26ve
PCA-ADMIN>
-
Updating Public IP Information
This section explains how to edit the public IP addresses for your appliance. You can add public IP addresses, or change the currently configured IP addresses.
Caution:
Changing public IP addresses that are in use can cause system disruption.
Using the Service Web UI
-
In the navigation menu, click Network Environment.
-
In the Network Environment Information page, click the Uplink tab.
The Uplink details appear.
-
In the top-right corner of the page, click Edit.
-
Click Next to navigate to the page you want to edit, then update the appropriate fields.
For field descriptions, see the Initial Installation Checklist section in the Oracle Private Cloud Appliance Installation Guide.
-
Click Save Changes.
Using the Service CLI
-
Display the current network configuration information using the
show NetworkConfig
command.

PCA-ADMIN> show NetworkConfig
Command: show NetworkConfig
Status: Success
Time: 2021-09-28 17:31:33,990 UTC
Data:
  Uplink Port Speed = 100
  Uplink Port Count = 2
  Uplink Vlan Mtu = 9216
  Spine1 Ip = 10.n.n.12
  Spine2 Ip = 10.n.n.13
  Uplink Netmask = 255.255.255.0
  Management VIP Hostname = ukpca01mn
  Management VIP 100g = 10.n.n.8
  NTP Server(s) = 100.n.n.254
  Uplink Port Fec = auto
  Public Ip range/list = 10.n.n.2/32,10.n.n.3/32,10.n.n.4/32,10.n.n.5/32,10.n.n.6/32,10.n.n.7/32
  DNS Address1 = 206.n.n.1
  DNS Address2 = 206.n.n.2
  DNS Address3 = 10.n.n.197
  Management Node1 Hostname = ukpca01-mn1
  Management Node2 Hostname = ukpca01-mn2
  Management Node3 Hostname = ukpca01-mn3
  100g Management Node1 Ip = 10.n.n.9
  100g Management Node2 Ip = 10.n.n.10
  100g Management Node3 Ip = 10.n.n.11
  Object Storage Ip = 10.n.n.1
  Enable Admin Network = false
  Static Routing = true
  Spine VIP = 10.n.n.14
  Uplink Gateway = 10.n.n.1
  Uplink VLAN = 799
  Uplink Hsrp Group = 61
  BGP Authentication = false
-
Use the
edit NetworkConfig
command to change the public IP addresses or the object storage public IP address:-
Object Storage Public IP
-
Public IP Range/List
PCA-ADMIN> edit NetworkConfig PublicIps=10.n.n.17/32,10.n.n.18/32,10.n.n.19/32
Command: edit NetworkConfig PublicIps=10.n.n.17/32,10.n.n.18/32,10.n.n.19/32
Time: 2021-09-27 14:21:00,605 UTC
JobId: 42f5137f-122a-4a52-98fe-1cfv9c1f26ve
PCA-ADMIN>
-
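The PublicIps value is a comma-separated list of CIDR entries (often /32 hosts, but wider prefixes such as the /31 entries in the Admin Network example also occur). A sketch that parses such a list and counts the addresses it provides, useful for sanity-checking an edit before applying it; the helper is illustrative, not part of the CLI:

```python
import ipaddress

def parse_public_ips(value):
    """Split a PublicIps value into networks and count the addresses
    the list represents. Raises ValueError on a malformed entry."""
    networks = [ipaddress.ip_network(entry.strip()) for entry in value.split(",")]
    total = sum(net.num_addresses for net in networks)
    return networks, total
```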
Creating and Managing Exadata Networks
Oracle Private Cloud Appliance supports direct connectivity to Oracle Exadata clusters.
This section describes creating and managing Exadata networks from the Service Enclave. Before you can create an Exadata network, you must physically connect your Private Cloud Appliance to an Oracle Exadata rack. For instructions, see the "Optional Connection to Exadata" section in the chapter Configuring Oracle Private Cloud Appliance of the Oracle Private Cloud Appliance Installation Guide.
To use an Exadata network, the VCNs that contain the compute instances connecting to the database nodes must have a dynamic routing gateway (DRG) configured. The enabled subnet needs a route rule with the Exadata CIDR as the destination and the DRG as the target.
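The route rule requirement above can be expressed concretely. A sketch of the rule as a plain mapping; the field names (destination, destinationType, networkEntityId) follow the general shape of an OCI route rule and are illustrative here, not taken from this guide:

```python
def exadata_route_rule(exadata_cidr, drg_ocid):
    """Build the route rule described above: the Exadata CIDR as the
    destination and the DRG as the target. Field names follow the
    OCI route-rule JSON model and are illustrative."""
    return {
        "destination": exadata_cidr,
        "destinationType": "CIDR_BLOCK",
        "networkEntityId": drg_ocid,
    }
```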
For more information about Oracle Exadata Integration, see the "Network Infrastructure" section in the Hardware Overview chapter of the Oracle Private Cloud Appliance Concepts Guide.
Creating an Exadata Network
To set up a network connection between Private Cloud Appliance and an Oracle Exadata system, you need this set of parameters:
Parameter | Example Value | Description
---|---|---
cidr | 10.nn.nn.0/24 | Choose a valid CIDR range that is within the CIDR range of the Oracle Exadata.
spine1Ip | 10.nn.nn.2 | A valid IP address in the CIDR specified.
spine2Ip | 10.nn.nn.3 | A valid IP address in the CIDR specified.
spineVip | 10.nn.nn.1 | A valid IP address in the CIDR specified.
vlan | 3062 | Choose a VLAN from 2 to 3899 that is not in use by the uplink VLAN or other Oracle Exadata VLANs. (VLANs 3900 to 3967 and 3968 to 4095 are reserved.)
ports | 7/1,7/2 | Valid ports are 7/1, 7/2, 7/3, 7/4, 8/1, 8/2, 8/3, 8/4, 9/1, 9/2, 9/3, 9/4, 10/1, 10/2, 10/3, 10/4.
advertiseNetwork | True | True or False: enables or disables the visibility of the Exadata network to the customer's data center servers.
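The constraints in the table above can be checked mechanically before running exaDataCreateNetwork. A sketch under the simplification that only the documented 2-3899 VLAN range and the fixed port list are enforced; checking for VLANs already in use by the uplink or other Exadata networks is left out:

```python
import ipaddress

# All valid uplink ports from the table above: cards 7-10, ports 1-4.
VALID_PORTS = {f"{card}/{port}" for card in (7, 8, 9, 10) for port in (1, 2, 3, 4)}

def validate_exadata_network(cidr, vlan, spine1_ip, spine2_ip, spine_vip, ports):
    """Check exaDataCreateNetwork parameters against the documented
    constraints; returns a list of error strings (empty when valid)."""
    errors = []
    network = ipaddress.ip_network(cidr)
    for name, ip in (("spine1Ip", spine1_ip), ("spine2Ip", spine2_ip),
                     ("spineVip", spine_vip)):
        if ipaddress.ip_address(ip) not in network:
            errors.append(f"{name} {ip} is not inside {cidr}")
    if not 2 <= vlan <= 3899:
        errors.append(f"vlan {vlan} is outside the allowed 2-3899 range")
    for port in ports.split(","):
        if port not in VALID_PORTS:
            errors.append(f"port {port} is not a valid uplink port")
    return errors
```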
Using the Service Web UI
-
Determine the Exadata network parameters listed in the table above.
-
In the Dashboard, click the Rack Units quick action tile.
-
In the PCA Config navigation menu on the Rack Units page, click Exadata Networks.
-
In the top-right corner above the table, click Create Exadata Network.
-
Fill out the Exadata Network form using the parameters you collected in advance.
By default, the network is not advertised to the data center network. To advertise it, click the slider to set it to on (true).
-
Click Submit to create the new network. It appears in the Exadata Networks table and its Lifecycle State changes to Available when the configuration has been applied successfully.
-
Next, add a subnet to the Exadata network. See Enabling Oracle Exadata Access.
Using the Service CLI
-
Determine the Exadata network parameters listed in the table above.
-
Create the Exadata network by entering the parameters.
PCA-ADMIN> exaDataCreateNetwork cidr="10.nn.nn.0/24" vlan=2001 spine1Ip="10.nn.nn.101" \
spine2Ip="10.nn.nn.102" spineVip="10.nn.nn.1" ports="7/1,7/2"
Command: exaDataCreateNetwork cidr="10.nn.nn.0/24" vlan=2001 spine1Ip="10.nn.nn.101" spine2Ip="10.nn.nn.102" spineVip="10.nn.nn.1" ports="7/1,7/2"
Status: Success
Time: 2021-11-22 06:10:05,260 UTC
Data:
  ocid1.exadata.unique_id
-
Next, add a subnet to the Exadata network. See Enabling Oracle Exadata Access.
Enabling Oracle Exadata Access
Enabling access from a subnet to the Exadata network must be done through the Service CLI.
Subnets that have been granted access appear in the Exadata network detail page under Access Lists, grouped by their parent VCN.
Using the Service CLI
-
Get the OCID of the Exadata network for which you want to enable access, using the
exaDataListNetwork
command. -
Enable access to a configured Exadata network.
PCA-ADMIN> exaDataEnableAccess exadataNetworkId=ocid1.exadata.unique_id \
subnetId=ocid1.subnet.unique_id
Command: exaDataEnableAccess exadataNetworkId=ocid1.exadata.unique_id subnetId=ocid1.subnet.unique_id
Status: Success
Time: 2021-11-17 18:56:45,251 UTC
Data:
  id
  --
  ocid1.vcn.unique_id
List Exadata Networks
Using the Service Web UI
-
In the Dashboard, click the Rack Units quick action tile.
-
In the PCA Config navigation menu on the Rack Units page, click Exadata Networks. The table contains all configured Exadata networks.
Using the Service CLI
-
Use the
exaDataListNetwork
command to display configured Exadata networks, including their OCIDs.

PCA-ADMIN> exaDataListNetwork
Command: exaDataListNetwork
Status: Success
Time: 2021-11-22 06:10:17,617 UTC
Data:
  id                       vlan  cidr           spine1Ip      spine2Ip      spineVip    ports
  --                       ----  ----           --------      --------      --------    -----
  ocid1.exadata.unique_id  2001  10.nn.nn.0/24  10.nn.nn.101  10.nn.nn.102  10.nn.nn.1  7/1,7/2
Get Exadata Network Details
Using the Service Web UI
-
Navigate to the Exadata Network page.
-
In the overview table, click the name (OCID) of the network for which you want to display details.
The Exadata Network detail page shows the configuration parameters, the state of the network, and the subnets that have been granted access.
Using the Service CLI
-
Get the OCID of the Exadata network for which you want details, using the
exaDataListNetwork
command. -
Use the
exaDataGetNetwork
command to display details about a specific Exadata network, including the state of the network and the subnet and VCN IDs.

PCA-ADMIN> exaDataGetNetwork exadataNetworkId=ocid1.exadata.unique_id
Command: exaDataGetNetwork exadataNetworkId=ocid1.exadata.unique_id
Status: Success
Time: 2021-11-22 19:34:56,917 UTC
Data:
  CIDR = 10.nn.nn.0/24
  Vlan = 2001
  Spine1Ip = 10.nn.nn.101
  Spine2Ip = 10.nn.nn.102
  SpineVip = 10.nn.nn.1
  Ports = 7/1,7/2
  advertiseNetwork = false
  Access List 1 - Vcn Id = ocid1.vcn.unique_id
  Access List 1 - Subnet Ids 1 = ocid1.subnet.unique_id
  Access List 1 - Subnet Ids 2 = ocid1.subnet.unique_id
  Access List 2 - Vcn Id = ocid1.vcn.unique_id
  Access List 2 - Subnet Ids 1 = ocid1.subnet.unique_id
  Lifecycle State = AVAILABLE
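The access lists in this output pair each VCN OCID with one or more subnet OCIDs. A sketch that groups the subnets by VCN from the raw text; the parser assumes the "Access List N - ..." line format shown above and is not part of the Service CLI:

```python
import re

def parse_access_lists(show_output):
    """Group subnet OCIDs by VCN OCID from exaDataGetNetwork output."""
    vcns = {}      # access-list index -> VCN OCID
    subnets = {}   # access-list index -> list of subnet OCIDs
    for line in show_output.splitlines():
        match = re.match(r"\s*Access List (\d+) - (Vcn Id|Subnet Ids \d+) = (\S+)",
                         line)
        if not match:
            continue
        index, field, value = match.groups()
        if field == "Vcn Id":
            vcns[index] = value
        else:
            subnets.setdefault(index, []).append(value)
    return {vcns[i]: subnets.get(i, []) for i in vcns}
```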
Disabling Oracle Exadata Access
Disabling access from a subnet to the Exadata network must be done through the Service CLI.
Subnets that have been granted access appear in the Exadata network detail page under Access Lists, grouped by their parent VCN. When you disable access for a given subnet, it is removed from the Access Lists.
Using the Service CLI
-
Get the OCID of the Exadata network for which you want to disable access, using the
exaDataListNetwork
command. -
Get the OCID of the subnet for the Exadata network, using the
exaDataGetNetwork
command. -
Disable access to a configured Exadata network.
PCA-ADMIN> exaDataDisableAccess exadataNetworkId=ocid1.exadata.unique_id \
subnetId=ocid1.subnet.unique_id
Command: exaDataDisableAccess exadataNetworkId=ocid1.exadata.unique_id subnetId=ocid1.subnet.unique_id
Status: Success
Time: 2021-12-15 11:26:40,344 UTC
Data:
  id
  --
  ocid1.vcn.unique_id
PCA-ADMIN>
Deleting an Exadata Network
Using the Service Web UI
-
Make sure that access to the Exadata network you intend to delete has been disabled first.
-
Navigate to the Exadata Network page.
-
Choose one of these options to delete the Exadata network:
-
In the overview table, open the Actions menu on the right hand side of the row and select Delete. When prompted, click Confirm.
-
Open the Exadata network detail page, then click the Delete button in the top-right corner.
-
Using the Service CLI
-
Make sure that access to the Exadata network you intend to delete has been disabled first.
-
Get the OCID of the Exadata network you want to delete, using the
exaDataListNetwork
command. -
Delete the Exadata network.
PCA-ADMIN> exaDataDeleteNetwork exadataNetworkId=ocid1.exadata.unique_id
Command: exaDataDeleteNetwork exadataNetworkId=ocid1.exadata.unique_id
Status: Success
Time: 2021-11-16 05:59:54,177 UTC