Chapter 4 The Oracle Private Cloud Appliance Command Line Interface (CLI)
- 4.1 CLI Usage
- 4.2 CLI Commands
- 4.2.1 add compute-node
- 4.2.2 add initiator
- 4.2.3 add network
- 4.2.4 add network-to-tenant-group
- 4.2.5 add nfs-exception
- 4.2.6 add node-pool
- 4.2.7 add node-pool-node
- 4.2.8 backup
- 4.2.9 configure vhbas
- 4.2.10 create iscsi-storage
- 4.2.11 create lock
- 4.2.12 create network
- 4.2.13 create nfs-storage
- 4.2.14 create kube-cluster
- 4.2.15 create oci-backup
- 4.2.16 create oci-target
- 4.2.17 create tenant-group
- 4.2.18 create uplink-port-group
- 4.2.19 delete config-error
- 4.2.20 delete iscsi-storage
- 4.2.21 delete kube-cluster
- 4.2.22 delete lock
- 4.2.23 delete network
- 4.2.24 delete nfs-storage
- 4.2.25 delete oci-backup
- 4.2.26 delete oci-target
- 4.2.27 delete task
- 4.2.28 delete tenant-group
- 4.2.29 delete uplink-port-group
- 4.2.30 deprovision compute-node
- 4.2.31 diagnose
- 4.2.32 get log
- 4.2.33 list
- 4.2.34 remove compute-node
- 4.2.35 remove initiator
- 4.2.36 remove network
- 4.2.37 remove network-from-tenant-group
- 4.2.38 remove nfs exceptions
- 4.2.39 remove node-pool
- 4.2.40 remove node-pool-node
- 4.2.41 reprovision
- 4.2.42 rerun
- 4.2.43 set system-property
- 4.2.44 set kube-dns
- 4.2.45 set kube-load-balancer
- 4.2.46 set kube-master-pool
- 4.2.47 set kube-network
- 4.2.48 set kube-vm-shape
- 4.2.49 set kube-worker-pool
- 4.2.50 show
- 4.2.51 start
- 4.2.52 start kube-cluster
- 4.2.53 stop
- 4.2.54 stop kube-cluster
- 4.2.55 update appliance
- 4.2.56 update password
- 4.2.57 update compute-node
All Oracle Private Cloud Appliance command line utilities are consolidated into a single command line interface that is accessible via the management node shell by running the pca-admin command located at /usr/sbin/pca-admin. This command is in the system path for the root user, so you should be able to run the command from any location on a management node.
The CLI provides access to all of the tools available in the Oracle Private Cloud Appliance Dashboard, as well as many that do not have a Dashboard equivalent. The design of the CLI makes it possible to script actions that may need to be performed regularly, or to write integration scripts with existing monitoring and maintenance software not directly hosted on the appliance.
It is important to understand that the CLI, described here, is distinct from the Oracle VM Manager command line interface, which is described fully in the Oracle VM documentation available at https://docs.oracle.com/en/virtualization/oracle-vm/3.4/cli/index.html.
In general, it is preferable that CLI usage is restricted to the active management node. While it is possible to run the CLI from either management node, some commands are restricted to the active management node and return an error if you attempt to run them on the passive management node.
4.1 CLI Usage
The Oracle Private Cloud Appliance command line interface is triggered by running the pca-admin command. It can run either in interactive mode (see Section 4.1.1, “Interactive Mode”) or in single-command mode (see Section 4.1.2, “Single-command Mode”) depending on whether you provide the syntax to run a particular CLI command when you invoke the command line interpreter.
The syntax when using the CLI is as follows:
PCA> Command Command_Target <Arguments> Options
where:
- Command is the command type that should be initiated, for example list;
- Command_Target is the Oracle Private Cloud Appliance component or process that should be affected by the command, for example management-node, compute-node, task, etc.;
- <Arguments> consist of positional arguments related to the command target. For instance, when performing a reprovisioning action against a compute node, you provide the specific compute node that should be affected as an argument for this command. For example: reprovision compute-node ovcacn11r1;
- Options consist of options that may be provided as additional parameters to the command to affect its behavior. For instance, the list command provides various sorting and filtering options that can be appended to the command syntax to control how output is returned. For example: list compute-node --filter-column Provisioning_State --filter dead. See Section 4.1.3, “Controlling CLI Output” for more information on many of these options.
The CLI includes its own internal help that can assist you with understanding the commands, command targets, arguments and options available. See Section 4.1.4, “Internal CLI Help” for more information on how to use this help system. When used in interactive mode, the CLI also provides tab completion to assist you with the correct construction of a command. See Section 4.1.1.1, “Tab Completion” for more information on this.
4.1.1 Interactive Mode
The Oracle Private Cloud Appliance command line interface (CLI) provides an interactive shell that can be used for user-friendly command line interactions. This shell provides a closed environment where users can enter commands specific to the management of the Oracle Private Cloud Appliance. By using the CLI in interactive mode, the user can avail of features like tab completion to easily complete commands correctly. By default, running the pca-admin command without providing any additional parameters causes the CLI interpreter to run in interactive mode.
You can identify that a CLI shell is running in interactive mode by its prompt, which is displayed as PCA>.
# pca-admin
Welcome to PCA! Release: 2.4.1
PCA> list management-node
Management_Node IP_Address  Provisioning_Status ILOM_MAC          Provisioning_State Master
--------------- ----------  ------------------- --------          ------------------ ------
ovcamn05r1      192.168.4.3 RUNNING             00:10:e0:e9:1f:c9 running            Yes
ovcamn06r1      192.168.4.4 RUNNING             00:10:e0:e7:26:ad running            None
----------------
2 rows displayed

Status: Success

PCA> exit
#
To exit from the CLI when it is in interactive mode, you can use either the q, quit, or exit command, or alternatively use the Ctrl+D key combination.
4.1.1.1 Tab Completion
The CLI supports tab-completion when in interactive mode. This means that pressing the tab key while entering a command can either complete the command on your behalf, or can indicate options and possible values that can be entered to complete a command. Usually you must press the tab key at least twice to effect tab-completion.
Tab-completion is configured to work at all levels within the CLI and is context sensitive. This means that you can press the tab key to complete or prompt for commands, command targets, options, and for certain option values. For instance, pressing the tab key twice at a blank prompt within the CLI automatically lists all possible commands, while pressing the tab key after typing the first letter or few letters of a command automatically completes the command for you. Once a command is specified, followed by a space, pressing the tab key indicates command targets. If you have specified a command target, pressing the tab key indicates other options available for the command sequence. If you press the tab key after specifying a command option that requires an option value, such as the --filter-column option, the CLI attempts to provide you with the values that can be used with that option.
PCA> <tab>
EOF     backup     create  deprovision  exit  help  q     remove       rerun  shell  start  update
add     configure  delete  diagnose     get   list  quit  reprovision  set    show   stop

PCA> list <tab>
compute-node  lock             mgmt-switch-port  network-port    task          update-task  uplink-port-group
config-error  management-node  network           network-switch  tenant-group  uplink-port

PCA> list com<tab>pute-node
The <tab> indicates where the user pressed the tab key while in an interactive CLI session. In the final example, the command target is automatically completed by the CLI.
4.1.1.2 Running Shell Commands
It is possible to run standard shell commands while you are in the CLI interpreter shell. These can be run by either preceding them with the shell command or by using the ! operator as a shortcut to indicate that the command that follows is a standard shell command. For example:
PCA> shell date
Wed Jun  5 08:15:56 UTC 2019
PCA> !uptime > /tmp/uptime-today
PCA> !rm /tmp/uptime-today
4.1.2 Single-command Mode
The CLI supports 'single-command mode', which allows you to execute a single command from the shell via the CLI and obtain the output before the CLI exits back to the shell. This is especially useful when writing scripts that interact with the CLI, particularly in conjunction with the CLI's JSON output mode described in Section 4.1.3.1, “JSON Output”.
To run the CLI in single-command mode, simply include the full command syntax that you wish to execute as parameters to the pca-admin command.
An example of single command mode is provided below:
# pca-admin list compute-node
Compute_Node IP_Address   Provisioning_Status ILOM_MAC          Provisioning_State
------------ ----------   ------------------- --------          ------------------
ovcacn12r1   192.168.4.8  RUNNING             00:10:e0:e5:e6:d3 running
ovcacn07r1   192.168.4.7  RUNNING             00:10:e0:e6:8d:0b running
ovcacn13r1   192.168.4.11 RUNNING             00:10:e0:e6:f7:f7 running
ovcacn14r1   192.168.4.9  RUNNING             00:10:e0:e7:15:eb running
ovcacn10r1   192.168.4.12 RUNNING             00:10:e0:e7:13:8d running
ovcacn09r1   192.168.4.6  RUNNING             00:10:e0:e6:f8:6f running
ovcacn11r1   192.168.4.10 RUNNING             00:10:e0:e6:f9:ef running
----------------
7 rows displayed
#
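As a sketch of how single-command mode output might be consumed from a script, the tabulated output can be split into rows. The parse_table helper and the parsing rules (header on the first line, a dashed footer ending the table) are illustrative assumptions based on the sample output above, not a documented interface of the appliance.

```python
def parse_table(text):
    """Parse tabulated 'pca-admin list' output into a list of dicts.

    Assumes the first non-empty line holds the column names and that a
    footer line consisting only of dashes marks the end of the data rows.
    """
    lines = [l for l in text.splitlines() if l.strip()]
    header = lines[0].split()
    rows = []
    for line in lines[2:]:                 # skip the header and separator lines
        if set(line.strip()) <= {"-"}:     # footer of dashes ends the table
            break
        rows.append(dict(zip(header, line.split())))
    return rows

# In a real script you might capture the output with, for example,
# subprocess.run(["pca-admin", "list", "compute-node"], ...) as root on a
# management node. Here we use a shortened copy of the sample output above:
sample = """\
Compute_Node IP_Address   Provisioning_Status ILOM_MAC          Provisioning_State
------------ ----------   ------------------- --------          ------------------
ovcacn12r1   192.168.4.8  RUNNING             00:10:e0:e5:e6:d3 running
ovcacn07r1   192.168.4.7  RUNNING             00:10:e0:e6:8d:0b running
----------------
2 rows displayed
"""
rows = parse_table(sample)
print([r["Compute_Node"] for r in rows])  # ['ovcacn12r1', 'ovcacn07r1']
```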
4.1.3 Controlling CLI Output
The CLI provides options to control how output is returned in responses to the various CLI commands that are available. These are provided as additional options as the final portion of the syntax for a CLI command. Many of these options can make it easier to identify particular items of interest through sorting and filtering, or can be particularly useful when scripting solutions as they help to provide output that is more easily parsed.
4.1.3.1 JSON Output
JSON format is a commonly used format to represent data objects in a way that is easy to machine-parse but is equally easy for a user to read. Although JSON was originally developed as a way to represent JavaScript objects, parsers are available for many programming languages, making it an ideal output format for the CLI if you are scripting a custom solution that needs to interface directly with the CLI.
The CLI returns its output for any command in JSON format if the --json option is specified when a command is run. Typically this option is used when running the CLI in single-command mode. An example follows:
# pca-admin list compute-node --json
{
  "00:10:e0:e5:e6:ce": {
    "name": "ovcacn12r1",
    "ilom_state": "running",
    "ip": "192.168.4.8",
    "tenant_group_name": "Rack1_ServerPool",
    "state": "RUNNING",
    "networks": "default_external, default_internal",
    "ilom_mac": "00:10:e0:e5:e6:d3"
  },
  "00:10:e0:e6:8d:06": {
    "name": "ovcacn07r1",
    "ilom_state": "running",
    "ip": "192.168.4.7",
    "tenant_group_name": "Rack1_ServerPool",
    "state": "RUNNING",
    "networks": "default_external, default_internal",
    "ilom_mac": "00:10:e0:e6:8d:0b"
  },
[...]
  "00:10:e0:e6:f9:ea": {
    "name": "ovcacn11r1",
    "ilom_state": "running",
    "ip": "192.168.4.10",
    "tenant_group_name": "",
    "state": "RUNNING",
    "networks": "default_external, default_internal",
    "ilom_mac": "00:10:e0:e6:f9:ef"
  }
}
In some cases the JSON output may contain more information than is displayed in the tabulated output that is usually shown in the CLI when the --json option is not used. Furthermore, the keys used in the JSON output may not map identically to the table column names that are presented in the tabulated output.
Sorting and filtering options are currently not supported in conjunction with JSON output, since these facilities can usually be implemented on the side of the parser.
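Since filtering and sorting are left to the consumer in JSON mode, a minimal parser-side sketch might look as follows. The abbreviated JSON and the "dead" ilom_state value are illustrative, not taken from a live appliance:

```python
import json

# Abbreviated --json output in the shape shown above; the "dead"
# Provisioning/ILOM state is an illustrative value for the example.
raw = """
{
  "00:10:e0:e5:e6:ce": {"name": "ovcacn12r1", "ilom_state": "running", "ip": "192.168.4.8"},
  "00:10:e0:e6:8d:06": {"name": "ovcacn07r1", "ilom_state": "dead", "ip": "192.168.4.7"}
}
"""
nodes = json.loads(raw)

# Filter on a key and sort by name: the parser-side equivalent of the
# --filter-column/--filter and --sorted-by options.
dead = sorted(n["name"] for n in nodes.values() if n["ilom_state"] == "dead")
print(dead)  # ['ovcacn07r1']
```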
4.1.3.2 Sorting
Typically, when using the list command, you may wish to sort information in a way that makes it easier to view items of particular interest. This is achieved using the --sorted-by and --sorted-order options in conjunction with the command. When using the --sorted-by option, you must specify the column name against which the sort should be applied. You can use the --sorted-order option to control the direction of the sort. This option should be followed either with ASC for an ascending sort, or DES for a descending sort. If this option is not specified, the default sort order is ascending.
For example, to sort a view of compute nodes based on the status of the provisioning for each compute node, you may do the following:
PCA> list compute-node --sorted-by Provisioning_State --sorted-order ASC
Compute_Node IP_Address   Provisioning_Status ILOM_MAC          Provisioning_State
------------ ----------   ------------------- --------          ------------------
ovcacn08r1   192.168.4.9  RUNNING             00:10:e0:65:2f:b7 dead
ovcacn28r1   192.168.4.10 RUNNING             00:10:e0:62:31:81 initializing_stage_wait_for_hmp
ovcacn10r1   192.168.4.7  RUNNING             00:10:e0:65:2f:cf initializing_stage_wait_for_hmp
ovcacn30r1   192.168.4.8  RUNNING             00:10:e0:40:cb:59 running
ovcacn07r1   192.168.4.11 RUNNING             00:10:e0:62:ca:09 running
ovcacn26r1   192.168.4.12 RUNNING             00:10:e0:65:30:f5 running
ovcacn29r1   192.168.4.5  RUNNING             00:10:e0:31:49:1d running
ovcacn09r1   192.168.4.6  RUNNING             00:10:e0:65:2f:3f running
----------------
8 rows displayed

Status: Success
Note that you can use tab-completion with the --sorted-by option to easily obtain the options for different column names. See Section 4.1.1.1, “Tab Completion” for more information.
4.1.3.3 Filtering
Some tables may contain a large number of rows that you are not interested in. To limit the output to items of particular interest, you can use the filtering capabilities that are built into the CLI. Filtering is achieved using a combination of the --filter-column and --filter options. The --filter-column option must be followed by the column name, while the --filter option is followed by the specific text that should be matched to form the filter. The text specified for a --filter may contain wildcard characters; if it does not, it must be an exact match. Filtering does not currently support regular expressions or partial matches.
For example, to view only the compute nodes that have a Provisioning state equivalent to 'dead', you could use the following filter:
PCA> list compute-node --filter-column Provisioning_State --filter dead
Compute_Node IP_Address   Provisioning_Status ILOM_MAC          Provisioning_State
------------ ----------   ------------------- --------          ------------------
ovcacn09r1   192.168.4.10 DEAD                00:10:e0:0f:55:cb dead
ovcacn11r1   192.168.4.9  DEAD                00:10:e0:0f:57:93 dead
ovcacn14r1   192.168.4.7  DEAD                00:10:e0:46:9e:45 dead
ovcacn36r1   192.168.4.11 DEAD                00:10:e0:0f:5a:9f dead
----------------
4 rows displayed

Status: Success
Note that you can use tab-completion with the --filter-column option to easily obtain the options for different column names. See Section 4.1.1.1, “Tab Completion” for more information.
4.1.4 Internal CLI Help
The CLI includes its own internal help system. This is triggered by issuing the help command:
PCA> help

Documented commands (type help <topic>):
========================================
add        create       diagnose  list         rerun  start
backup     delete       get       remove       set    stop
configure  deprovision  help      reprovision  show   update

Undocumented commands:
======================
EOF  exit  q  quit  shell
The help system displays all of the available commands that are supported by the CLI. These are organized into 'Documented commands' and 'Undocumented commands'. Undocumented commands are usually commands that are not specific to the management of the Oracle Private Cloud Appliance, but are mostly discussed within this documentation. Note that more detailed help can be obtained for any documented command by appending the name of the command to the help query. For example, to obtain the help documentation specific to the list command, you can do the following:
PCA> help list

Usage: pca-admin list <Command Target> [OPTS]

Command Targets:
    compute-node       List computer node.
    config-error       List configuration errors.
    lock               List lock.
    management-node    List management node.
    mgmt-switch-port   List management switch port.
    network            List active networks.
    network-port       List network port.
    network-switch     List network switch.
    task               List task.
    tenant-group       List tenant-group.
    update-task        List update task.
    uplink-port        List uplink port.
    uplink-port-group  List uplink port group.

Options:
    --json                        Display the output in json format.
    --less                        Display output in the less pagination mode.
    --more                        Display output in the more pagination mode.
    --tee=OUTPUTFILENAME          Export output to a file.
    --sorted-by=SORTEDBY          Sorting the table by a column.
    --sorted-order=SORTEDORDER    Sorting order.
    --filter-column=FILTERCOLUMN  Table column that needs to be filtered.
    --filter=FILTER               filter criterion
You can drill down further into the help system for most commands by also appending the command target onto your help query:
PCA> help reprovision compute-node

Usage: reprovision compute-node <compute node name> [options]

Example: reprovision compute-node ovcacn11r1

Description: Reprovision a compute node.
Finally, if you submit a help query for something that doesn't exist, the help system generates an error and automatically attempts to prompt you with alternative candidates:
PCA> list ta
Status: Failure
Error Message: Error (MISSING_TARGET_000): Missing command target for command: list.
Command targets can be: ['update-task', 'uplink-port-group', 'config-error', 'network',
'lock', 'network-port', 'tenant-group', 'network-switch', 'task', 'compute-node',
'uplink-port', 'mgmt-switch-port', 'management-node'].
4.2 CLI Commands
- 4.2.1 add compute-node
- 4.2.2 add initiator
- 4.2.3 add network
- 4.2.4 add network-to-tenant-group
- 4.2.5 add nfs-exception
- 4.2.6 add node-pool
- 4.2.7 add node-pool-node
- 4.2.8 backup
- 4.2.9 configure vhbas
- 4.2.10 create iscsi-storage
- 4.2.11 create lock
- 4.2.12 create network
- 4.2.13 create nfs-storage
- 4.2.14 create kube-cluster
- 4.2.15 create oci-backup
- 4.2.16 create oci-target
- 4.2.17 create tenant-group
- 4.2.18 create uplink-port-group
- 4.2.19 delete config-error
- 4.2.20 delete iscsi-storage
- 4.2.21 delete kube-cluster
- 4.2.22 delete lock
- 4.2.23 delete network
- 4.2.24 delete nfs-storage
- 4.2.25 delete oci-backup
- 4.2.26 delete oci-target
- 4.2.27 delete task
- 4.2.28 delete tenant-group
- 4.2.29 delete uplink-port-group
- 4.2.30 deprovision compute-node
- 4.2.31 diagnose
- 4.2.32 get log
- 4.2.33 list
- 4.2.34 remove compute-node
- 4.2.35 remove initiator
- 4.2.36 remove network
- 4.2.37 remove network-from-tenant-group
- 4.2.38 remove nfs exceptions
- 4.2.39 remove node-pool
- 4.2.40 remove node-pool-node
- 4.2.41 reprovision
- 4.2.42 rerun
- 4.2.43 set system-property
- 4.2.44 set kube-dns
- 4.2.45 set kube-load-balancer
- 4.2.46 set kube-master-pool
- 4.2.47 set kube-network
- 4.2.48 set kube-vm-shape
- 4.2.49 set kube-worker-pool
- 4.2.50 show
- 4.2.51 start
- 4.2.52 start kube-cluster
- 4.2.53 stop
- 4.2.54 stop kube-cluster
- 4.2.55 update appliance
- 4.2.56 update password
- 4.2.57 update compute-node
This section describes all of the documented commands available via the CLI.
Note that there are slight differences in the CLI commands available on Ethernet-based systems and InfiniBand-based systems. If you issue a command that is not available on your specific architecture, the command fails.
4.2.1 add compute-node
Adds a compute node to an existing tenant group. To create a new tenant group, see Section 4.2.17, “create tenant-group”.
Syntax
add compute-node node tenant-group-name [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]

where tenant-group-name is the name of the tenant group you wish to add one or more compute nodes to, and node is the name of the compute node that should be added to the selected tenant group.
Description
Use the add compute-node command to add the required compute nodes to a tenant group you created. If a compute node is currently part of another tenant group, it is first removed from that tenant group. If custom networks are already associated with the tenant group, the newly added server is connected to those networks as well.
During add compute-node operations, Kubernetes cluster operations should not be underway or started. If existing Kubernetes clusters are in the tenant group, there will be a period after the compute node is added and the K8S_Private network is connected during which the existing Kubernetes private cluster networks are extended. The Kubernetes private network extension is done asynchronously, outside of the compute-node add.
Use the command add network-to-tenant-group to associate a custom network with a tenant group.
Options
The following table shows the available options for this command.
Option | Description
---|---
--json | Return the output of the command in JSON format.
--less | Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output.
--more | Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only.
--tee=OUTPUTFILENAME | When returning the output of the command, also write it to the specified output file.
Examples
PCA> add compute-node ovcacn09r1 myTenantGroup
Status: Success
4.2.2 add initiator
Adds an initiator to an iSCSI LUN. This allows you to control access to the iSCSI LUN shares you created on the internal ZFS storage appliance.
Syntax
add initiator initiator-IQN LUN-name [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]

where LUN-name is the name of the iSCSI LUN share to which you are granting access using an initiator.
Description
Use the add initiator command to add an initiator to an iSCSI LUN. This command creates an initiator with the provided IQN in the ZFS storage appliance and adds it to the initiator group associated with an iSCSI share.
Options
The following table shows the available options for this command.
Option | Description
---|---
initiator-IQN | List the initiator IQN from the virtual machine you want to have access to the LUN. Only virtual machines within the same subnet/network can have access to the filesystem.
LUN-name | Specify the LUN you want to make available using an initiator.
--json | Return the output of the command in JSON format.
--less | Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output.
--more | Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only.
--tee=OUTPUTFILENAME | When returning the output of the command, also write it to the specified output file.
Examples
PCA> add initiator iqn.company.com myLUN
Status: Success
4.2.3 add network
Connects a server node to an existing network. To create a new custom network, see Section 4.2.12, “create network”.
Syntax
add network network-name node [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]

where network-name is the name of the network you wish to connect one or more servers to, and node is the name of the server node that should be connected to the selected network.
Description
Use the add network command to connect the required server nodes to a custom network you created. When you set up custom networks between your servers, you create the network first, and then add the required servers to the network. Use the create network command to configure additional custom networks.
Options
The following table shows the available options for this command.
Option | Description
---|---
--json | Return the output of the command in JSON format.
--less | Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output.
--more | Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only.
--tee=OUTPUTFILENAME | When returning the output of the command, also write it to the specified output file.
Examples
PCA> add network MyNetwork ovcacn09r1
Status: Success
4.2.4 add network-to-tenant-group
Associates a custom network with an existing tenant group. To create a new tenant group, see Section 4.2.17, “create tenant-group”. To create a new custom network, see Section 4.2.12, “create network”.
Syntax
add network-to-tenant-group network-name tenant-group-name [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]

where network-name is the name of an existing custom network, and tenant-group-name is the name of the tenant group you wish to associate the custom network with.
Description
Use the add network-to-tenant-group command to connect all member servers of a tenant group to a custom network. The custom network connection is configured when a server joins the tenant group, and unconfigured when a server is removed from the tenant group.
This command involves verification steps that are performed in the background. Consequently, even though output is returned and you regain control of the CLI, certain operations continue to run for some time.
Options
The following table shows the available options for this command.
Option | Description
---|---
--json | Return the output of the command in JSON format.
--less | Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output.
--more | Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only.
--tee=OUTPUTFILENAME | When returning the output of the command, also write it to the specified output file.
Examples
PCA> add network-to-tenant-group myPublicNetwork myTenantGroup
Validating servers in the tenant group... This may take some time.
The job for sync all nodes in tenant group with the new network myPublicNetwork has been submitted.
Please look into "/var/log/ovca.log" and "/var/log/ovca-sync.log" to monitor the progress.
Status: Success
4.2.5 add nfs-exception
Adds an NFS exception to the allowed clients list for an NFS share. This allows you to control access to the internal ZFS storage appliance by granting exceptions to particular groups of users.
Syntax
add nfs-exception nfs-share-name network-or-IP-address [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]

where nfs-share-name is the name of the NFS share to which you are granting access using exceptions.
Description
Use the add nfs-exception command to grant a client access to the NFS share.
Options
The following table shows the available options for this command.
Option | Description
---|---
network-or-IP-address | List the IP address or CIDR you want to have access to the share.
--json | Return the output of the command in JSON format.
--less | Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output.
--more | Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only.
--tee=OUTPUTFILENAME | When returning the output of the command, also write it to the specified output file.
Examples
PCA> add nfs-exception MyNFSshare 172.16.4.0/24
Status: Success
4.2.6 add node-pool
Adds a node pool to a Kubernetes cluster. When a cluster is first built, there are two node pools: master and worker. Additional worker node pools can be created. This is useful when a cluster needs worker nodes with more (or less) CPU and memory, or when you need to create the boot disks in an alternate repository.
The add node-pool software command is no longer supported. Kubernetes functions are now available through Oracle Cloud Native Environment.
Syntax
add node-pool cluster-name node-pool-name cpus memory repository virtual-appliance [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]

where cluster-name is the name of the Kubernetes cluster where you wish to add a node pool.
Description
Use the add node-pool command to add a node pool to the Kubernetes cluster. A new node pool can be in a different repository than the original cluster and can use a different virtual appliance if there is more than one available. The number of CPUs and memory must be within valid ranges.
Options
The following table shows the available options for this command.
Option | Description
---|---
node-pool-name | Choose a name for the node pool you want to add. Once created, the new node pool is empty. See Section 4.2.7, “add node-pool-node”.
cpus | Specify the number of CPUs. Node pools can have between 1 and 24 CPUs.
memory | Specify the amount of memory. Node pools can have between 8 and 393 GB of memory.
repository | Enter the repository that contains the virtual appliance to be used and that will be used for the virtual machine boot disks. A cluster can have node pools in multiple repositories as long as they are all attached to all of the nodes in the tenant group. If not specified, the repository used by the cluster is assumed. Note that tab completion on this field returns the default repository, not a full list of storage repositories available in Oracle VM.
virtual-appliance | Enter one of the pre-configured virtual appliance names. If not specified, the virtual appliance name used by the cluster is assumed. A new node pool can use a different virtual appliance from the original cluster, if there is more than one virtual appliance available. You must add the virtual appliances to a storage repository before they can be used.
Examples
PCA> add node-pool MyCluster np0 1 8192
Status: Success
4.2.7 add node-pool-node
Adds a node to a Kubernetes cluster node pool. A host name is only required for a static network configuration. This command is used to scale up an existing worker node pool, or to replace a master node that was previously removed.
The add node-pool-node software command is no longer supported. Kubernetes functions are now available through Oracle Cloud Native Environment.
Syntax
add node-pool-node cluster-name node-pool-name hostname [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]

where cluster-name is the name of the Kubernetes cluster where you wish to add a node.
Description
Use the add node-pool-node command to add a node to a node pool in the Kubernetes cluster. This command starts the node through an asynchronous job. Progress can be viewed through the show node-pool-node or list node-pool-node commands.
Options
The following table shows the available options for this command.
Option | Description
---|---
node-pool-name | Choose the node pool where you want to add a node.
hostname | For a static network, a host name is required, and that host name must be resolvable on the static network. For DHCP, this command does not require a host name unless you are replacing a master node. In the case of a master node replacement, you must use the name of an existing host. You can determine the existing host names using the list node-pool-node command.
States
The following table shows the available states for this command. All states apply to worker nodes; some states also apply to master nodes.
State | Description
---|---
CONFIGURED | This state is seen only in nodes in the master and worker node pools and typically only while the cluster is in CONFIGURED or BUILDING state. A node in the master node pool can return to the CONFIGURED state when a cluster is AVAILABLE if the master node is temporarily removed from the cluster in order to be re-built.
SUBMITTED | Awaiting resources to start building.
SUBMITTED | Building the node.
BUILDING | Building the virtual machine and applying settings.
BUILDING | Joining the Kubernetes control plane.
STOPPING | Stopping and removing the VM.
STOPPING | Stopping the node. The node will first be removed from the Kubernetes cluster, then the virtual machine will be stopped and removed from Oracle VM.
AVAILABLE | The node finished the build process.
AVAILABLE | A master node in this state is being used to interact with the Kubernetes cluster.
ERROR | An error occurred with the node while it was being built. The node should be removed after the error is understood.
ERROR | An error occurred while the virtual machine was being built. Checking the error message helps identify the cause.
ERROR | An error occurred while the virtual machine was joining the Kubernetes control plane. Consult with the Kubernetes administrator on potential Kubernetes issues.
Examples
PCA> add node-pool-node MyCluster np0 myHost_1
Status: Success
4.2.8 backup
Triggers a manual backup of the Oracle Private Cloud Appliance.
The backup command can only be executed from the active management node; not from the standby management node.
Syntax
backup [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]
Description
Use the backup command to initiate a backup task outside of the usual cron schedule. The backup task performs a full backup of the Oracle Private Cloud Appliance as described in Section 1.6, “Oracle Private Cloud Appliance Backup”. The CLI command does not monitor the progress of the backup task itself, and exits immediately after triggering the task, returning the task ID and name, its initial status, its progress and start time. This command must only ever be run on the active management node.
You can use the show task command to view the status of the task after you have initiated the backup. See Example 4.74, “Show Task” for more information.
Options
There are no further options for this command.
Examples
PCA> backup
The backup job has been submitted. Use "show task <task id>" to monitor the progress.

Task_ID         Status   Progress  Start_Time           Task_Name
-------         ------   --------  ----------           ---------
3769a13df448a2  RUNNING  None      06-05-2019 09:21:36  backup
---------------
1 row displayed

Status: Success
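Because the backup command returns immediately, a wrapper script would typically capture the task ID from the output and poll it later with show task. A minimal parsing sketch, using the output format shown in the example above (the helper is an assumption, not part of the CLI):

```python
import re

# Sample output of the "backup" command, as shown in the example above.
OUTPUT = """\
PCA> backup
The backup job has been submitted. Use "show task <task id>" to monitor the progress.
Task_ID         Status   Progress  Start_Time           Task_Name
-------         ------   --------  ----------           ---------
3769a13df448a2  RUNNING  None      06-05-2019 09:21:36  backup
---------------
1 row displayed
Status: Success
"""

def extract_task_id(output: str):
    """Pull the task ID from the backup output so it can be passed to
    "show task <task id>" for monitoring. Returns None if no row is found."""
    for line in output.splitlines():
        # A task row starts with a long hexadecimal ID followed by a status.
        m = re.match(r"^([0-9a-f]{8,})\s+\S+", line)
        if m:
            return m.group(1)
    return None

task_id = extract_task_id(OUTPUT)  # → "3769a13df448a2"
```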
4.2.9 configure vhbas
Configures vHBAs on compute nodes. This command is used only on systems with InfiniBand-based network architecture.
Syntax
configure vhbas { ALL | node } [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]
where node
is the compute node name for the
compute node for which the vHBAs should be configured, and
ALL
refers to all compute nodes provisioned in
your environment.
Description
This command creates the default virtual host bus adapters (vHBAs) for fibre channel connectivity, if they do not exist. Each of the four default vHBAs corresponds with a bond on the physical server. Each vHBA connection between a server node and Fabric Interconnect has a unique mapping. Use the configure vhbas command to configure the virtual host bus adapters (vHBA) on all compute nodes or a specific subset of them.
Options
The following table shows the available options for this command.
| Option | Description |
|---|---|
| { ALL \| node } | Configure vHBAs for all compute nodes or for one or more specific compute nodes. |
| --json | Return the output of the command in JSON format. |
| --less | Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output. |
| --more | Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only. |
| --tee=OUTPUTFILENAME | When returning the output of the command, also write it to the specified output file. |
Examples
PCA> configure vhbas ovcacn11r1 ovcacn14r1
Compute_Node  Status
------------  ------
ovcacn14r1    Succeeded
ovcacn11r1    Succeeded
----------------
2 rows displayed

Status: Success
4.2.10 create iscsi-storage
Creates a new iSCSI LUN share for a VM storage network.
Syntax
create iscsi-storage iscsi-LUN-name storage_network_name LUN_size storage-profile [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]
where iscsi-LUN-name
is the name of the iSCSI
LUN share you wish to create.
Description
Use this command to create an iSCSI LUN share associated with a particular network. This iSCSI LUN share can then be used by Virtual Machines that have access to the specified network.
Options
The following table shows the available options for this command.
| Option | Description |
|---|---|
| storage_network_name | The name of the storage network where you wish to create the share. |
| LUN_size | The size of the share in gigabytes, for example 100G. |
| storage-profile | Optionally, you can choose a pre-configured storage profile to maximize I/O performance for your environment. |
| --json | Return the output of the command in JSON format. |
| --less | Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output. |
| --more | Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only. |
| --tee=OUTPUTFILENAME | When returning the output of the command, also write it to the specified output file. |
Examples
PCA> create iscsi-storage my_iscsi_LUN myStorageNetwork 100G general
Status: Success
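Size arguments such as 100G are passed as plain strings, so a provisioning script may want to validate them before invoking the CLI. A small sketch; the accepted units (M, G, T) are an assumption for illustration, as the documentation only shows gigabyte values like 100G:

```python
import re

# Hypothetical helper: validate a size argument such as "100G" before
# passing it to "create iscsi-storage" or "create nfs-storage".
_SIZE_RE = re.compile(r"^(\d+)([MGT])$", re.IGNORECASE)

def parse_size(size: str):
    """Return (value, unit) for a size string like '100G', or raise
    ValueError if the string does not look like a valid size argument."""
    m = _SIZE_RE.match(size)
    if not m:
        raise ValueError(f"invalid size: {size!r} (expected e.g. '100G')")
    return int(m.group(1)), m.group(2).upper()

value, unit = parse_size("100G")  # → (100, 'G')
```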
4.2.11 create lock
Imposes a lock on certain appliance functionality.
Never use locks without consultation or specific instructions from Oracle Support.
Syntax
create lock { all_provisioning | cn_upgrade | database | install | manufacturing | mn_upgrade | provisioning | service } [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]
Description
Use the create lock command to temporarily disable certain appliance-level functions. The lock types are described in the Options.
Options
The following table shows the available options for this command.
| Option | Description |
|---|---|
| all_provisioning | Suspend all management node updates and compute node provisioning. Running tasks are completed and stop before the next stage in the process. A daemon checks for locks every few seconds. Once the lock has been removed, the update or provisioning processes continue from where they were halted. |
| cn_upgrade | Prevent all compute node upgrade operations. |
| database | Impose a lock on the databases during the management node update process. The lock is released after the update. |
| install | Placeholder lock type. Currently not used. |
| manufacturing | For usage in manufacturing. This lock type prevents the first boot process from initiating between reboots in the factory. As long as this lock is active, the |
| mn_upgrade | Prevent all management node upgrade operations. |
| provisioning | Prevent compute node provisioning. If a compute node provisioning process is running, it stops at the next stage. A daemon checks for locks every few seconds. Once the lock has been removed, all nodes advance to the next stage in the provisioning process. |
| service | Placeholder lock type. Behavior is identical to manufacturing lock. |
| --json | Return the output of the command in JSON format. |
| --less | Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output. |
| --more | Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only. |
| --tee=OUTPUTFILENAME | When returning the output of the command, also write it to the specified output file. |
Examples
PCA> create lock provisioning
Status: Success
4.2.12 create network
Creates a new custom network, private or public, at the appliance level. See Section 2.6, “Network Customization” for detailed information.
Syntax
create network network-name { rack_internal_network | external_network port-group | storage_network prefix netmask [zfs-ipaddress] | host_network port-group prefix netmask [route-destination gateway] } [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]
where network-name is the name of the custom network you wish to create.
If the network type is external_network, then the spine switch ports used for public connectivity must also be specified as port-group. For this purpose, you must first create an uplink port group. See Section 4.2.18, “create uplink-port-group” for more information.
If the network type is storage_network, then mandatory additional arguments are expected. Enter the prefix, netmask and the [zfs-ipaddress] that is assigned to the ZFS storage appliance network interface.
If the network type is host_network, then additional arguments are expected. The subnet arguments are mandatory; the routing arguments are optional.
- prefix: defines the fixed part of the host network subnet, depending on the netmask
- netmask: determines which part of the subnet is fixed and which part is variable
- [route-destination]: the external network location reachable from within the host network, which can be specified as a single valid IPv4 address or a subnet in CIDR notation
- [gateway]: the IP address of the gateway for the static route, which must be inside the host network subnet
The IP addresses of the hosts or physical servers are based on the prefix and netmask of the host network. The final octet is the same as the corresponding internal management IP address. The routing information from the create network command is used to configure a static route on each compute node that joins the host network.
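The address derivation described above (the host-network prefix combined with the final octet of the node's internal management IP) can be sketched as follows; the management address used here is a made-up example, and a /24-style three-octet prefix like the 10.10.10 in the examples below is assumed:

```python
def host_network_ip(prefix: str, mgmt_ip: str) -> str:
    """Combine the fixed host-network prefix with the final octet of a
    compute node's internal management IP, as described above.

    Assumes a three-octet prefix such as '10.10.10', matching the
    create network examples in this section.
    """
    final_octet = mgmt_ip.rsplit(".", 1)[-1]
    return f"{prefix}.{final_octet}"

# Hypothetical management address 192.168.4.27, host network prefix 10.10.10:
print(host_network_ip("10.10.10", "192.168.4.27"))  # → 10.10.10.27
```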
Options
The following table shows the available options for this command.
| Option | Description |
|---|---|
| { rack_internal_network \| external_network \| storage_network \| host_network } | The type of custom network to create. |
| external_network port-group | To create a custom network with external connectivity, you must specify the ports on the spine switch as well. The ports must belong to an uplink port group, and you provide the port group name as an argument in this command. |
| storage_network prefix netmask [zfs-ipaddress] | To create a storage network, you must specify the prefix, netmask, and the IP address that is assigned to the ZFS storage appliance network interface. |
| host_network port-group prefix netmask [route-destination gateway] | To create a custom host network, you must specify the ports on the spine switch as with an external network. The ports must belong to an uplink port group, and you provide the port group name as an argument in this command. In addition, the host network requires arguments for its subnet. The routing arguments are optional. All four arguments are explained in the Syntax section above. |
| --json | Return the output of the command in JSON format. |
| --less | Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output. |
| --more | Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only. |
| --tee=OUTPUTFILENAME | When returning the output of the command, also write it to the specified output file. |
Examples
PCA> create network MyPrivateNetwork rack_internal_network
Status: Success

PCA> create network MyPublicNetwork external_network myUplinkPortGroup
Status: Success

PCA> create network MyStorageNetwork storage_network 10.10.10 255.255.255.0 10.10.10.1
Status: Success
4.2.13 create nfs-storage
Creates a new NFS storage share for a VM storage network.
Syntax
create nfs-storage nfs-share-name storage_network_name share_size storage-profile [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]
where nfs-share-name
is the name of the NFS
share you wish to create.
Description
Use this command to create an NFS share associated with a particular network. This NFS share can then be used by Virtual Machines that have access to the specified network.
Options
The following table shows the available options for this command.
| Option | Description |
|---|---|
| storage_network_name | The name of the storage network where you wish to create the share. |
| share_size | The size of the share in gigabytes, for example 100G. |
| storage-profile | Optionally, you can choose a pre-configured storage profile to maximize I/O performance for your environment. |
| --json | Return the output of the command in JSON format. |
| --less | Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output. |
| --more | Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only. |
| --tee=OUTPUTFILENAME | When returning the output of the command, also write it to the specified output file. |
Examples
PCA> create nfs-storage myShare myStorageNetwork 100G general
Status: Success
4.2.14 create kube-cluster
Creates a new Kubernetes cluster definition. Once you create a cluster definition, you start that cluster to make it active. See Section 2.13.3, “Create a Kubernetes Cluster on a DHCP Network” and Section 2.13.4, “Create a Kubernetes Cluster on a Static Network” for detailed information.
The create kube-cluster software command is no longer supported. Kubernetes functions are now available through Oracle Cloud Native Environment.
Syntax
create kube-cluster cluster-name tenant-group external_network load_balancer_IP_address repository [virtual-appliance]
where cluster-name
is the name of the Kubernetes
cluster you wish to create.
Description
Use the create kube-cluster command to set up a new cluster configuration for a viable Kubernetes cluster.
Options
The following table shows the available options for this command.
| Option | Description |
|---|---|
| tenant-group | Choose the Oracle Private Cloud Appliance tenant group where you want to build your cluster. See Section 2.8, “Tenant Groups”. |
| external_network | Choose an external network to connect to the cluster master node. This network should provide access to your nameserver and DHCP server, and enables the master node to act as a gateway for the worker nodes if needed. |
| load_balancer_IP_address | The load balancer IP address is a floating IP address that uses Virtual Router Redundancy Protocol (VRRP) to fail over to other master nodes when the host of the address can no longer be contacted. The VRRP address is auto-selected. If other resources on the network use VRRP, assign a specific VRRP ID to the cluster to avoid VRRP collisions. See Section 4.2.45, “set kube-load-balancer”. |
| repository | Assign a storage repository to the cluster. Note that tab completion on this field returns the default repository, not a full list of storage repositories available in Oracle VM. |
| virtual-appliance | Optionally, you can enter a virtual appliance that you have downloaded, to use as a template for your Kubernetes cluster. See Section 2.13.2, “Prepare the Cluster Environment”. |
Examples
PCA> create kube-cluster MyCluster Rack1_ServerPool vm_public_vlan 10.10.10.250 Rack1-Repository
Kubernetes cluster configuration (MyCluster) created
Status: Success
4.2.15 create oci-backup
Creates an on-demand Oracle Cloud Infrastructure dataset backup. For more information, see Section 2.12.2, “Configuring a Manual Cloud Backup”.
Syntax
create oci-backup target-name [target-name-2]
where target-name
is the name of the Oracle Cloud Infrastructure
target where you wish to locate the backup.
Description
Use this command to create an Oracle Cloud Infrastructure backup. You can push a backup to multiple configured targets by listing multiple targets with this command. To configure targets, see Section 2.12.1, “Configuring the Cloud Backup Service”.
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
|
Return the output of the command in JSON format |
|
Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output. |
|
Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only. |
|
When returning the output of the command, also write it to the specified output file. |
Examples
PCA> create oci-backup OCItarget_1 OCItarget_2
Status: Success
4.2.16 create oci-target
Creates an Oracle Cloud Infrastructure target, which is the location on your Oracle Cloud Infrastructure tenancy where you want to store backups.
Syntax
create oci-target target-name target-location target-user target-bucket target-tenancy keyfile
where target-name
is the name of the Oracle Cloud Infrastructure
target where you wish to locate the backup.
Description
Use this command to create an Oracle Cloud Infrastructure target, and to send scheduled backups to that target. This command creates a cron job that pushes the backup to the configured target weekly. For more information see Section 2.12.1, “Configuring the Cloud Backup Service”.
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
|
The object storage endpoint. For a list of available endpoints, see https://docs.cloud.oracle.com/en-us/iaas/api/#/en/objectstorage/20160918/. |
|
A user that has access to your Oracle Cloud Infrastructure tenancy. |
|
A logical container for storing objects. Users or systems create buckets as needed within a region. To create a bucket for Cloud Backup feature, see Section 2.12.1, “Configuring the Cloud Backup Service”. |
|
Your Oracle Cloud Infrastructure tenancy where you wish to store backups. |
|
An API key required to access your Oracle Cloud Infrastructure tenancy. For more information see https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm. |
|
Return the output of the command in JSON format |
|
Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output. |
|
Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only. |
|
When returning the output of the command, also write it to the specified output file. |
Examples
PCA> create oci-target MyTarget https://objectstorage.us-oci.com ocid1.user.oc1..oos mybucket ocid1.tenancy.oc1..no /root/oci_api_key.pem
Status: Success
4.2.17 create tenant-group
Creates a new tenant group. With the tenant group, which exists at the appliance level, a corresponding Oracle VM server pool is created. See Section 2.8, “Tenant Groups” for detailed information.
Syntax
create tenant-group tenant-group-name [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]
where tenant-group-name
is the name of the
tenant group – and server pool – you wish to add to the
environment.
Description
Use the create tenant-group command to set up a new placeholder for a separate group of compute nodes. The purpose of the tenant group is to group a number of compute nodes in a separate server pool. When the tenant group exists, add the required compute nodes using the add compute-node command. If you want to connect all the members of a server pool to a custom network, use the command add network-to-tenant-group.
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
|
Return the output of the command in JSON format |
|
Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output. |
|
Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only. |
|
When returning the output of the command, also write it to the specified output file. |
Examples
PCA> create tenant-group myTenantGroup
Status: Success
4.2.18 create uplink-port-group
Creates a new uplink port group. Uplink port groups define which spine switch ports are used together and in which breakout mode they operate. For detailed information, refer to the Ethernet Appliance Uplink Configuration part of the Network Requirements section in the Oracle Private Cloud Appliance Installation Guide. This command is used only on systems with Ethernet-based network architecture.
Syntax
create uplink-port-group port-group-name ports { 10g-4x | 25g-4x | 40g | 100g } [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]
where port-group-name
is the name of the uplink
port group, which must be unique. An uplink port group consists
of a list of ports
operating in one of the
available breakout modes.
Description
Use the create uplink-port-group command to configure the ports reserved on the spine switches for external connectivity. Port 5 is configured and reserved for the default external network; ports 1-4 can be used for custom external networks. The ports can be used at their full 100Gbit bandwidth, at 40Gbit, or split with a breakout cable into four equal breakout ports: 4x 10Gbit or 4x 25Gbit. The port speed is reflected in the breakout mode of the uplink port group.
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
|
To create an uplink port group, you must specify which
ports on the spine switches belong to the port group.
Ports must always be specified in adjacent pairs. They
are identified by their port number and optionally,
separated by a colon, also their breakout port ID. Put
the port identifiers between quotes as a
space-separated list, for example: |
{
|
Set the breakout mode of the uplink port group. When a 4-way breakout cable is used, all four ports must be set to either 10Gbit or 25Gbit. When no breakout cable is used, the port speed for the uplink port group should be either 100Gbit or 40Gbit, depending on connectivity requirements. |
|
Return the output of the command in JSON format |
|
Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output. |
|
Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only. |
|
When returning the output of the command, also write it to the specified output file. |
Examples
PCA> create uplink-port-group myUplinkPortGroup '3:1 3:2' 10g-4x
Status: Success

PCA> create uplink-port-group myStoragePortGroup '1 2' 40g
Status: Success
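The quoted port lists in the examples above follow a simple grammar: each identifier is a port number, optionally followed by a colon and a breakout port ID. A sketch of how such a list breaks down (the helper is illustrative, not part of the CLI):

```python
def parse_port_group(ports: str):
    """Parse a quoted port list such as '3:1 3:2' or '1 2' into
    (port, breakout_port) pairs. breakout_port is None when the port
    is used whole, as in the 40g and 100g modes."""
    parsed = []
    for ident in ports.split():
        if ":" in ident:
            port, breakout = ident.split(":")
            parsed.append((int(port), int(breakout)))
        else:
            parsed.append((int(ident), None))
    return parsed

print(parse_port_group("3:1 3:2"))  # → [(3, 1), (3, 2)]
print(parse_port_group("1 2"))      # → [(1, None), (2, None)]
```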
4.2.19 delete config-error
The delete config-error command can be used to delete a failed configuration task from the configuration error database.
Syntax
delete config-error id [ --confirm ] [ --force ] [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]
where id
is the identifier for the
configuration error that you wish to delete from the database.
Description
Use the delete config-error command to remove
a configuration error from the configuration error database.
This is a destructive operation and you are prompted to confirm
whether or not you wish to continue, unless you use the
--confirm
flag to override the prompt.
Once a configuration error has been deleted from the database, you may not be able to re-run the configuration task associated with it. To obtain a list of configuration errors, use the list config-error command. See Example 4.49, “List All Configuration Errors” for more information.
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
|
Confirm flag for destructive command. Use this flag to disable the confirmation prompt when you run this command. |
|
Force the command to be executed even if the target is in an invalid state. This option is not risk-free and should only be used as a last resort. |
|
Return the output of the command in JSON format |
|
Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output. |
|
Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only. |
|
When returning the output of the command, also write it to the specified output file. |
Examples
PCA> delete config-error 87
************************************************************
 WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
************************************************************
Are you sure [y/N]:y

Status: Success
4.2.20 delete iscsi-storage
Deletes an iSCSI LUN share for a VM storage network.
Syntax
delete iscsi-storage
iscsi-LUN-name
where iscsi-LUN-name
is the name of the iSCSI
LUN share you wish to delete.
Description
Use this command to permanently delete an iSCSI LUN share.
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
|
Confirm flag for destructive command. Use this flag to disable the confirmation prompt when you run this command. |
|
Force the command to be executed even if the target is in an invalid state. This option is not risk-free and should only be used as a last resort. |
|
Return the output of the command in JSON format |
|
Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output. |
|
Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only. |
|
When returning the output of the command, also write it to the specified output file. |
Examples
PCA> delete iscsi-storage my_iscsi_LUN
Status: Success
4.2.21 delete kube-cluster
Deletes a Kubernetes cluster configuration. The cluster must be stopped and in a CONFIGURED state for this command to work. See Section 2.13.7, “Stop a Cluster”.
The delete kube-cluster software command is no longer supported. Kubernetes functions are now available through Oracle Cloud Native Environment.
Syntax
delete kube-cluster
cluster-name
where cluster-name
refers to the name of the
cluster configuration to be deleted.
Description
Use the delete kube-cluster command to delete a cluster configuration file and remove the cluster from the master configuration.
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
|
Confirm flag for destructive command. Use this flag to disable the confirmation prompt when you run this command. |
|
Force the command to be executed even if the target is in an invalid state. This option is not risk-free and should only be used as a last resort. |
|
Return the output of the command in JSON format |
|
Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output. |
|
Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only. |
|
When returning the output of the command, also write it to the specified output file. |
Examples
PCA> delete kube-cluster MyCluster
Status: Success
4.2.22 delete lock
Removes a lock that was previously imposed on certain appliance functionality.
Syntax
delete lock { all_provisioning | cn_upgrade | database | install | manufacturing | mn_upgrade | provisioning | service } [ --confirm ] [ --force ] [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]
Description
Use the delete lock command to re-enable the appliance-level functions that were locked earlier.
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
{
|
The type of lock to be removed. For a description of lock types, see Section 4.2.11, “create lock”. |
|
Confirm flag for destructive command. Use this flag to disable the confirmation prompt when you run this command. |
|
Force the command to be executed even if the target is in an invalid state. This option is not risk-free and should only be used as a last resort. |
|
Return the output of the command in JSON format |
|
Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output. |
|
Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only. |
|
When returning the output of the command, also write it to the specified output file. |
Example
PCA> delete lock provisioning
************************************************************
 WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
************************************************************
Are you sure [y/N]:y

Status: Success
4.2.23 delete network
Deletes a custom network. See Section 2.6, “Network Customization” for detailed information.
Syntax
delete network network-name [ --confirm ] [ --force ] [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]
where network-name
is the name of the custom
network you wish to delete.
Description
Use the delete network command to remove a previously created custom network from your environment. This is a destructive operation and you are prompted to confirm whether or not you wish to continue, unless you use the --confirm flag to override the prompt.
A custom network can only be deleted after all servers have been removed from it. See Section 4.2.36, “remove network”.
Default Oracle Private Cloud Appliance networks are protected and any attempt to delete them will fail.
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
|
Confirm flag for destructive command. Use this flag to disable the confirmation prompt when you run this command. |
|
Force the command to be executed even if the target is in an invalid state. This option is not risk-free and should only be used as a last resort. |
|
Return the output of the command in JSON format |
|
Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output. |
|
Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only. |
|
When returning the output of the command, also write it to the specified output file. |
Examples
PCA> delete network MyNetwork
************************************************************
WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
************************************************************
Are you sure [y/N]:y
Status: Success
PCA> delete network default_internal
Status: Failure
Error Message: Error (NETWORK_003): Exception while deleting network: default_internal.
['INVALID_NAME_002: Invalid Network name: default_internal. Name is reserved.']
4.2.24 delete nfs-storage
Deletes an NFS storage share for a VM storage network.
Syntax
delete nfs-storage
nfs-share-name
where nfs-share-name
is the name of the NFS
storage share you wish to delete.
Description
Use this command to permanently delete an NFS storage share.
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
|
Confirm flag for destructive command. Use this flag to disable the confirmation prompt when you run this command. |
|
Force the command to be executed even if the target is in an invalid state. This option is not risk-free and should only be used as a last resort. |
|
Return the output of the command in JSON format |
|
Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output. |
|
Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only. |
|
When returning the output of the command, also write it to the specified output file. |
Examples
PCA> delete nfs-storage myStorageShare
Status: Success
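In scripts, the confirmation prompt for destructive commands can be suppressed with the --confirm flag described above. The helper below only assembles a pca-admin argument list for such a call; it is an illustrative sketch, not part of the appliance software, and the object type and share name are examples.

```python
def build_delete_command(object_type, name, confirm=True, json_output=False):
    """Assemble a pca-admin argument list for a destructive delete.

    Illustrative only: the flag names (--confirm, --json) are the ones
    documented in this chapter; the helper itself is not part of the CLI.
    """
    argv = ["pca-admin", "delete", object_type, name]
    if confirm:
        argv.append("--confirm")  # suppress the interactive [y/N] prompt
    if json_output:
        argv.append("--json")     # request machine-readable output
    return argv

# Example: a non-interactive version of the delete nfs-storage call above
print(build_delete_command("nfs-storage", "myStorageShare"))
```

The returned list can be handed to a process runner on the management node; keeping command assembly separate from execution makes the destructive step easy to review before it runs.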
4.2.25 delete oci-backup
Deletes an Oracle Cloud Infrastructure dataset backup. For more information, see Section 2.12.3, “Deleting Cloud Backups”.
Syntax
delete oci-backup oci-backup-name

where oci-backup-name is the name of the Oracle Cloud Infrastructure backup you wish to delete.
Description
Use this command to permanently delete an Oracle Cloud Infrastructure backup.
Options
The following table shows the available options for this command.
Option | Description
---|---
--confirm | Confirm flag for destructive command. Use this flag to disable the confirmation prompt when you run this command.
--force | Force the command to be executed even if the target is in an invalid state. This option is not risk-free and should only be used as a last resort.
--json | Return the output of the command in JSON format.
--less | Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output.
--more | Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only.
--tee=OUTPUTFILENAME | When returning the output of the command, also write it to the specified output file.
Examples
PCA> delete oci-backup myOCIbackup
Status: Success
4.2.26 delete oci-target
Deletes an Oracle Cloud Infrastructure target from your ZFS storage appliance. For more information, see Section 2.12.4, “Deleting Oracle Cloud Infrastructure Targets”.
Syntax
delete oci-target oci-target-name

where oci-target-name is the name of the Oracle Cloud Infrastructure target you wish to delete.
Description
Use this command to permanently delete an Oracle Cloud Infrastructure target.
Options
The following table shows the available options for this command.
Option | Description
---|---
--confirm | Confirm flag for destructive command. Use this flag to disable the confirmation prompt when you run this command.
--force | Force the command to be executed even if the target is in an invalid state. This option is not risk-free and should only be used as a last resort.
--json | Return the output of the command in JSON format.
--less | Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output.
--more | Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only.
--tee=OUTPUTFILENAME | When returning the output of the command, also write it to the specified output file.
Examples
PCA> delete oci-target myOCItarget
Status: Success
4.2.27 delete task
The delete command can be used to delete a task from the database.
Syntax
delete task id [ --confirm ] [ --force ] [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]

where id is the identifier for the task that you wish to delete from the database.
Description
Use the delete task command to remove a task from the task database. This is a destructive operation and you are prompted to confirm whether or not you wish to continue, unless you use the --confirm flag to override the prompt.
Options
The following table shows the available options for this command.
Option | Description
---|---
--confirm | Confirm flag for destructive command. Use this flag to disable the confirmation prompt when you run this command.
--force | Force the command to be executed even if the target is in an invalid state. This option is not risk-free and should only be used as a last resort.
--json | Return the output of the command in JSON format.
--less | Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output.
--more | Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only.
--tee=OUTPUTFILENAME | When returning the output of the command, also write it to the specified output file.
Examples
PCA> delete task 341e7bc74f339c
************************************************************
WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
************************************************************
Are you sure [y/N]:y
Status: Success
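To clean up several tasks at once, the tabulated output of list task (Section 4.2.33) can be parsed for task identifiers. The sketch below assumes the column layout shown in that section; the FAILURE status value in the sample listing is an assumption added for illustration.

```python
SAMPLE_LISTING = """\
Task_ID         Status   Progress  Start_Time           Task_Name
-------         ------   --------  ----------           ---------
376a676449206a  SUCCESS  100       06-06-2019 09:00:01  backup
376ce11fc6c39c  FAILURE  40        06-06-2019 04:23:41  update_download_image
"""

def failed_task_ids(listing):
    """Collect Task_ID values whose Status column reads FAILURE."""
    ids = []
    for line in listing.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[1] == "FAILURE":
            ids.append(fields[0])
    return ids

# Each returned ID could then be passed to: delete task <id> --confirm
print(failed_task_ids(SAMPLE_LISTING))
```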
4.2.28 delete tenant-group
Deletes a tenant group. The default tenant group cannot be deleted. See Section 2.8, “Tenant Groups” for detailed information.
Syntax
delete tenant-group tenant-group-name [ --confirm ] [ --force ] [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]

where tenant-group-name is the name of the tenant group – and server pool – you wish to remove from the environment.
Description
Use the delete tenant-group command to remove a previously created, non-default tenant group from your environment. All servers must be removed from the tenant group before it can be deleted. When the tenant group is deleted, the server pool file system is removed from the internal ZFS storage.
This is a destructive operation and you are prompted to confirm whether or not you wish to continue, unless you use the --confirm flag to override the prompt.
Options
The following table shows the available options for this command.
Option | Description
---|---
--confirm | Confirm flag for destructive command. Use this flag to disable the confirmation prompt when you run this command.
--force | Force the command to be executed even if the target is in an invalid state. This option is not risk-free and should only be used as a last resort.
--json | Return the output of the command in JSON format.
--less | Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output.
--more | Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only.
--tee=OUTPUTFILENAME | When returning the output of the command, also write it to the specified output file.
Examples
PCA> delete tenant-group myTenantGroup
************************************************************
WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
************************************************************
Are you sure [y/N]:y
Status: Success
4.2.29 delete uplink-port-group
Deletes an uplink port group. See Section 4.2.18, “create uplink-port-group” for more information about the use of uplink port groups. This command is used only on systems with Ethernet-based network architecture.
Syntax
delete uplink-port-group port-group-name [ --confirm ] [ --force ] [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]

where port-group-name is the name of the uplink port group you wish to remove from the environment.
Description
Use the delete uplink-port-group command to remove a previously created uplink port group from your environment. If the uplink port group is used in the configuration of a network, this network must be deleted before the uplink port group can be deleted. Otherwise the delete command will fail.
This is a destructive operation and you are prompted to confirm whether or not you wish to continue, unless you use the --confirm flag to override the prompt.
Options
The following table shows the available options for this command.
Option | Description
---|---
--confirm | Confirm flag for destructive command. Use this flag to disable the confirmation prompt when you run this command.
--force | Force the command to be executed even if the target is in an invalid state. This option is not risk-free and should only be used as a last resort.
--json | Return the output of the command in JSON format.
--less | Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output.
--more | Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only.
--tee=OUTPUTFILENAME | When returning the output of the command, also write it to the specified output file.
Examples
PCA> delete uplink-port-group myUplinkPortGroup
************************************************************
WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
************************************************************
Are you sure [y/N]:y
Status: Success
4.2.30 deprovision compute-node
Cleanly removes a previously provisioned compute node's records in the various configuration databases. A provisioning lock must be applied in advance, otherwise the node is reprovisioned shortly after deprovisioning.
Syntax
deprovision compute-node compute-node-name [ --confirm ] [ --force ] [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]

where compute-node-name is the name of the compute node you wish to remove from the appliance configuration.
Description
Use the deprovision compute-node command to take an existing compute node out of the appliance in such a way that it can be repaired or replaced, and subsequently rediscovered as a brand new component. The compute node configuration records are removed cleanly from the system.
For deprovisioning to succeed, the compute node ILOM password must be the default Welcome1. If this is not the case, the operation may result in an error. This also applies to reprovisioning an existing compute node.
By default, the command does not continue if the compute node contains running VMs. The correct workflow is to impose a provisioning lock before deprovisioning a compute node, otherwise it is rediscovered and provisioned again shortly after deprovisioning has completed. When the appliance is ready to resume its normal operations, release the provisioning lock again. For details, see Section 4.2.11, “create lock” and Section 4.2.22, “delete lock”.
This is a destructive operation and you are prompted to confirm whether or not you wish to continue, unless you use the --confirm flag to override the prompt.
Options
The following table shows the available options for this command.
Option | Description
---|---
--confirm | Confirm flag for destructive command. Use this flag to disable the confirmation prompt when you run this command.
--force | Force the command to be executed even if the target is in an invalid state. This option is not risk-free and should only be used as a last resort.
--json | Return the output of the command in JSON format.
--less | Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output.
--more | Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only.
--tee=OUTPUTFILENAME | When returning the output of the command, also write it to the specified output file.
Examples
deprovision compute-node ovcacn29r1
************************************************************
WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
************************************************************
Are you sure [y/N]:y
Shutting down dhcpd: [ OK ]
Starting dhcpd: [ OK ]
Shutting down dnsmasq: [ OK ]
Starting dnsmasq: [ OK ]
Status: Success
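The description above prescribes a fixed order: impose a provisioning lock, deprovision the node, repair or replace it, then release the lock. A sketch of that sequence as an ordered command list; the 'provisioning' lock type name is an assumption to be verified against Section 4.2.11, “create lock”.

```python
def deprovision_workflow(node):
    """Return pca-admin commands for safely deprovisioning a compute node.

    Mirrors the documented workflow; the 'provisioning' lock type name is
    an assumption, not confirmed by this section.
    """
    return [
        "create lock provisioning",                      # stop automatic (re)provisioning first
        "deprovision compute-node %s --confirm" % node,  # remove the node's configuration records
        "delete lock provisioning --confirm",            # resume normal operations afterwards
    ]

for step in deprovision_workflow("ovcacn29r1"):
    print(step)
```

Running the deprovision step without the lock would let the node be rediscovered and provisioned again shortly afterwards, which is exactly what the ordering prevents.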
4.2.31 diagnose
Performs various diagnostic checks against the Oracle Private Cloud Appliance for support purposes.
The diagnose software command is deprecated. It will be removed in the next release of the Oracle Private Cloud Appliance Controller Software. Diagnostic functions are now available through a separate health check tool. See Section 2.10, “Health Monitoring” for more information.

The other diagnose commands remain functional.
Syntax
diagnose { ilom | software | hardware | rack-monitor } [ --force ] [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]
The following table describes each possible target of the diagnose command.
Command Target | Information Displayed
---|---
hardware | The …
ilom | The …
leaf-switch (Ethernet-based systems only) | The …
leaf-switch-resources (Ethernet-based systems only) | The …
link-status (Ethernet-based systems only) | The …
rack-monitor | The … If required, the results can be filtered by component type (cn, ilom, mn, etc.). Use tab completion to see all component types available.
software | The …
spine-switch (Ethernet-based systems only) | The …
spine-switch-resources (Ethernet-based systems only) | The …
switch-logs (Ethernet-based systems only) | The …
uplink-port-statistics (Ethernet-based systems only) | The …
Description
Use the diagnose command to initiate a diagnostic check of various components that make up Oracle Private Cloud Appliance.
A large part of the diagnostic information is stored in the inventory database and the monitor database. The inventory database is populated from the initial rack installation and keeps a history log of all the rack components. The monitor database stores rack component events detected by the monitor service. Some of the diagnostic commands are used to display the contents of these databases.
Options
The following table shows the available options for this command.
Option | Description
---|---
--force | Force the command to be executed even if the target is in an invalid state. This option is not risk-free and should only be used as a last resort.
--json | Return the output of the command in JSON format.
--less | Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output.
--more | Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only.
--tee=OUTPUTFILENAME | When returning the output of the command, also write it to the specified output file.
 | Returns the output of specific tests you designate, rather than running the full set of tests.
 | Defines the version of software the command runs against. The default version is 2.4.2, but you can run the command against another version that you specify here.
Examples
PCA> diagnose ilom
Checking ILOM health............please wait..
IP_Address     Status         Health_Details
----------     ------         --------------
192.168.4.129  Not Connected  None
192.168.4.128  Not Connected  None
192.168.4.127  Not Connected  None
192.168.4.126  Not Connected  None
192.168.4.125  Not Connected  None
192.168.4.124  Not Connected  None
192.168.4.123  Not Connected  None
192.168.4.122  Not Connected  None
192.168.4.121  Not Connected  None
192.168.4.120  Not Connected  None
192.168.4.101  OK             None
192.168.4.102  OK             None
192.168.4.105  Faulty         Mon Nov 25 14:17:37 2013 Power PS1 (Power Supply 1) A loss of AC input to a power supply has occurred. (Probability: 100, UUID: 2c1ec5fc-ffa3-c768-e602-ca12b86e3ea1, Part Number: 07047410, Serial Number: 476856F+1252CE027X, Reference Document: http://www.sun.com/msg/SPX86-8003-73)
192.168.4.107  OK             None
192.168.4.106  OK             None
192.168.4.109  OK             None
192.168.4.108  OK             None
192.168.4.112  OK             None
192.168.4.113  Not Connected  None
192.168.4.110  OK             None
192.168.4.111  OK             None
192.168.4.116  Not Connected  None
192.168.4.117  Not Connected  None
192.168.4.114  Not Connected  None
192.168.4.115  Not Connected  None
192.168.4.118  Not Connected  None
192.168.4.119  Not Connected  None
-----------------
27 rows displayed

Status: Success
PCA> diagnose software
PCA Software Acceptance Test runner utility
Test - 01 - OpenSSL CVE-2014-0160 Heartbleed bug Acceptance [PASSED]
Test - 02 - PCA package Acceptance [PASSED]
Test - 03 - Shared Storage Acceptance [PASSED]
Test - 04 - PCA services Acceptance [PASSED]
Test - 05 - PCA config file Acceptance [PASSED]
Test - 06 - Check PCA DBs exist Acceptance [PASSED]
Test - 07 - Compute node network interface Acceptance [PASSED]
Test - 08 - OVM manager settings Acceptance [PASSED]
Test - 09 - Check management nodes running Acceptance [PASSED]
Test - 10 - Check OVM manager version Acceptance [PASSED]
Test - 11 - OVM server model Acceptance [PASSED]
Test - 12 - Repositories defined in OVM manager Acceptance [PASSED]
Test - 13 - Management Nodes have IPv6 disabled [PASSED]
Test - 14 - Bash Code Injection Vulnerability bug Acceptance [PASSED]
Test - 15 - Check Oracle VM 3.4 xen security update Acceptance [PASSED]
Test - 16 - Test for ovs-agent service on CNs Acceptance [PASSED]
Test - 17 - Test for shares mounted on CNs Acceptance [PASSED]
Test - 18 - All compute nodes running Acceptance [PASSED]
Test - 19 - PCA version Acceptance [PASSED]
Test - 20 - Check support packages in PCA image Acceptance [PASSED]
Status: Success
PCA> diagnose leaf-switch
Switch      Health Check Name                   Status
------      -----------------                   ------
ovcasw15r1  CDP Neighbor Check                  Passed
ovcasw15r1  Virtual Port-channel check          Passed
ovcasw15r1  Management Node Port-channel check  Passed
ovcasw15r1  Leaf-Spine Port-channel check       Passed
ovcasw15r1  OSPF Neighbor Check                 Passed
ovcasw15r1  Multicast Route Check               Passed
ovcasw15r1  Leaf Filesystem Check               Passed
ovcasw15r1  Hardware Diagnostic Check           Passed
ovcasw16r1  CDP Neighbor Check                  Passed
ovcasw16r1  Virtual Port-channel check          Passed
ovcasw16r1  Management Node Port-channel check  Passed
ovcasw16r1  Leaf-Spine Port-channel check       Passed
ovcasw16r1  OSPF Neighbor Check                 Passed
ovcasw16r1  Multicast Route Check               Passed
ovcasw16r1  Leaf Filesystem Check               Passed
ovcasw16r1  Hardware Diagnostic Check           Passed
-----------------
16 rows displayed

Status: Success
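For monitoring scripts, the acceptance-test output of diagnose software can be tallied by its [PASSED] markers. A minimal sketch based on the sample output above; the [FAILED] marker is assumed to mirror [PASSED] and is not shown in this section's examples.

```python
import re

def summarize_acceptance(output):
    """Count [PASSED] and [FAILED] markers in diagnose software output."""
    passed = len(re.findall(r"\[PASSED\]", output))
    failed = len(re.findall(r"\[FAILED\]", output))
    return passed, failed

# Two lines excerpted from the sample output above
sample = ("Test - 01 - OpenSSL CVE-2014-0160 Heartbleed bug Acceptance [PASSED] "
          "Test - 02 - PCA package Acceptance [PASSED]")
print(summarize_acceptance(sample))
```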
4.2.32 get log
Retrieves the log files from the selected components and saves them to a directory on the rack's shared storage.
Currently the spine or data switch is the only target component supported with this command.
Syntax
get log component [ --confirm ] [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]

where component is the identifier of the rack component from which you want to retrieve the log files.
Description
Use the get log command to collect the log files of a given rack component or set of rack components of a given type. The command output indicates where the log files are saved: this is a directory on the internal storage appliance in a location that both management nodes can access. From this location you can examine the logs or copy them to your local system so they can be included in your communication with Oracle.
Options
The following table shows the available options for this command.
Option | Description
---|---
--json | Return the output of the command in JSON format.
--less | Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output.
--more | Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only.
--tee=OUTPUTFILENAME | When returning the output of the command, also write it to the specified output file.
Examples
Note that the CLI uses 'data_switch' as the internal alias for a spine Cisco Nexus 9336C-FX2 Switch.
PCA> get log data_switch
Log files copied to: /nfs/shared_storage/incoming
Status: Success
4.2.33 list
The list command can be used to list the different components and tasks within the Oracle Private Cloud Appliance. The output displays information relevant to each component or task. Output from the list command is usually tabulated so that different fields appear as columns for each row of information relating to the command target.
Syntax
list { backup-task | compute-node | config-error | iscsi-storage | kube-cluster | lock
     | management-node | mgmt-switch-port | network | network-card | network-port
     | network-switch | nfs-storage | node-pool | node-pool-node | oci-backup
     | oci-target | ofm-network | opus-port | server-profile | storage-network
     | storage-profile | task | tenant-group | update-task | uplink-port
     | uplink-port-group | wwpn-info }
     [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]
     [ [ --sorted-by SORTEDBY | --sorted-order SORTEDORDER ] ]
     [ [ --filter-column FILTERCOLUMN | --filter FILTER ] ]
where SORTEDBY is one of the table column names returned for the selected command target, and SORTEDORDER can be either ASC for an ascending sort, or DES for a descending sort. See Section 4.1.3.2, “Sorting” for more information.

where FILTERCOLUMN is one of the table column names returned for the selected command target, and FILTER is the text that you wish to match to perform your filtering. See Section 4.1.3.3, “Filtering” for more information.
The following table describes each possible target of the list command.
Command Target | Information Displayed
---|---
backup-task | Displays basic information about all backup tasks.
compute-node | Displays basic information for all compute nodes installed.
config-error | Displays all configuration tasks that were not completed successfully and ended in an error.
iscsi-storage (Ethernet-based systems only) | Displays all iSCSI LUNs for storage.
kube-cluster (Ethernet-based systems only) | Displays all the Kubernetes clusters. Caution: This option is no longer supported.
lock | Displays all locks that have been imposed.
management-node | Displays basic information for both management nodes.
mgmt-switch-port | Displays connection information about every port in the Oracle Private Cloud Appliance environment belonging to the internal administration or management network. The ports listed can belong to a switch, a server node or any other connected rack component type.
network | Displays all networks configured in the environment.
network-card (InfiniBand-based systems only) | Displays information about the I/O modules installed in the Fabric Interconnects.
network-port | Displays the status of all ports on all I/O modules installed in the networking components.
network-switch (Ethernet-based systems only) | Displays basic information about all switches installed in the Oracle Private Cloud Appliance environment.
nfs-storage (Ethernet-based systems only) | Displays NFS shares for storage.
node-pool (Ethernet-based systems only) | Displays all the Kubernetes node pools. Caution: This option is no longer supported.
node-pool-node (Ethernet-based systems only) | Displays all the Kubernetes nodes. Caution: This option is no longer supported.
oci-backup | Displays all the Oracle Cloud Infrastructure backups.
oci-target | Displays all the Oracle Cloud Infrastructure targets.
ofm-network (InfiniBand-based systems only) | Displays network configuration, read directly from the Oracle Fabric Manager software on the Fabric Interconnects.
opus-port (InfiniBand-based systems only) | Displays connection information about every port of every Oracle Switch ES1-24 in the Oracle Private Cloud Appliance environment.
server-profile (InfiniBand-based systems only) | Displays a list of connectivity profiles for servers, as stored by the Fabric Interconnects. The profile contains essential networking and storage information for the server in question.
storage-network | Displays a list of known storage clouds on InfiniBand-based systems. The configuration of each storage cloud contains information about participating Fabric Interconnect ports and server vHBAs. Displays a list of known storage networks on Ethernet-based systems.
storage-profile (Ethernet-based systems only) | Displays all the storage profiles.
task | Displays a list of running, completed and failed tasks.
tenant-group | Displays all configured tenant groups. The list includes the default configuration as well as custom tenant groups.
update-task | Displays a list of all software update tasks that have been started on the appliance.
uplink-port (Ethernet-based systems only) | Displays information about spine switch port configurations for external networking.
uplink-port-group (Ethernet-based systems only) | Displays information about all uplink port groups configured for external networking.
wwpn-info (InfiniBand-based systems only) | Displays a list of all World Wide Port Names (WWPNs) for all ports participating in the Oracle Private Cloud Appliance Fibre Channel fabric. In the standard configuration each compute node has a vHBA in each of the four default storage clouds.
Note that you can use tab completion to help you correctly specify the object for the different command targets. You do not need to specify an object if the command target is system-properties or version.
Description
Use the list command to obtain tabulated listings of information about different components or activities within the Oracle Private Cloud Appliance. The list command can frequently be used to obtain identifiers that can be used in conjunction with many other commands to perform various actions or to obtain more detailed information about a specific component or task. The list command also supports sorting and filtering capabilities to allow you to order information or to limit information so that you are able to identify specific items of interest quickly and easily.
Options
The following table shows the available options for this command.
Option | Description
---|---
 | The command target to list information for.
--json | Return the output of the command in JSON format.
--less | Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output.
--more | Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only.
--tee=OUTPUTFILENAME | When returning the output of the command, also write it to the specified output file.
--sorted-by SORTEDBY | Sort the table by the values within a particular column in the table, specified by replacing SORTEDBY with the column name.
--sorted-order SORTEDORDER | Used to specify the sort order, which can either be ASC for an ascending sort or DES for a descending sort.
--filter-column FILTERCOLUMN | Filter the table for a value within a particular column in the table, specified by replacing FILTERCOLUMN with the column name.
--filter FILTER | The filter that should be applied to values within the column specified by the --filter-column option.
Examples
PCA> list management-node
Management_Node  IP_Address   Provisioning_Status  ILOM_MAC           Provisioning_State  Master
---------------  ----------   -------------------  --------           ------------------  ------
ovcamn05r1       192.168.4.3  RUNNING              00:10:e0:e9:1f:c9  running             None
ovcamn06r1       192.168.4.4  RUNNING              00:10:e0:e7:26:ad  running             Yes
----------------
2 rows displayed

Status: Success
PCA> list compute-node
Compute_Node  IP_Address    Provisioning_Status  ILOM_MAC           Provisioning_State
------------  ----------    -------------------  --------           ------------------
ovcacn10r1    192.168.4.7   RUNNING              00:10:e0:65:2f:4b  running
ovcacn08r1    192.168.4.5   RUNNING              00:10:e0:65:2f:f3  initializing_stage_wait_...
ovcacn09r1    192.168.4.10  RUNNING              00:10:e0:62:98:e3  running
ovcacn07r1    192.168.4.8   RUNNING              00:10:e0:65:2f:93  running
----------------
4 rows displayed

Status: Success
PCA> list tenant-group
Name              Default  State
----              -------  -----
Rack1_ServerPool  True     ready
myTenantGroup     False    ready
----------------
2 rows displayed

Status: Success
PCA> list network
Network_Name      Default  Type                   Trunkmode  Description
------------      -------  ----                   ---------  -----------
custom_internal   False    rack_internal_network  None       None
default_internal  True     rack_internal_network  None       None
storage_net       False    host_network           None       None
default_external  True     external_network       None       None
----------------
4 rows displayed

Status: Success
PCA> list network-port
Port  Switch      Type        State  Networks
----  ------      ----        -----  --------
1     ovcasw22r1  40G         up     storage_net
2     ovcasw22r1  40G         up     storage_net
3     ovcasw22r1  auto-speed  down   None
4     ovcasw22r1  auto-speed  down   None
5:1   ovcasw22r1  10G         up     default_external
5:2   ovcasw22r1  10G         down   default_external
5:3   ovcasw22r1  10G         down   None
5:4   ovcasw22r1  10G         down   None
1     ovcasw23r1  40G         up     storage_net
2     ovcasw23r1  40G         up     storage_net
3     ovcasw23r1  auto-speed  down   None
4     ovcasw23r1  auto-speed  down   None
5:1   ovcasw23r1  10G         up     default_external
5:2   ovcasw23r1  10G         down   default_external
5:3   ovcasw23r1  10G         down   None
5:4   ovcasw23r1  10G         down   None
-----------------
16 rows displayed

Status: Success
Note that the CLI uses the internal alias mgmt-switch-port. In this example the command displays all internal Ethernet connections from compute nodes to the Cisco Nexus 9348GC-FXP Switch. A wildcard is used in the --filter option.
PCA> list mgmt-switch-port --filter-column=Hostname --filter=*cn*r1
Dest  Dest_Port  Hostname    Key         MGMTSWITCH  RACK  RU  Src_Port  Type
----  ---------  --------    ---         ----------  ----  --  --------  ----
07    Net-0      ovcacn07r1  CISCO-1-5   CISCO-1     1     7   5         compute
08    Net-0      ovcacn08r1  CISCO-1-6   CISCO-1     1     8   6         compute
09    Net-0      ovcacn09r1  CISCO-1-7   CISCO-1     1     9   7         compute
10    Net-0      ovcacn10r1  CISCO-1-8   CISCO-1     1     10  8         compute
11    Net-0      ovcacn11r1  CISCO-1-9   CISCO-1     1     11  9         compute
12    Net-0      ovcacn12r1  CISCO-1-10  CISCO-1     1     12  10        compute
13    Net-0      ovcacn13r1  CISCO-1-11  CISCO-1     1     13  11        compute
14    Net-0      ovcacn14r1  CISCO-1-12  CISCO-1     1     14  12        compute
34    Net-0      ovcacn34r1  CISCO-1-15  CISCO-1     1     34  15        compute
35    Net-0      ovcacn35r1  CISCO-1-16  CISCO-1     1     35  16        compute
36    Net-0      ovcacn36r1  CISCO-1-17  CISCO-1     1     36  17        compute
37    Net-0      ovcacn37r1  CISCO-1-18  CISCO-1     1     37  18        compute
38    Net-0      ovcacn38r1  CISCO-1-19  CISCO-1     1     38  19        compute
39    Net-0      ovcacn39r1  CISCO-1-20  CISCO-1     1     39  20        compute
40    Net-0      ovcacn40r1  CISCO-1-21  CISCO-1     1     40  21        compute
41    Net-0      ovcacn41r1  CISCO-1-22  CISCO-1     1     41  22        compute
42    Net-0      ovcacn42r1  CISCO-1-23  CISCO-1     1     42  23        compute
26    Net-0      ovcacn26r1  CISCO-1-35  CISCO-1     1     26  35        compute
27    Net-0      ovcacn27r1  CISCO-1-36  CISCO-1     1     27  36        compute
28    Net-0      ovcacn28r1  CISCO-1-37  CISCO-1     1     28  37        compute
29    Net-0      ovcacn29r1  CISCO-1-38  CISCO-1     1     29  38        compute
30    Net-0      ovcacn30r1  CISCO-1-39  CISCO-1     1     30  39        compute
31    Net-0      ovcacn31r1  CISCO-1-40  CISCO-1     1     31  40        compute
32    Net-0      ovcacn32r1  CISCO-1-41  CISCO-1     1     32  41        compute
33    Net-0      ovcacn33r1  CISCO-1-42  CISCO-1     1     33  42        compute
-----------------
25 rows displayed

Status: Success
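The --filter option in the example above uses shell-style wildcards: *cn*r1 matches compute-node hostnames only. The same matching rule can be reproduced client-side with Python's fnmatch, shown here purely to illustrate how the pattern selects rows.

```python
from fnmatch import fnmatch

# Hostnames as they appear in the listings in this section
hostnames = ["ovcacn07r1", "ovcamn05r1", "ovcasw22r1", "ovcacn29r1"]

# Same wildcard pattern as --filter=*cn*r1 in the example above:
# only names containing "cn" and ending in "r1" survive.
matches = [h for h in hostnames if fnmatch(h, "*cn*r1")]
print(matches)
```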
PCA> list task
Task_ID         Status   Progress  Start_Time           Task_Name
-------         ------   --------  ----------           ---------
376a676449206a  SUCCESS  100       06-06-2019 09:00:01  backup
376ce11fc6c39c  SUCCESS  100       06-06-2019 04:23:41  update_download_image
376a02cf798f68  SUCCESS  100       06-05-2019 21:00:02  backup
376c7c8afcc86a  SUCCESS  100       06-05-2019 09:00:01  backup
----------------
4 rows displayed

Status: Success
PCA> list uplink-port
Interface Name  Switch      Status  Admin_Status  PortChannel  Speed
--------------  ------      ------  ------------  -----------  -----
Ethernet1/1     ovcasw22r1  up      up            111          40G
Ethernet1/1     ovcasw23r1  up      up            111          40G
Ethernet1/2     ovcasw22r1  up      up            111          40G
Ethernet1/2     ovcasw23r1  up      up            111          40G
Ethernet1/3     ovcasw22r1  down    down          None         auto
Ethernet1/3     ovcasw23r1  down    down          None         auto
Ethernet1/4     ovcasw22r1  down    down          None         auto
Ethernet1/4     ovcasw23r1  down    down          None         auto
Ethernet1/5/1   ovcasw22r1  up      up            151          10G
Ethernet1/5/1   ovcasw23r1  up      up            151          10G
Ethernet1/5/2   ovcasw22r1  down    up            151          10G
Ethernet1/5/2   ovcasw23r1  down    up            151          10G
Ethernet1/5/3   ovcasw22r1  down    down          None         10G
Ethernet1/5/3   ovcasw23r1  down    down          None         10G
Ethernet1/5/4   ovcasw22r1  down    down          None         10G
Ethernet1/5/4   ovcasw23r1  down    down          None         10G
-----------------
16 rows displayed

Status: Success
PCA> list uplink-port-group
Port_Group_Name  Ports    Mode  Speed  Breakout_Mode  Enabled  State
---------------  -----    ----  -----  -------------  -------  -----
default_5_1      5:1 5:2  LAG   10g    10g-4x         True     (up)* Not all ports are up
default_5_2      5:3 5:4  LAG   10g    10g-4x         False    down
----------------
2 rows displayed

Status: Success
PCA> list config-error
ID  Module                     Host           Timestamp
--  ------                     ----           ---------
87  Management node password   192.168.4.4    Mon Jun 03 02:45:42 2019
54  MySQL management password  192.168.4.216  Mon Jun 03 02:44:54 2019
----------------
2 rows displayed

Status: Success
PCA> list storage-profile

Name        Type   Default
----        ----   -------
dbms_demo   iscsi  N
general     iscsi  Y
bkup_basic  iscsi  N
general     nfs    Y
bkup_basic  nfs    N
dbms_demo   nfs    N
----------------
6 rows displayed

Status: Success
4.2.34 remove compute-node
Removes a compute node from an existing tenant group.
Syntax
remove compute-node
node
tenant-group-name
[
--confirm
] [
--force
] [
--json
] [
--less
] [
--more
] [
--tee=OUTPUTFILENAME
]
where tenant-group-name
is the name of the
tenant group you wish to remove one or more compute nodes from,
and node
is the name of the compute node that
should be removed from the selected tenant group.
Description
Use the remove compute-node command to remove the required compute nodes from their tenant group. Use Oracle VM Manager to prepare the compute nodes first: make sure that virtual machines have been migrated away from the compute node, and that no storage repositories are presented. Custom networks associated with the tenant group are removed from the compute node, not from the tenant group.
This is a destructive operation and you are prompted to confirm
whether or not you wish to continue, unless you use the
--confirm
flag to override the prompt.
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
--confirm |
Confirm flag for destructive command. Use this flag to disable the confirmation prompt when you run this command. |
--force |
Force the command to be executed even if the target is in an invalid state. This option is not risk-free and should only be used as a last resort. |
--json |
Return the output of the command in JSON format. |
--less |
Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output. |
--more |
Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only. |
--tee=OUTPUTFILENAME |
When returning the output of the command, also write it to the specified output file. |
Examples
PCA> remove compute-node ovcacn09r1 myTenantGroup
************************************************************
WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
************************************************************
Are you sure [y/N]:y
Status: Success
4.2.35 remove initiator
Removes an initiator from an iSCSI LUN, thereby removing access to the iSCSI LUN from that initiator.
Syntax
remove initiator
initiator IQN
LUN-name
[
--confirm
] [
--force
] [
--json
] [
--less
] [
--more
] [
--tee=OUTPUTFILENAME
]
where LUN-name
is the name of the iSCSI LUN
share to which you are revoking access for the listed initiator.
Description
Use the remove initiator command to remove an initiator from an iSCSI LUN.
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
initiator IQN |
List the initiator IQN from the virtual machine that should no longer have access to the LUN. |
LUN-name |
Specify the LUN you want to remove the initiator from. |
--confirm |
Confirm flag for destructive command. Use this flag to disable the confirmation prompt when you run this command. |
--force |
Force the command to be executed even if the target is in an invalid state. This option is not risk-free and should only be used as a last resort. |
--json |
Return the output of the command in JSON format. |
--less |
Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output. |
--more |
Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only. |
--tee=OUTPUTFILENAME |
When returning the output of the command, also write it to the specified output file. |
Examples
PCA> remove initiator iqn.company.com myLUN
Status: Success
4.2.36 remove network
Disconnects a server node from a network.
Syntax
remove network
network-name
node
[
--confirm
] [
--force
] [
--json
] [
--less
] [
--more
] [
--tee=OUTPUTFILENAME
]
where network-name
is the name of the network
from which you wish to disconnect one or more servers, and
node
is the name of the server node that should
be disconnected from the selected network.
Description
Use the remove network command to disconnect
server nodes from a custom network you created. In case you want
to delete a custom network from your environment, you must first
disconnect all the servers from that network. Then use the
delete network command to delete the custom
network configuration. This is a destructive operation and you
are prompted to confirm whether or not you wish to continue,
unless you use the --confirm
flag to override
the prompt.
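The two-step teardown described above can be sketched as a short loop. The network and node names below are hypothetical, and the commands are echoed rather than executed, since they only make sense inside the pca-admin shell on a live appliance.

```shell
# Sketch: tear down a custom network in two steps.
# Step 1: disconnect every server from the custom network;
# Step 2: only then can the network itself be deleted.
network=MyNetwork
nodes='ovcacn08r1 ovcacn09r1'   # hypothetical list of connected servers

count=0
for node in $nodes; do
  echo "remove network $network $node --confirm"
  count=$((count + 1))
done
echo "delete network $network --confirm"
```

The --confirm flag suppresses the destructive-operation prompt, which is what makes this kind of scripted teardown practical.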
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
--confirm |
Confirm flag for destructive command. Use this flag to disable the confirmation prompt when you run this command. |
--force |
Force the command to be executed even if the target is in an invalid state. This option is not risk-free and should only be used as a last resort. |
--json |
Return the output of the command in JSON format. |
--less |
Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output. |
--more |
Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only. |
--tee=OUTPUTFILENAME |
When returning the output of the command, also write it to the specified output file. |
Examples
PCA> remove network MyNetwork ovcacn09r1
************************************************************
WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
************************************************************
Are you sure [y/N]:y
Status: Success
4.2.37 remove network-from-tenant-group
Removes a custom network from a tenant group.
Syntax
remove network-from-tenant-group
network-name
tenant-group-name
[
--confirm
] [
--force
] [
--json
] [
--less
] [
--more
] [
--tee=OUTPUTFILENAME
]
where network-name
is the name of a custom
network associated with a tenant group, and
tenant-group-name
is the name of the tenant
group you wish to remove the custom network from.
Description
Use the remove network-from-tenant-group command to break the association between a custom network and a tenant group. The custom network is unconfigured from all tenant group member servers.
This is a destructive operation and you are prompted to confirm
whether or not you wish to continue, unless you use the
--confirm
flag to override the prompt.
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
--confirm |
Confirm flag for destructive command. Use this flag to disable the confirmation prompt when you run this command. |
--force |
Force the command to be executed even if the target is in an invalid state. This option is not risk-free and should only be used as a last resort. |
--json |
Return the output of the command in JSON format. |
--less |
Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output. |
--more |
Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only. |
--tee=OUTPUTFILENAME |
When returning the output of the command, also write it to the specified output file. |
Examples
PCA> remove network-from-tenant-group myPublicNetwork myTenantGroup
************************************************************
WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
************************************************************
Are you sure [y/N]:y
Status: Success
4.2.38 remove nfs exceptions
Removes an NFS exception, thereby removing access to the NFS share from the listed machine.
Syntax
remove nfs-exception
nfs-share-name
network or IP address
[
--confirm
] [
--force
] [
--json
] [
--less
] [
--more
] [
--tee=OUTPUTFILENAME
]
where nfs-share-name
is the name of the NFS
share to which you are granting access using exceptions.
Description
Use the remove nfs-exception command to remove an nfs-exception from a share.
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
network or IP address |
List the IP address or CIDR that should no longer have access to the share. |
--confirm |
Confirm flag for destructive command. Use this flag to disable the confirmation prompt when you run this command. |
--force |
Force the command to be executed even if the target is in an invalid state. This option is not risk-free and should only be used as a last resort. |
--json |
Return the output of the command in JSON format. |
--less |
Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output. |
--more |
Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only. |
--tee=OUTPUTFILENAME |
When returning the output of the command, also write it to the specified output file. |
Examples
PCA> remove nfs-exception myNFSshare 172.16.4.0/24
Status: Success
4.2.39 remove node-pool
Removes a node pool definition from a Kubernetes cluster.
The remove node-pool
software command is no longer supported. Kubernetes
functions are now available through Oracle Cloud Native
Environment.
Syntax
remove node-pool
cluster-name
node-pool-name
where cluster-name
is the name of the Kubernetes
cluster from which you wish to remove a node pool.
Description
Use the remove node-pool command to remove a node pool from the Kubernetes cluster. The node pool must be empty before it can be removed. See Section 4.2.40, “remove node-pool-node”.
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
node-pool-name |
Choose the node pool you want to remove. A node pool must be empty before it can be removed. |
--force |
Force the command to be executed even if the target is in an invalid state or contains nodes. This option is not risk-free and should only be used as a last resort. In the case that there are nodes in the node pool, the command will attempt to gracefully remove workers from Kubernetes. The Kubernetes administrator should be notified of all worker nodes that were removed with this option. |
Examples
PCA> remove node-pool MyCluster np0
Status: Success
4.2.40 remove node-pool-node
Removes a node from the Kubernetes cluster and deletes the virtual machine.
The remove node-pool-node
software command is no longer supported.
Kubernetes functions are now available through Oracle Cloud Native
Environment.
Syntax
remove node-pool-node
cluster-name
node-pool-name
hostname
where cluster-name
is the name of the Kubernetes
cluster from which you wish to remove a node.
Description
Use the remove node-pool-node command to remove a node from the Kubernetes cluster. Once a node is removed from the Kubernetes cluster, the virtual machine is stopped and destroyed, and the node's configuration information is removed from the cluster.
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
node-pool-name |
Choose the node pool from which you want to remove a node. Nodes can be removed from any node pool. Only two nodes can be removed from the master node pool. |
hostname |
Enter the host name you want to remove from the node pool. |
--force |
If your first try to remove a master or worker node fails, perform a retry with the --force option. Force the command to be executed even if the target is in an invalid state. When completed, the Kubernetes administrator should be informed of the removed node, as it may be left in a Not Ready state in the Kubernetes cluster. If this is the case, the Kubernetes administrator must delete the node. This option is not risk-free and should only be used as a last resort. |
Examples
PCA> remove node-pool-node MyCluster np0 myHost_1
*************************************************************************************
WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
*************************************************************************************
Are you sure [y/N]:y
Node (myHost_1) removed
Status: Success
PCA> remove node-pool-node MyCluster master cluster_master_1
*************************************************************************************
WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
*************************************************************************************
Are you sure [y/N]:y
Node (cluster_master_1) removed
Status: Success
4.2.41 reprovision
The reprovision command can be used to trigger reprovisioning for a specified compute node within the Oracle Private Cloud Appliance.
Reprovisioning restores a compute node to a clean state. If a compute node was previously added to the Oracle VM environment and has active connections to storage repositories other than those on the internal ZFS storage, the external storage connections need to be configured again after reprovisioning.
Syntax
reprovision
{
compute-node
}
node
[
--json
] [
--less
] [
--more
] [
--tee=OUTPUTFILENAME
] [
--force
] [
--save-local-repo
]
where node
is the compute node name for the
compute node that should be reprovisioned.
Description
Use the reprovision command to reprovision a specified compute node. The provisioning process is described in more detail in Section 1.4, “Provisioning and Orchestration”.
The reprovision command triggers a task that is responsible for handling the reprovisioning process and exits immediately with status 'Success' if the task has been successfully generated. This does not mean that the reprovisioning process itself has completed successfully. To monitor the status of the reprovisioning task, you can use the list compute-node command to check the provisioning state of the servers. You can also monitor the log file for information relating to provisioning tasks. The location of the log file can be obtained by checking the Log_File parameter when you run the show system-properties command. See Example 4.73, “Show System Properties” for more information.
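Because the command returns before reprovisioning finishes, the task list is the practical way to follow progress. The sketch below parses output in the format shown by the list task examples in this chapter; the sample text is embedded so the logic is self-contained, and on a real appliance you would capture it from pca-admin instead.

```shell
# Sketch: find appliance tasks that have not yet completed successfully.
# The sample mirrors the "list task" output format used in this chapter;
# on a real system you would capture it with:  pca-admin list task
tasks='Task_ID Status Progress Start_Time Task_Name
376a676449206a SUCCESS 100 06-06-2019 09:00:01 backup
376c7c8afcc86a RUNNING 70 06-05-2019 09:00:01 backup'

# The status is the second field of every data row (skip the header line).
pending=$(printf '%s\n' "$tasks" | awk 'NR>1 && $2 != "SUCCESS" {print $1}')

if [ -n "$pending" ]; then
  echo "still pending: $pending"
fi
```

Re-running such a check periodically, alongside tailing the log file named by the Log_File system property, gives a simple view of a long-running reprovisioning task.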
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
compute-node |
The command target to perform the reprovision operation against. |
--save-local-repo |
Skip the HMP step in the provisioning process in order to save the local storage repository. |
--json |
Return the output of the command in JSON format. |
--less |
Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output. |
--more |
Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only. |
--force |
Force the command to be executed even if the target is in an invalid state. This option is not risk-free and should only be used as a last resort. |
--tee=OUTPUTFILENAME |
When returning the output of the command, also write it to the specified output file. |
Examples
Do not force reprovisioning on a compute node with running virtual machines because they will be left in an indeterminate state.
PCA> reprovision compute-node ovcacn11r1
The reprovision job has been submitted. Use "show compute-node <compute node name>" to monitor the progress.
Status: Success
4.2.42 rerun
Triggers a configuration task to re-run on the Oracle Private Cloud Appliance.
Syntax
rerun
{
config-task
}
id
[
--json
] [
--less
] [
--more
] [
--tee=OUTPUTFILENAME
]
where id
is the identifier for the
configuration task that must be re-run.
Description
Use the rerun command to re-initiate a configuration task that has failed. Use the list config-error command to view the configuration tasks that have failed and the associated identifier that you should use in conjunction with this command. See Example 4.49, “List All Configuration Errors” for more information.
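The pairing of list config-error and rerun lends itself to a small shell sketch. The sample text below is copied from the list config-error example in this chapter, so the extraction logic is self-contained; the rerun invocations are only echoed, since running them requires a live appliance.

```shell
# Sketch: collect the IDs of failed configuration tasks and re-run each one.
# The sample mirrors the "list config-error" output shown in this chapter;
# on a real system you would capture it with:  pca-admin list config-error
errors='ID Module Host Timestamp
87 Management node password 192.168.4.4 Mon Jun 03 02:45:42 2019
54 MySQL management password 192.168.4.216 Mon Jun 03 02:44:54 2019'

# The ID is the first field of every data row (skip the header line).
ids=$(printf '%s\n' "$errors" | awk 'NR>1 {print $1}')

for id in $ids; do
  # On a real appliance:  pca-admin rerun config-task "$id"
  echo "rerun config-task $id"
done
```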
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
config-task |
The command target to perform the rerun operation against. |
--json |
Return the output of the command in JSON format. |
--less |
Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output. |
--more |
Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only. |
--tee=OUTPUTFILENAME |
When returning the output of the command, also write it to the specified output file. |
Examples
PCA> rerun config-task 84
Status: Success
4.2.43 set system-property
Sets the value for a system property on the Oracle Private Cloud Appliance.
Syntax
set system-property
{
ftp_proxy
|
http_proxy
|
https_proxy
|
log_count
|
log_file
|
log_level
|
log_size
|
timezone
}
value
[
--json
] [
--less
] [
--more
] [
--tee=OUTPUTFILENAME
]
where value
is the value for the system
property that you are setting.
Description
Use the set system-property command to set the value for a system property on the Oracle Private Cloud Appliance.
The set system-property command only affects the settings for the management node where it is run. If you change a setting on the active management node, using this command, you should connect to the passive management node and run the equivalent command there as well, to keep the two systems synchronized. This is the only exception where it is necessary to run a CLI command on the passive management node.
You can use the show system-properties command to view the values of various system properties at any point. See Example 4.73, “Show System Properties” for more information.
Changes to system-properties usually require that you restart the service for the change to take effect. To do this, you must run service ovca restart in the shell of the active management node after you have set the system property value.
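The two preceding notes amount to a short procedure: apply the property on both management nodes, then restart the service on the active node. A minimal sketch, assuming hypothetical management node host names (mgmt01 active, mgmt02 passive) and echoing the commands rather than executing them:

```shell
# Sketch: keep both management nodes in sync after a system property change.
# Host names are hypothetical; on a real appliance you would run the
# pca-admin and service commands in each node's shell.
prop='timezone US/Eastern'
for host in mgmt01 mgmt02; do
  echo "ssh root@$host pca-admin set system-property $prop"
done
# Restart the ovca service on the active management node to apply the change.
echo "ssh root@mgmt01 service ovca restart"
```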
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
ftp_proxy |
Set the value for the IP address of an FTP proxy. |
http_proxy |
Set the value for the IP address of an HTTP proxy. |
https_proxy |
Set the value for the IP address of an HTTPS proxy. |
log_count |
Set the value for the number of log files that should be retained through log rotation. |
log_file |
Set the value for the location of a particular log file. Caution: Make sure that the new path to the log file exists. Otherwise, the log server stops working. The system always prepends /var/log to the file path you specify. This property can be defined separately for the following log files: backup, cli, diagnosis, monitor, ovca, snmp, and syncservice. |
log_level |
Set the value for the log level output. Accepted log levels are: CRITICAL, DEBUG, ERROR, INFO, WARNING. This property can be defined separately for the following log files: backup, cli, diagnosis, monitor, ovca, snmp, and syncservice. Use tab completion to insert the log file in the command before the log level value. |
log_size |
Set the value for the maximum log size before a log is rotated. |
timezone |
Set the time zone for the location of the Oracle Private Cloud Appliance. There are several hundred options, and the selection is case sensitive. It is suggested to use tab completion to find the most accurate setting for your location. |
--json |
Return the output of the command in JSON format. |
--less |
Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output. |
--more |
Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only. |
--tee=OUTPUTFILENAME |
When returning the output of the command, also write it to the specified output file. |
Examples
PCA> set system-property log_file syncservice sync/ovca-sync.log
Status: Success
PCA> show system-properties
----------------------------------------
[...]
Backup.Log_File /var/log/ovca-backup.log
Backup.Log_Level DEBUG
Cli.Log_File /var/log/ovca-cli.log
Cli.Log_Level DEBUG
Sync.Log_File /var/log/sync/ovca-sync.log
Sync.Log_Level DEBUG
Diagnosis.Log_File /var/log/ovca-diagnosis.log
Diagnosis.Log_Level DEBUG
[...]
----------------------------------------
Status: Success
Log configuration through the CLI is described in more detail in Section 7.1, “Setting the Oracle Private Cloud Appliance Logging Parameters”.
PCA> set system-property http_proxy http://10.1.1.11:8080
Status: Success

PCA> set system-property http_proxy ''
Status: Success
Proxy configuration through the CLI is described in more detail in Section 7.2, “Adding Proxy Settings for Oracle Private Cloud Appliance Updates”.
PCA> set system-property timezone US/Eastern
Status: Success
4.2.44 set kube-dns
Configures the DNS information for a static network.
The set kube-dns
software command is no longer supported. Kubernetes
functions are now available through Oracle Cloud Native
Environment.
Syntax
set kube-dns
cluster-name
name-servers
search-domains
where cluster-name
is the name of the cluster
where you wish to configure external network settings.
Description
Use the set kube-dns command to set the DNS name servers and search domains.
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
name-servers |
Specify the domain name server address. If you use more than one domain name server, use a comma to separate the addresses. |
search-domains |
Specify one or more search domains. DNS searches require a fully qualified domain name. Listing your often-used domains in the search domains field lets you search on just a machine name, without using the fully qualified domain name. |
Examples
PCA> set kube-dns MyCluster 8.8.8.8,9.9.9.9 demo.org,demo.com
Status: Success
4.2.45 set kube-load-balancer
Sets the VRRP ID parameter for the Kubernetes load balancer. Use this setting to avoid VRRP conflicts on your network.
The set kube-load-balancer
software command is no longer supported.
Kubernetes functions are now available through Oracle Cloud Native
Environment.
Syntax
set kube-load-balancer
cluster-name
VRRP_ID
where cluster-name
is the name of the Kubernetes
cluster where you set the load balancer VRRP ID.
Description
Use the set kube-load-balancer command to manually set the VRRP ID on your cluster when other systems in your network use VRRP.
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
VRRP_ID |
Generally, the VRRP ID is auto-selected when the cluster is created. Set this value manually to avoid conflicts when other systems on your network use VRRP. |
Examples
PCA> set kube-load-balancer MyCluster 232
Status: Success
4.2.46 set kube-master-pool
Configures the host names for the Kubernetes master nodes; these must be resolvable names on the external network.
The set kube-master-pool
software command is no longer supported.
Kubernetes functions are now available through Oracle Cloud Native
Environment.
Syntax
set kube-master-pool
cluster-name
primary-hostname,ipv4address
host-name
host-name
where cluster-name
is the name of the cluster
where you wish to configure host names for the master nodes.
Description
Use the set kube-master-pool command to create a list of valid host names for the master nodes in the cluster.
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
primary-hostname,ipv4address |
The first host name must have an IP address associated with it. This command must be run if the external network is static; it is an invalid command if the external network uses DHCP. |
host-name |
Specify one or more additional host names for the master nodes; no IPv4 addresses are required for the additional hosts. |
Examples
PCA> set kube-master-pool MyCluster Master_host1,192.168.0.20 MasterHost2 MasterHost3
Status: Success
4.2.47 set kube-network
Configures the external network for either DHCP or static IP addressing.
The set kube-network
software command is no longer supported. Kubernetes
functions are now available through Oracle Cloud Native
Environment.
Syntax
set kube-network
cluster-name
DHCP | static
netmask
gateway
where cluster-name
is the name of the cluster
where you wish to configure external network settings.
Description
Use the set kube-network command to set up either DHCP or static IP addressing for the selected cluster.
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
DHCP or static |
Choose either DHCP or static IP addressing for the selected cluster. If you choose static, you must also provide the netmask and gateway information. |
netmask |
Netmask for the interface. |
gateway |
IP address for the gateway. |
Examples
PCA> set kube-network MyCluster dhcp
Status: Success

PCA> set kube-network MyCluster static 255.255.255.0 192.168.0.1
Status: Success
4.2.48 set kube-vm-shape
Changes the profile of the virtual machines that are part of the default node pool for masters or workers.
The set kube-vm-shape
software command is no longer supported.
Kubernetes functions are now available through Oracle Cloud Native
Environment.
Syntax
set kube-vm-shape
cluster-name
master | worker
cpus
memory
where cluster-name
is the name of the cluster
where you wish to change the virtual machine profile.
Description
Use the set kube-vm-shape command to optionally set the virtual machine shapes for either the master or worker nodes in a cluster.
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
master or worker |
Choose which virtual machine shape to customize: the master shape or the worker shape. |
cpus |
Master nodes can have between 4 and 24 CPUs. The default is 8 CPUs. Worker nodes can have between 1 and 14 CPUs. The default is 4 CPUs. |
memory |
Master nodes can have between 16 and 393 GB of memory, if available. The default is 32 GB. Worker nodes can have between 8 and 393 GB of memory, if available. The default is 16 GB. |
Examples
PCA> set kube-vm-shape MyCluster master 4 16384
Status: Success

PCA> set kube-vm-shape MyCluster worker 16 64000
Status: Success
4.2.49 set kube-worker-pool
Resizes the Kubernetes cluster worker pool.
The set kube-worker-pool
software command is no longer supported.
Kubernetes functions are now available through Oracle Cloud Native
Environment.
Syntax
set kube-worker-pool
cluster-name
quantity
|
host-name
host-name
where cluster-name
is the name of the cluster
where you wish to resize the worker pool.
Description
Use the set kube-worker-pool command to change the size of a cluster worker pool.
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
quantity |
If the external network uses DHCP, the quantity of workers required in the worker pool may be specified instead of the list of host names. A quantity of 0 is valid for both DHCP and static networks. Quantity is not required for a static cluster, but if it is specified, it must be set to 0 so that no workers are created. |
host-name |
For static networks, the list of host names is required; the cluster configuration is invalid without them. Specify the host names for the worker nodes in a static network. |
Examples
PCA> set kube-worker-pool MyCluster WorkerHost1 WorkerHost2 WorkerHost3
Status: Success

PCA> set kube-worker-pool MyCluster 2
Status: Success
4.2.50 show
The show command can be used to view information about particular objects such as tasks, rack layout or system properties. Unlike the list command, which applies to a whole target object type, the show command displays information specific to a particular target object. Therefore, it is usually run by specifying the command, the target object type and the object identifier.
Syntax
show
{
cloud-wwpn
|
compute-node
|
iscsi-storage
|
iscsi-storage-profile
|
kube-cluster
|
network
|
node-pool
|
node-pool-node
|
nfs-storage
|
nfs-storage-profile
|
oci-backup
|
oci-target
|
rack-layout
|
rack-type
|
server-profile
|
storage-network
|
system-properties
|
task
|
tenant-group
|
version
|
vhba-info
}
object
[
--json
] [
--less
] [
--more
] [
--tee=OUTPUTFILENAME
]
Where object
is the identifier for the target
object that you wish to show information for. The following
table provides a mapping of identifiers that should be
substituted for object, depending on the command target.
Command Target |
Object Identifier |
---|---|
cloud-wwpn (InfiniBand-based systems only) |
Storage Network/Cloud Name |
compute-node |
Compute Node Name |
iscsi-storage (Ethernet-based systems only) |
iSCSI LUN Name |
iscsi-storage-profile (Ethernet-based systems only) |
Storage Profile Name |
kube-cluster (Ethernet-based systems only) |
Kubernetes Cluster Name. Caution: This option is no longer supported. |
network |
Network Name |
nfs-storage (Ethernet-based systems only) |
NFS Share Name |
nfs-storage-profile (Ethernet-based systems only) |
NFS Storage Profile Name |
node-pool (Ethernet-based systems only) |
Node Pool Name. Caution: This option is no longer supported. |
node-pool-node (Ethernet-based systems only) |
Node Pool Node Name. Caution: This option is no longer supported. |
oci-backup (Ethernet-based systems only) |
Oracle Cloud Infrastructure Backup Name |
oci-target (Ethernet-based systems only) |
Oracle Cloud Infrastructure Target Name |
rack-layout |
Rack Architecture or Type |
rack-type |
(none) |
server-profile (InfiniBand-based systems only) |
Server Name |
storage-network |
Storage Network/Cloud Name |
system-properties |
(none) |
task |
Task ID |
tenant-group |
Tenant Group Name |
version |
(none) |
vhba-info (InfiniBand-based systems only) |
Compute Node Name |
Note that you can use tab completion to help you correctly
specify the object for the different command
targets. You do not need to specify an object
if the command target is system-properties or
version.
Description
Use the show command to view information
specific to a particular target object, identified by specifying
the identifier for the object that you wish to view. The
exception to this is the option to view
system-properties, for which no identifier is
required.
Frequently, the show command may display information that is not available using the list command in conjunction with its filtering capabilities.
Options
The following table shows the available options for this command.
Option |
Description |
---|---|
command target |
The command target to show information for. |
--json |
Return the output of the command in JSON format. |
--less |
Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output. |
--more |
Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only. |
--tee=OUTPUTFILENAME |
When returning the output of the command, also write it to the specified output file. |
Examples
This command only displays the system properties for the management node where it is run. If the system properties have become unsynchronized across the two management nodes, the information reflected by this command may not apply to both systems. You can run this command on either the active or passive management node if you need to check that the configurations match.
PCA> show system-properties
----------------------------------------
HTTP_Proxy           None
HTTPS_Proxy          None
FTP_Proxy            None
Log_File             /var/log/ovca.log
Log_Level            DEBUG
Log_Size (MB)        250
Log_Count            5
Timezone             Etc/UTC
Backup.Log_File      /var/log/ovca-backup.log
Backup.Log_Level     DEBUG
Cli.Log_File         /var/log/ovca-cli.log
Cli.Log_Level        DEBUG
Sync.Log_File        /var/log/ovca-sync.log
Sync.Log_Level       DEBUG
Diagnosis.Log_File   /var/log/ovca-diagnosis.log
Diagnosis.Log_Level  DEBUG
Monitor.Log_File     /var/log/ovca-monitor.log
Monitor.Log_Level    INFO
Snmp.Log_File        /nfs/shared_storage/logs/ovca_snmptrapd.log
Snmp.Log_Level       DEBUG
----------------------------------------
Status: Success
PCA> show task 341e7bc74f339c
----------------------------------------
Task_Name    backup
Status       RUNNING
Progress     70
Start_Time   05-27-2019 09:59:36
End_Time     None
Pid          1503341
Result       None
----------------------------------------
Status: Success
PCA> show rack-layout x8-2_base

RU  Name        Role            Type         Sub_Type     Units
--  ----        ----            ----         --------     -----
42  ovcacn42r1  compute         compute                   [42]
41  ovcacn41r1  compute         compute                   [41]
40  ovcacn40r1  compute         compute                   [40]
39  ovcacn39r1  compute         compute                   [39]
38  ovcacn38r1  compute         compute                   [38]
37  ovcacn37r1  compute         compute                   [37]
36  ovcacn36r1  compute         compute                   [36]
35  ovcacn35r1  compute         compute                   [35]
34  ovcacn34r1  compute         compute                   [34]
33  ovcacn33r1  compute         compute                   [33]
32  ovcacn32r1  compute         compute                   [32]
31  ovcacn31r1  compute         compute                   [31]
30  ovcacn30r1  compute         compute                   [30]
29  ovcacn29r1  compute         compute                   [29]
28  ovcacn28r1  compute         compute                   [28]
27  ovcacn27r1  compute         compute                   [27]
26  ovcacn26r1  compute         compute                   [26]
25  N/A         infrastructure  filler                    [25, 24]
24  N/A         infrastructure  filler                    [25, 24]
23  ovcasw23r1  infrastructure  cisco-data   cisco4       [23]
22  ovcasw22r1  infrastructure  cisco-data   cisco3       [22]
21  ovcasw21r1  infrastructure  cisco                     [21]
20  N/A         infrastructure  zfs-storage  disk-shelf   [20, 19, 18, 17]
19  N/A         infrastructure  zfs-storage  disk-shelf   [20, 19, 18, 17]
18  N/A         infrastructure  zfs-storage  disk-shelf   [20, 19, 18, 17]
17  N/A         infrastructure  zfs-storage  disk-shelf   [20, 19, 18, 17]
16  ovcasw16r1  infrastructure  cisco-data   cisco2       [16]
15  ovcasw15r1  infrastructure  cisco-data   cisco1       [15]
14  ovcacn14r1  compute         compute                   [14]
13  ovcacn13r1  compute         compute                   [13]
12  ovcacn12r1  compute         compute                   [12]
11  ovcacn11r1  compute         compute                   [11]
10  ovcacn10r1  compute         compute                   [10]
9   ovcacn09r1  compute         compute                   [9]
8   ovcacn08r1  compute         compute                   [8]
7   ovcacn07r1  compute         compute                   [7]
6   ovcamn06r1  infrastructure  management   management2  [6]
5   ovcamn05r1  infrastructure  management   management1  [5]
4   ovcasn02r1  infrastructure  zfs-storage  zfs-head2    [4, 3]
3   ovcasn02r1  infrastructure  zfs-storage  zfs-head2    [4, 3]
2   ovcasn01r1  infrastructure  zfs-storage  zfs-head1    [2, 1]
1   ovcasn01r1  infrastructure  zfs-storage  zfs-head1    [2, 1]
0   ovcapduBr1  infrastructure  pdu          pdu2         [0]
0   ovcapduAr1  infrastructure  pdu          pdu1         [0]
-----------------
44 rows displayed

Status: Success
PCA> show network default_external
----------------------------------------
Network_Name        default_external
Trunkmode           None
Description         None
Ports               ['5:1', '5:2']
vNICs               None
Status              ready
Network_Type        external_network
Compute_Nodes       ovcacn12r1, ovcacn07r1, ovcacn13r1, ovcacn14r1, ovcacn10r1, ovcacn09r1, ovcacn11r1
Prefix              192.168.200.0/21
Netmask             None
Route_Destination   None
Route_Gateway       None
----------------------------------------
Status: Success
PCA> show tenant-group myTenantGroup
----------------------------------------
Name myTenantGroup
Default False
Tenant_Group_ID 0004fb0000020000155c15e268857a78
Servers ['ovcacn09r1', 'ovcacn10r1']
State ready
Tenant_Group_VIP None
Tenant_Networks ['myPublicNetwork']
Pool_Filesystem_ID 3600144f0d29d4c86000057162ecc0001
----------------------------------------
Status: Success
PCA> show network myHostNetwork
----------------------------------------
Network_Name myHostNetwork
Trunkmode None
Description None
Ports ['1', '2']
vNICs None
Status ready
Network_Type host_network
Compute_Nodes ovcacn42r1, ovcacn01r2, ovcacn02r2
Prefix 10.10.10
Netmask 255.255.240.0
Route_Destination 10.10.20.0/24
Route_Gateway 10.10.10.250
----------------------------------------
Status: Success
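The relationship between the host network fields above can be sanity-checked with Python's ipaddress module. The values below are taken from the myHostNetwork example; expanding the short prefix 10.10.10 to the full address 10.10.10.0 is an assumption about how the appliance interprets it.

```python
import ipaddress

# Values from the "show network myHostNetwork" output above.
prefix = ipaddress.ip_network("10.10.10.0/255.255.240.0", strict=False)
gateway = ipaddress.ip_address("10.10.10.250")
route_destination = ipaddress.ip_network("10.10.20.0/24")

# The route gateway must sit on the host network itself...
assert gateway in prefix
# ...while the route destination is a separate network reached through it.
assert not route_destination.subnet_of(prefix)
```

Running a check like this before creating a host network catches a gateway that falls outside the prefix/netmask combination.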
PCA> show cloud-wwpn Cloud_A
----------------------------------------
Cloud_Name   Cloud_A
WWPN_List    50:01:39:70:00:58:91:1C, 50:01:39:70:00:58:91:1A,
             50:01:39:70:00:58:91:18, 50:01:39:70:00:58:91:16,
             50:01:39:70:00:58:91:14, 50:01:39:70:00:58:91:12,
             50:01:39:70:00:58:91:10, 50:01:39:70:00:58:91:0E,
             50:01:39:70:00:58:91:0C, 50:01:39:70:00:58:91:0A,
             50:01:39:70:00:58:91:08, 50:01:39:70:00:58:91:06,
             50:01:39:70:00:58:91:04, 50:01:39:70:00:58:91:02,
             50:01:39:70:00:58:91:00
----------------------------------------
Status: Success
PCA> show vhba-info ovcacn10r1

vHBA_Name  Cloud    WWNN                     WWPN
---------  -------  -----------------------  -----------------------
vhba03     Cloud_C  50:01:39:71:00:58:B1:04  50:01:39:70:00:58:B1:04
vhba02     Cloud_B  50:01:39:71:00:58:91:05  50:01:39:70:00:58:91:05
vhba01     Cloud_A  50:01:39:71:00:58:91:04  50:01:39:70:00:58:91:04
vhba04     Cloud_D  50:01:39:71:00:58:B1:05  50:01:39:70:00:58:B1:05
----------------
4 rows displayed

Status: Success
PCA> show version
----------------------------------------
Version   2.4.1
Build     819
Date      2019-06-20
----------------------------------------
Status: Success
PCA> show kube-cluster MyCluster
----------------------------------------
Cluster MyCluster
Tenant_Group Rack1_ServerPool
State CONFIGURED
Sub_State VALID
Ops_Required None
Load_Balancer 100.80.111.129
Vrrp_ID 15
External_Network vm_public_vlan
Cluster_Network_Type dhcp
Gateway None
Netmask None
Name_Servers None
Search_Domains None
Repository Rack1-Repository
Assembly PCA_K8s_va.ova
Masters 3
Workers 3
Cluster_Start_Time None
Cluster_Stop_Time None
Job_ID None
Error_Code None
Error_Message None
----------------------------------------
Status: Success
4.2.51 start
Starts up a rack component.
The start command is deprecated. It will be removed in the next release of the Oracle Private Cloud Appliance Controller Software.
Syntax
start { compute-node CN | management-node MN } [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]

where CN refers to the name of the compute node and MN refers to the name of the management node to be started.
Description
Use the start command to boot a compute node or management node. You must provide the host name of the server you wish to start.
Options
The following table shows the available options for this command.
Option | Description
---|---
compute-node CN / management-node MN | Start either a compute node or a management node. Replace CN or MN respectively with the host name of the server to be started.
--json | Return the output of the command in JSON format.
--less | Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output.
--more | Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only.
--tee=OUTPUTFILENAME | When returning the output of the command, also write it to the specified output file.
Examples
PCA> start compute-node ovcacn11r1
Status: Success
4.2.52 start kube-cluster
Builds a Kubernetes cluster from a cluster definition created using Section 4.2.14, “create kube-cluster”. Depending on the size of the cluster definition, this process can take from 30 minutes to hours.
The start kube-cluster command is no longer supported. Kubernetes functions are now available through Oracle Cloud Native Environment.
Syntax
start kube-cluster cluster-name

where cluster-name refers to the name of the cluster to be started.
Description
Use the start kube-cluster command to submit the Kubernetes cluster definition to be started through an asynchronous job. Progress can be viewed through the show kube-cluster or list kube-cluster commands.
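Because the build is asynchronous and can run for hours, monitoring usually amounts to polling the cluster state until it settles. The sketch below is a generic polling loop, not part of the CLI itself: get_state is a caller-supplied function (for example, one that runs show kube-cluster and extracts the State field), and treating AVAILABLE and ERROR as the only terminal states is an assumption based on the states described in this section.

```python
import time

def wait_for_cluster(get_state, timeout_s=4 * 3600, poll_s=60):
    """Poll until the cluster reaches a terminal state.

    get_state is a caller-supplied callable returning the current State
    string.  AVAILABLE and ERROR are treated as terminal; all other
    states (SUBMITTED, BUILDING, ...) are assumed transient.  The default
    timeout allows for the multi-hour builds of large cluster definitions.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        state = get_state()
        if state in ("AVAILABLE", "ERROR"):
            return state
        if time.monotonic() >= deadline:
            raise TimeoutError("cluster did not reach a terminal state")
        time.sleep(poll_s)
```

Injecting get_state keeps the loop testable and leaves the choice of transport (SSH, local shell) to the caller.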
States
The following table shows the available states for this command.
Note these are the Kubernetes cluster states, not the Oracle VM Kubernetes virtual machine states (stopped, suspended, and so on). View the states using the show kube-cluster command while the cluster is starting, or with the list kube-cluster command.
State | Substate | Description
---|---|---
CONFIGURED | VALID | The cluster is valid.
CONFIGURED | INVALID | The cluster is invalid and cannot be started.
SUBMITTED | QUEUED | Awaiting resources to start building.
BUILDING | | Building the network.
BUILDING | | Building the virtual machines for the control plane.
BUILDING | | Applying the load balancer changes.
BUILDING | | Joining the control plane.
BUILDING | | Building the workers.
STOPPING | | Stopping and removing the master VMs.
STOPPING | | Stopping and removing the network.
STOPPING | | Stopping VMs in a node pool.
STOPPING | | Stopping the network.
AVAILABLE | | The cluster has finished the build process.
AVAILABLE | | Error occurred during build of the worker nodes.
ERROR | | The cluster needs to be stopped and likely requires manual intervention.
CONFIGURED | | The cluster was fully torn down.
| | Cluster build is clear.
Examples
PCA> start kube-cluster MyCluster
Status: Success
4.2.53 stop
Shuts down a rack component or aborts a running task.
The stop commands to shut down rack components are deprecated. They will be removed in the next release of the Oracle Private Cloud Appliance Controller Software. The other stop commands, to abort tasks, remain functional.
Syntax
stop { compute-node CN | management-node MN | task id | update-task id } [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]

where CN or MN refers to the name of the server to be shut down, and id refers to the identifier of the task to be aborted.
Description
Use the stop command to shut down a compute node or management node, or to abort a running task. Depending on the command target, you must provide either the host name of the server you wish to shut down or the unique identifier of the task you wish to abort. This is a destructive operation and you are prompted to confirm whether you wish to continue, unless you use the --confirm flag to override the prompt.
Options
The following table shows the available options for this command.
Option | Description
---|---
compute-node CN / management-node MN | Shut down either a compute node or a management node. Replace CN or MN respectively with the host name of the server to be shut down. Caution: These options are deprecated.
task id / update-task id | Aborts the running task or update task with the specified identifier. Caution: Stopping an update task is a risky operation and should be used with extreme caution.
--confirm | Confirm flag for destructive command. Use this flag to disable the confirmation prompt when you run this command.
--json | Return the output of the command in JSON format.
--less | Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output.
--more | Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only.
--tee=OUTPUTFILENAME | When returning the output of the command, also write it to the specified output file.
Examples
PCA> stop task 341d45b5424c16
************************************************************
 WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
************************************************************
Are you sure [y/N]:y

Status: Success
4.2.54 stop kube-cluster
Stops a Kubernetes cluster.
The stop kube-cluster command is no longer supported. Kubernetes functions are now available through Oracle Cloud Native Environment.
Syntax
stop kube-cluster cluster-name

where cluster-name refers to the name of the cluster to be stopped.
Description
Use the stop kube-cluster command to stop an available Kubernetes cluster through an asynchronous job. Progress can be viewed through the show kube-cluster or list kube-cluster commands.
States
A cluster being stopped passes through a sequence of states, which you can view using the show kube-cluster command while the cluster is stopping. Starting from its initial state (AVAILABLE or ERROR), the cluster moves through SUBMITTED, QUEUED, and STOPPING, and returns to CONFIGURED with substate VALID once the teardown completes.
Examples
PCA> stop kube-cluster MyCluster
Status: Success
4.2.55 update appliance
This command is deprecated. Its functionality is part of the Oracle Private Cloud Appliance Upgrader.
Release 2.4.1 is for factory installation only. It cannot be used for field updates or upgrade operations on existing appliance environments.
4.2.56 update password
Modifies the password for one or more components within the Oracle Private Cloud Appliance.
Syntax
update password { LeafSwitch-admin | MgmtNetSwitch-admin | SpineSwitch-admin | mgmt-root | mysql-appfw | mysql-ovs | mysql-root | ovm-admin | spCn-root | spMn-root | spZfs-root | system-root | wls-weblogic | zfs-root } [ PCA-password target-password ] [ --confirm ] [ --force ] [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]

where PCA-password is the current password of the Oracle Private Cloud Appliance admin user, and target-password is the new password to be applied to the target rack component.
Description
Use the update password command to modify the password for one or more components within the Oracle Private Cloud Appliance. This is a destructive operation and you are prompted to confirm whether you wish to continue, unless you use the --confirm flag to override the prompt.
Optionally you provide the current Oracle Private Cloud Appliance password and the new target component password with the command. If not, you are prompted for the current password of the Oracle Private Cloud Appliance admin user and for the new password that should be applied to the target.
Password changes are not instantaneous across the appliance, but are propagated through a task queue. When applying a password change, allow at least 30 minutes for the change to take effect. Do not attempt any further password changes during this delay. Verify that the password change has been applied correctly.
Options
The following table shows the available options for this command.
Option | Description
---|---
LeafSwitch-admin | Sets a new password for the admin user on the leaf switches.
MgmtNetSwitch-admin | Sets a new password for the admin user on the management network switch.
SpineSwitch-admin | Sets a new password for the admin user on the spine switches.
mgmt-root | Sets a new password for the root user on the management nodes.
mysql-appfw | Sets a new password for the appfw user in the MySQL database.
mysql-ovs | Sets a new password for the ovs user in the MySQL database.
mysql-root | Sets a new password for the root user in the MySQL database.
ovm-admin | Sets a new password for the admin user in Oracle VM Manager.
spCn-root | Sets a new password for the root user in the compute node ILOMs.
spMn-root | Sets a new password for the root user in the management node ILOMs.
spZfs-root | Sets a new password for the root user in the ZFS storage appliance ILOM.
system-root | Sets a new password for the root user on the compute nodes.
wls-weblogic | Sets a new password for the weblogic user in WebLogic Server.
zfs-root | Sets a new password for the root user on the ZFS storage appliance.
--confirm | Confirm flag for destructive command. Use this flag to disable the confirmation prompt when you run this command.
--force | Force the command to be executed even if the target is in an invalid state. This option is not risk-free and should only be used as a last resort.
--json | Return the output of the command in JSON format.
--less | Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output.
--more | Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only.
--tee=OUTPUTFILENAME | When returning the output of the command, also write it to the specified output file.
Examples
PCA> update password ovm-admin
************************************************************
 WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
************************************************************
Are you sure [y/N]:y
Current PCA Password:
New ovm-admin Password:
Confirm New ovm-admin Password:

Status: Success
4.2.57 update compute-node
Updates the Oracle Private Cloud Appliance compute nodes to the Oracle VM Server version included in the Oracle Private Cloud Appliance ISO image.
Syntax
update compute-node { node } [ --confirm ] [ --force ] [ --json ] [ --less ] [ --more ] [ --tee=OUTPUTFILENAME ]

where node is the identifier of the compute node that must be updated with the Oracle VM Server version provided as part of the appliance software ISO image. Run this command for one compute node at a time.
Running the update compute-node command with multiple node arguments is not supported. Neither is running the command concurrently in separate terminal windows.
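When several compute nodes need the same update, the one-node-at-a-time rule can be enforced by driving the updates sequentially from a small wrapper. This is a sketch under assumptions: run_command stands in for whatever mechanism executes the CLI on the management node (it is not part of pca-admin), and the node names used in the example are illustrative.

```python
def update_compute_nodes(nodes, run_command):
    """Update compute nodes strictly one at a time, never concurrently.

    run_command takes a full command string and returns True on success.
    The loop stops at the first failure rather than pressing on, since a
    partially updated rack warrants manual inspection before continuing.
    Returns the list of nodes that were updated successfully.
    """
    updated = []
    for node in nodes:
        if not run_command(f"pca-admin update compute-node {node} --confirm"):
            break
        updated.append(node)
    return updated
```

The --confirm flag suppresses the interactive prompt so the wrapper can run unattended; omit it if you prefer to approve each node manually.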
Description
Use the update compute-node command to install the new Oracle VM Server version on the selected compute node. This is a destructive operation and you are prompted to confirm whether you wish to continue, unless you use the --confirm flag to override the prompt.
Options
The following table shows the available options for this command.
Option | Description
---|---
--confirm | Confirm flag for destructive command. Use this flag to disable the confirmation prompt when you run this command.
--force | Force the command to be executed even if the target is in an invalid state. This option is not risk-free and should only be used as a last resort.
--json | Return the output of the command in JSON format.
--less | Return the output of the command one screen at a time for easy viewing, as with the less command on the Linux command line. This option allows both forward and backward navigation through the command output.
--more | Return the output of the command one screen at a time for easy viewing, as with the more command on the Linux command line. This option allows forward navigation only.
--tee=OUTPUTFILENAME | When returning the output of the command, also write it to the specified output file.
Examples
PCA> update compute-node ovcacn10r1
************************************************************
 WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
************************************************************
Are you sure [y/N]:y

Status: Success