Oracle Solaris Cluster 4.3 Reference Manual

Updated: September 2015

clzonecluster (1CL)

Name

clzonecluster, clzc - create and manage zone clusters

Synopsis

/usr/cluster/bin/clzonecluster [subcommand] -?
/usr/cluster/bin/clzonecluster -V
/usr/cluster/bin/clzonecluster subcommand [options] -v 
 [zone-cluster-name]
/usr/cluster/bin/clzonecluster apply [-n node-name[,…]] [-d] 
 {+ | zone-cluster-name […]}
/usr/cluster/bin/clzonecluster boot [-n node-name[,…]] [-o] 
 {+ | zone-cluster-name […]}
/usr/cluster/bin/clzonecluster clone -Z target-zone-cluster-name 
 [-m method][-n node-name[,…]] {source-zone-cluster-name}
/usr/cluster/bin/clzonecluster configure [-f command-file] 
 zone-cluster-name
/usr/cluster/bin/clzonecluster delete [-F] zone-cluster-name
/usr/cluster/bin/clzonecluster export [-f command-file] 
 zone-cluster-name
/usr/cluster/bin/clzonecluster halt [-n node-name[,…]] 
 {+ | zone-cluster-name}
/usr/cluster/bin/clzonecluster install [-c config_profile.xml] 
 [-M manifest.xml] zone-cluster-name
/usr/cluster/bin/clzonecluster install [-n node-name] 
 -a absolute_path_to_archive [-x cert|ca-cert|key=file]… 
 -z zone zone-cluster-name
/usr/cluster/bin/clzonecluster install [-n node-name] 
 -d absolute_root_path zone-cluster-name
/usr/cluster/bin/clzonecluster install-cluster 
 [-d dvd-image] [-n node-name[,…]] 
 [-p patchdir=patch-dir[,patchlistfile=file-name]] 
 [-s software-component[,…]] [-v] zone-cluster-name
/usr/cluster/bin/clzonecluster install-cluster 
 [-p patchdir=patch-dir[,patchlistfile=file-name]] 
 [-n node-name[,…]] [-v] zone-cluster-name
/usr/cluster/bin/clzonecluster list [+ | zone-cluster-name […]]
/usr/cluster/bin/clzonecluster move -f zone-path zone-cluster-name
/usr/cluster/bin/clzonecluster ready [-n node-name[,…]] 
 {+ | zone-cluster-name […]}
/usr/cluster/bin/clzonecluster reboot [-n node-name[,…]] [-o] 
 {+ | zone-cluster-name […]}
/usr/cluster/bin/clzonecluster set {-p name=value} 
 [-p name=value] […] [zone-cluster-name]
/usr/cluster/bin/clzonecluster show [+ | zone-cluster-name […]]
/usr/cluster/bin/clzonecluster show-rev [-v] [-n node-name[,…]]
 [+ | zone-cluster-name …]
/usr/cluster/bin/clzonecluster status [+ | zone-cluster-name […]]
/usr/cluster/bin/clzonecluster uninstall [-F] [-n node-name
 [,…]] zone-cluster-name
/usr/cluster/bin/clzonecluster verify [-n node-name[,…]] 
 {+ | zone-cluster-name […]}

Description

The clzonecluster command creates and modifies zone clusters for Oracle Solaris Cluster configurations. The clzc command is the short form of the clzonecluster command; the commands are identical. The clzonecluster command is cluster-aware and supports a single source of administration. You can issue all forms of the command from one node to affect a single zone-cluster node or all nodes.

You can omit subcommand only when options is the –? option or the –V option.

The subcommands require at least one operand, except for the list, show, and status subcommands. However, many subcommands accept the plus sign operand (+) to apply the subcommand to all applicable objects. The clzonecluster commands can be run on any node of a zone cluster and can affect any or all of the zone cluster.

Each option has a long and a short form. Both forms of each option are given with the description of the option in OPTIONS.


Note -  You cannot change the zone cluster name after the zone cluster is created.

Subcommands

The following subcommands are supported:

apply

Applies configuration changes to the zone cluster.

The apply subcommand accommodates persistent live reconfiguration of zone clusters. You should run clzonecluster configure to make configuration changes, and then run the apply subcommand to apply the changes to the specific zone clusters. The apply subcommand uses the –n option to specify a list of nodes where the reconfiguration will be applied.

You can use the apply subcommand only from a global-cluster node.
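
For example, an illustrative sequence (the cluster, node, and file-system names are placeholders) first changes the configuration and then applies the change to one node:

# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/qfs/ora_home
clzc:sczone:fs> set special=oracle_home
clzc:sczone:fs> set type=samfs
clzc:sczone:fs> end
clzc:sczone> commit
clzc:sczone> exit
# clzonecluster apply -n phys-schost-1 sczone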

boot

Boots the zone cluster.

The boot subcommand boots the zone cluster. Use the –n option to boot the zone cluster on only a specified list of nodes.

You can use the boot subcommand only from a global-cluster node.
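
For example, to boot the zone cluster sczone on only the node phys-schost-1 (both names are illustrative):

# clzonecluster boot -n phys-schost-1 sczone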

clone

Clones the zone cluster.

The clone subcommand installs a zone cluster by copying an existing installed zone cluster, as an alternative to installing a new zone cluster from scratch. The clone subcommand does not itself create the new zone cluster: you must first use the configure subcommand to create the new zone cluster, and then use the clone subcommand to apply the cloned configuration to it. Ensure that the source zone cluster used for cloning is in the Installed state (not running) before you clone it.

You can use the clone subcommand only from a global-cluster node.
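
For example, an illustrative sequence (the cluster names are placeholders) halts the source zone cluster, configures the new zone cluster, and then clones the source into it:

# clzonecluster halt sczone
# clzonecluster configure sczone1
# clzonecluster clone -Z sczone1 -m copy sczone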

configure

Launches an interactive utility to configure a solaris, solaris10, or labeled brand zone cluster.

The configure subcommand uses the zonecfg command to configure a zone on each specified machine. The configure subcommand lets you specify properties that apply to each node of the zone cluster. These properties have the same meaning as established by the zonecfg command for individual zones. The configure subcommand also supports the configuration of properties that are unknown to the zonecfg command. The configure subcommand launches an interactive shell if you do not specify the –f option. The –f option takes a command file as its argument. The configure subcommand uses this file to create or modify zone clusters non-interactively.

The configure subcommand also lets you configure a zone cluster using the Unified Archives, choosing a recovery archive or a clone archive. Use the –a archive option with the create subcommand. For example:

# clzonecluster configure sczone1
sczone1: No such zone cluster configured
Use 'create' to begin configuring a new zone cluster.
clzc:sczone1> create -a archive -z archived-zone

You can use the configure subcommand only from a global-cluster node. For more information, see the Oracle Solaris Cluster 4.3 Software Installation Guide.

To specify a solaris10 brand zone cluster, you can use a default template when you configure the zone cluster. The default template is located at /etc/cluster/zone_cluster/ORCLcls10default.xml. You can use the –t option to specify the default solaris10 zone cluster template, or another existing solaris10 zone cluster on the cluster. If another solaris10 zone cluster is specified, the zone cluster configuration is imported from the specified zone cluster. You must also specify the root password in the sysid property, so that the verify or commit operations do not fail. Type the following commands to apply the template:

# clzonecluster configure sczone2
sczone2: No such zone cluster configured
Use 'create' to begin configuring a new zone cluster.
clzc:sczone2> create -t ORCLcls10default
clzc:sczone2> info
zonename: sczone2
zonepath:
autoboot: true
hostid:
brand: solaris10

Both the interactive and noninteractive forms of the configure command support several subcommands to edit the zone cluster configuration. See zonecfg(1M) for a list of available configuration subcommands.

The interactive configure utility enables you to create and modify the configuration of a zone cluster. Zone-cluster configuration consists of a number of resource types and properties. The configure utility uses the concept of scope to determine where the subcommand applies. There are three levels of scope that are used by the configure utility: cluster, resource, and node-specific resource. The default scope is cluster. The following list describes the three levels of scope:

  • Cluster scope - Properties that affect the entire zone cluster. If the zoneclustername is sczone, the interactive shell of the clzonecluster command looks similar to the following:

    clzc:sczone>
  • Node-specific resource scope - A special resource scope that is nested inside the node scope. Settings inside a node-specific resource scope affect a specific resource on a specific node in the zone cluster. For example, you can add a net resource to a specific node in the zone cluster. The interactive shell of the clzonecluster command looks similar to the following:

    clzc:sczone:node:net>
  • Resource scope - Properties that apply to one specific resource. A resource scope prompt has the name of the resource type appended. For example, the interactive shell of the clzonecluster command looks similar to the following:

    clzc:sczone:net>
delete

Removes a specific zone cluster.

This subcommand deletes a specific zone cluster. When you use the wildcard operand (*), the delete subcommand removes all zone clusters that are configured on the global cluster. The zone cluster must be in the configured state before you run the delete subcommand. Using the –F option with the delete subcommand attempts to delete the zone cluster regardless of its state.

You can use the delete subcommand only from a global-cluster node.
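
For example, to forcefully delete the zone cluster sczone (an illustrative name) regardless of its state:

# clzonecluster delete -F sczone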

export

Exports the zone cluster configuration into a command file.

The exported command file can be used as input for the configure subcommand. Modify the file as needed to reflect the configuration that you want to create. See the clconfiguration(5CL) man page for more information.

You can use the export subcommand only from a global-cluster node.
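
For example, to export the configuration of the zone cluster sczone to an illustrative file path:

# clzonecluster export -f /var/tmp/sczone-config sczone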

halt

Stops a zone cluster or a specific node on the zone cluster.

When you specify a zone cluster, the halt subcommand applies only to that zone cluster. You can halt the entire zone cluster or halt only specific nodes of a zone cluster. If you do not specify a zone cluster, the halt subcommand applies to all zone clusters. You can also halt all zone clusters on specified machines.

The halt subcommand uses the –n option to halt zone clusters on specific nodes. By default, the halt subcommand stops all zone clusters on all nodes. If you specify the + operand in place of a zone name, all zone clusters are stopped.

You can use the halt subcommand only from a global-cluster node.
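
For example, the following illustrative commands halt the zone cluster sczone on only the node phys-schost-1, and then halt all zone clusters on all nodes:

# clzonecluster halt -n phys-schost-1 sczone
# clzonecluster halt +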

import-zone

Imports an existing installed Oracle Solaris zone into the zone-cluster configuration.

You can run the import-zone command in either interactive or non-interactive mode.

You can run the import-zone command only in node scope. You must set the zonepath, ip-type, and brand properties in the global scope, and the physical-host property in the node scope, before you run the import-zone command.

When you run the import-zone command, it looks for the zone that is specified as zonename in the node that is specified as physical-host in the node scope and imports it into the zone-cluster configuration.

The import-zone command validates the zone cluster's zonepath, ip-type, and brand properties against the zone's respective properties to ensure that they are identical, which is necessary for a successful import. Additionally, the zone being imported must be in the installed state.

For example, to run the import-zone command in the non-interactive mode:

create
set zonepath=/zones/zc1
add node
set physical-host=phys-host1
import-zone -y zonename=zone1
set hostname=zc-host1
end
commit
exit

Note -  In the non-interactive mode, use the –y option to rename the zone forcefully.

To run the import-zone command in the interactive mode:

create
add node
set physical-host=phys-host3
import-zone zonename=zone1
This operation renames the zone to the zone-cluster's zonename. Do you want to proceed (Y/N)
Y
set hostname=zc-host3
end
commit
exit
install

Installs a zone cluster.

This subcommand installs a zone cluster.

If you use the install -M manifest.xml option, the manifest you specify is used for installation on all nodes of the zone cluster. A manifest file describes the Oracle Solaris package information that the administrator requires for installation, such as the certificate_file, key_file, publisher, and any additional packages. The manifest.xml file must also specify an Oracle Solaris Cluster group package (ha-cluster-full, ha-cluster-framework-full, ha-cluster-data-services-full, or ha-cluster-minimal) for a zone cluster installation. For more information about the Automated Installer manifest, see Creating a Custom AI Manifest in Installing Oracle Solaris 11.3 Systems.

If you do not use the –M option (which is the default), the Automated Installer manifest at /usr/share/auto_install/manifest/zone_default.xml is used for the installation. When this zone_default.xml manifest is used, all of the ha-cluster/* packages that are installed in the global zone of the issuing zone-cluster node are installed in all nodes of the zone cluster. If you use a custom manifest when installing the zone cluster and do not specify an Oracle Solaris Cluster group package, the installation fails.
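
For example, the following illustrative command (the manifest and profile paths are placeholders) installs a solaris brand zone cluster by using a custom Automated Installer manifest together with a configuration profile:

# clzonecluster install -M /var/tmp/zc-manifest.xml -c /var/tmp/zc-profile.xml sczone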

The underlying global zones of all zone-cluster nodes that you want to install must have the identical set of Oracle Solaris Cluster packages installed as are in the global zone of the zone-cluster node that issues the install subcommand. Zone-cluster installation might fail on any zone-cluster node that does not meet this requirement.

You can use the install subcommand only from a global-cluster node. The –M and –c options can be used only for solaris and labeled brand zone clusters.

If the brand of the zone cluster is solaris10, you must use the –a or –d option.

–a archive

The absolute path of the archive to use for the installation: a unified archive for solaris or solaris10 brand zone clusters, a flash archive (flar) for solaris10 brand zone clusters, or an Oracle Solaris 10 image archive. See the solaris10(5) man page for details about supported archive types. The absolute path of the archive must be accessible on all the physical nodes of the cluster where the zone cluster will be installed. The unified archive installation can use a recovery archive or a clone archive.

–d path

The path to the root directory of an installed Oracle Solaris 10 system. The path should be accessible on all the physical nodes of the cluster where the zone cluster will be installed.

[–x cert|ca-cert|key=file]…

If you have an HTTPS unified archive location, specify the SSL certificate, Certificate Authority (CA) certificate, and key files. You can specify the –x option multiple times.

–z zone

If the unified archive contains multiple zones, specify the zone name of the source of the configuration or installation.

The same archive or installed Oracle Solaris 10 system will be used as a source for installation of all the solaris10 brand zones in the zone cluster. The installation will override the system identification parameters in the source archive or installed Oracle Solaris 10 system with the system identification parameters specified in the sysid resource type during zone cluster configuration.
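
For example, either of the following illustrative commands (the archive and root-directory paths are placeholders) installs a solaris10 brand zone cluster:

# clzonecluster install -a /export/archives/s10-system.flar sczone
# clzonecluster install -d /export/s10-root sczone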

install-cluster

The install-cluster subcommand installs the Oracle Solaris Cluster software in a solaris or solaris10 brand zone-cluster node. The installed software includes the core packages, cluster software components (such as agents that are supported in the zone cluster), and the Geographic Edition software. It also includes patches when the software is installed in a solaris10 brand zone cluster. For a solaris10 brand zone cluster, only the Oracle Solaris Cluster packages that support the Oracle Solaris 10 OS can be installed.


Note -  The install-cluster subcommand does not support installing Oracle Solaris Cluster version 3.3 or 3.3 5/11 software in solaris or solaris10 brand zone-cluster nodes. Check the Oracle Solaris Cluster 4.3 Release Notes for more information on supported releases for solaris and solaris10 brand zone clusters.

Use this subcommand when solaris or solaris10 brand zones are installed from an Oracle Solaris image that does not include the cluster software.

To use this subcommand on solaris10 branded zones, the Oracle Solaris OS software of an Oracle Solaris 10 system must first be installed to the solaris10 zones by using the clzonecluster install command, and the zones must be booted to an online state. If the cluster core packages are not yet installed on the solaris10 brand zones, you can install the core packages, the cluster software components, and the patches at the same time by specifying the –d option for the cluster release DVD directory, the –s option for the cluster software components, and the –p option for the patches. The options for installing cluster software components and patches are optional. If you have already installed the cluster core packages, you can still use this subcommand to install patches and any of the cluster software components that are supported in the zone cluster. When patching information is specified, the cluster nodes of the zone cluster must be booted into an offline-running state with the –o option.
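
For example, the following illustrative command (the DVD and patch directory paths are placeholders) installs the core packages, all supported cluster software components, and patches in a solaris10 brand zone cluster:

# clzonecluster install-cluster -d /export/osc-dvd -p patchdir=/export/patches -s all sczone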

To use this subcommand on a solaris branded zone, the Oracle Solaris OS software must be installed on the solaris zones, and the zone must be imported into a zone cluster configuration. The solaris zones must be booted to a running state before using this subcommand. If the cluster core packages are not yet installed in the solaris brand zone, you can install the core packages and the cluster software components at the same time by specifying the –s option with the appropriate arguments. The options for installing cluster software components are optional. If no option is specified, the cluster packages that are installed on the global zone will be installed to the zone cluster on all zone nodes. The options –d and –p are not valid for the solaris zones. If you have already installed the cluster core packages, you can still use this subcommand to install software components that are supported in the zone cluster.

A solaris10 brand zone cluster supports only the shared-IP zone type. For more information on exclusive-IP and shared-IP zone clusters, see the Oracle Solaris Cluster 4.3 Software Installation Guide.

This subcommand can be run only from the global zone.

list

Displays the names of configured zone clusters.

This subcommand reports the names of zone clusters that are configured in the cluster.

  • If you run the list subcommand from a global-cluster node, the subcommand displays a list of all the zone clusters in the global cluster.

  • If you run the list subcommand from a zone-cluster node, the subcommand displays only the name of the zone cluster.

To see the list of nodes where the zone cluster is configured, use the –v option.
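
For example, the following illustrative listing shows two configured zone clusters (the names and output are hypothetical):

# clzonecluster list
sczone
sczone1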

move

Moves the zonepath to a new zonepath.

This subcommand moves the zonepath to a new zonepath.

You can use the move subcommand only from a global-cluster node.
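
For example, to move the zone cluster sczone to a new zonepath (both names are illustrative):

# clzonecluster move -f /zones/sczone-new sczone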

ready

Prepares the zone for applications.

This subcommand prepares the zone for running applications.

You can use the ready subcommand only from a global-cluster node.

reboot

Reboots a zone cluster.

This subcommand reboots the zone cluster and is similar to issuing a halt subcommand, followed by a boot subcommand. See the halt subcommand and the boot subcommand for more information.

You can use the reboot subcommand only from a global-cluster node.

set

Sets values of properties specified with the –p option for a zone cluster. You can use the set subcommand from the global zone or from a zone cluster. See the description of –p in the OPTIONS section for information about the properties you can set.
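
For example, to set the resource_security property to SECURE for the zone cluster sczone (an illustrative name):

# clzonecluster set -p resource_security=SECURE sczone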

show

Displays the properties of zone clusters.

Properties for a zone cluster include the zone cluster name, brand, IP type, node list, zonepath, and allowed address. When run from a zone cluster, the show subcommand applies only to that particular zone cluster, and the zonepath is always displayed as /. If a zone-cluster name is specified, this command applies only to that zone cluster.

show-rev

Displays the cluster release information for each node of the zone cluster.

This feature is useful for listing the release version and patches installed in the zone cluster. For example:

# clzonecluster show-rev
=== Zone Clusters ===
Zone Cluster Name: zc1
Release at vznode1a on node pnode1:3.3u2_40u1_zc:2012-04-01
Release at vznode2a on node pnode2:3.3u2_40u1_zc:2012-04-01

You can use the show-rev subcommand from a global-cluster node or from a zone-cluster node.

status

Determines whether the zone-cluster node is a member of the zone cluster and displays whether the zone cluster is a solaris, solaris10, or labeled brand.

The zone state can be one of the following: Configured, Installed, Ready, Running, Shutting Down, and Unavailable. The state of all the zone clusters in the global cluster is displayed so you can see the state of your virtual cluster.

To check zone activity, use the zoneadm command instead.

You can use the status subcommand only from a global-cluster node.

uninstall

Uninstalls a zone cluster.

This subcommand uninstalls a zone cluster. The uninstall subcommand uses the zoneadm command.

You can use the uninstall subcommand only from a global-cluster node.
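
For example, to uninstall the zone cluster sczone from only the node phys-schost-1 without a confirmation prompt (both names are illustrative):

# clzonecluster uninstall -F -n phys-schost-1 sczone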

verify

Checks that the syntax of the specified information is correct.

This subcommand invokes the zoneadm verify command on each node in the zone cluster to ensure that each zone cluster member can be installed safely. For more information, see zoneadm(1M).

You can use the verify subcommand only from a global-cluster node.

Options


Note -  The short and long form of each option are shown in this section.

The following options are supported:

–?
--help

Displays help information.

You can specify this option with or without a subcommand.

If you do not specify a subcommand, the list of all available subcommands is displayed.

If you specify a subcommand, the usage for that subcommand is displayed.

If you specify this option and other options, the other options are ignored.

–a absolute_path_to_archive

Specifies the path to a flash_archive, cpio, pax, xustar, or zfs archive, or a level 0 ufsdump, of an installed Oracle Solaris 10 system, an installed Oracle Solaris 10 native zone, or a solaris10 branded zone. You can also specify the absolute path of a unified archive. For more information, see the following man pages: solaris10(5), flash_archive(4), cpio(1), and pax(1).

–c config_profile.xml
--configprofile config_profile.xml

Specifies a configuration profile template for a solaris brand zone cluster. After installation from the repository, the template applies the system configuration information to all nodes of the zone cluster. If config_profile.xml is not specified, you must manually configure each zone-cluster node by running the zlogin -C zoneclustername command from the global zone on each node. All profiles must have a .xml extension.

The –c option replaces the hostname of the zone-cluster node in the configuration profile template. The profile is applied to the zone-cluster node after booting the zone-cluster node.

The contents of the profile are a line-delimited list of commands to be specified to the interactive clzonecluster utility. See the Examples section of this man page for an example of the profile contents.

–d absolute_root_path
--dirpath dirpath

When the –d option is used with the install subcommand, it specifies the path to the root directory of an installed Oracle Solaris 10 system. The path must be accessible on all the physical nodes of the cluster where the zone cluster will be installed.

–d
--dvd-directory dvd-directory

Specifies the DVD image directory.

When the –d option is used with the install-cluster subcommand, it specifies the DVD image directory for an Oracle Solaris Cluster release that supports solaris10 brand zones. The DVD image includes core packages and other cluster software components, such as agents, that are supported in the zone cluster and Geographic Edition software. The DVD directory must be accessible from the global zone of the node where you run the command.

–d
--dry_run

When the –d option is used with the apply subcommand, the reconfiguration runs in a dry-run mode. The dry-run mode does not change the configuration and leaves the running zone intact. Use the dry-run mode to review actions that would be performed by the real reconfiguration.

–f {commandfile | zonepath}
--file-argument {commandfile | zonepath}

When used with the configure subcommand, the –f option specifies the command file argument. For example, clzonecluster configure –f commandfile. When used with the move subcommand, the –f option specifies the zonepath.

–F

You can use the –F option during delete and uninstall operations. The –F option suppresses the Are you sure you want to do this operation [y/n]? confirmation prompt.

–m method
--method method

Use the –m option to specify the clone method. The only valid method for cloning is copy. Before you run the clone subcommand, you must halt the source zone cluster.

–M manifest.xml
--manifest manifest.xml

Use the –M option to specify a manifest for all nodes of a solaris brand zone cluster. The manifest specifies the Oracle Solaris package information and the Oracle Solaris Cluster package for a zone cluster installation.

–n nodename[,…]
--nodelist nodename[,…]

Specifies the node list for the subcommand.

For example, clzonecluster boot –n phys-schost-1,phys-schost-2 zoneclustername.

–o
--offline

Boots or reboots a zone cluster into offline-running mode.

The offline-running mode occurs when the zone-cluster node is out of zone cluster membership but the Oracle Solaris zone state is running. Zone clusters share the boot mode (cluster or non-cluster mode) with the physical cluster, so being offline is different from the cluster being in non-cluster mode.

To boot the zone cluster into offline-running mode, type the following:

clzonecluster boot [-n phys-schost-1,…] [-o] zoneclustername

To reboot the zone cluster into offline-running mode, type the following:

clzonecluster reboot [-n phys-schost-1,…] [-o] zoneclustername

To boot an offline-running zone cluster back into online-running mode, run the clzonecluster reboot command without the –o option.

–p name=value
--property=name=value
--property name=value

The –p option is used with the install-cluster subcommand and the set subcommand. For information about usage of –p with the install-cluster subcommand, see the description for –p patchdir=patchdir[,patchlistfile=patchlistfile].

The –p option is used with the set subcommand to specify values of properties. Multiple instances of –p name=value are allowed.

Use this option with the set subcommand to modify the following properties:

resource_security

Specifies a security policy for execution of programs by RGM resources. Permissible values of resource_security are SECURE, WARN, OVERRIDE, or COMPATIBILITY.

Resource methods such as Start and Validate always run as root. If the method executable file has non-root ownership or group or world write permissions, an insecurity exists. In this case, if the resource_security property is set to SECURE, execution of the resource method fails at run time and an error is returned. If resource_security has any other setting, the resource method is allowed to execute with a warning message. For maximum security, set resource_security to SECURE.

The resource_security setting also modifies the behavior of resource types that declare the application_user resource property. For more information, see the application_user section of the r_properties(5) man page.

–p patchdir=patchdir[,patchlistfile=patchlistfile]
--patch-specification=patchdir=patchdir[,patchlistfile=patchlistfile]
--patch-specification patchdir=patchdir[,patchlistfile=patchlistfile]

The patchdir and patchlistfile properties specified by the –p option are used only with the install-cluster subcommand. If you install patches after the core packages have been installed, the zone cluster must be booted to an offline-running state in order to apply patches.

Multiple instances of –p name=value are allowed.

patchdir

Specifies the directory that contains Oracle Solaris Cluster patches that you want to apply to the solaris10 brand zone. The patchdir directory is required, and must be accessible from inside the solaris10 brand zone on all nodes of the zone cluster.

patchlistfile

Specifies a file that contains the list of patches to install. If the optional patchlistfile is not specified, the command attempts to install all the patches inside the patchdir directory. You can also create a patchlistfile in the patchdir directory that lists the patch IDs, one per line, to indicate the patches you want to install.

–s {all | software-component[,…]}
--software-component {all | software-component[,…]}

Specifies the software components to install.

For the solaris10 brand zones, these components are in addition to the core packages, and can be data services that are supported in zone clusters or Geographic Edition software. When you use -s all, no other components can be specified and all data services and Geographic Edition software are installed. For data service agents, the component name is the agent name. For Geographic Edition software, specify it as –s geo. If you do not specify the –s option, only cluster framework software is installed.

For the solaris brand zones, this option can be used to specify core group packages, data service group packages, or Geographic Edition software. When you use -s all, no other components can be specified, and all the data services, the Geographic Edition software, and all the core packages that are installed in the global zone are installed. If you do not specify the –s option, only the packages that are installed in the global zone are installed.

–v
--verbose

Displays verbose information on the standard output (stdout).

–V
--version

Displays the version of the command.

If you specify this option with other options, with subcommands, or with operands, they are all ignored. Only the version of the command is displayed. No other processing occurs.

[–x cert|ca-cert|key=file] …

If you have an HTTPS unified archive location, specify the SSL certificate, CA certificate, and key files. You can specify the –x option multiple times.

–Z target-zoneclustername
--zonecluster target-zoneclustername

Specifies the name of the target zone cluster that is created by cloning.

Specify the name of the source zone cluster as the operand of the clone subcommand. The source zone cluster must be halted before you use this subcommand.

–z zone

If the unified archive contains multiple zones, specify the zone name of the source of the installation.

Resources and Properties

The clzonecluster command supports several resources and properties for zone clusters.

You must use the clzonecluster command to configure any resources and properties that are supported by the clzonecluster command. See the zonecfg(1M) man page for more information on configuring resources or properties that are not supported by the clzonecluster command.

The following subsections, Resources and Properties, describe those resources and properties that are supported by the clzonecluster command.

Resources

The following lists the resource types that are supported in the resource scope and where to find more information:

admin

For more information, see the zonecfg(1M) man page. This resource can be used in both the cluster scope and the node scope. This resource is passed down to the individual Oracle Solaris zone level. When the resource is specified in both cluster and node scope, the node scope resource information is passed down to the Oracle Solaris zone of the specific node of the zone cluster.

The auths property of the admin resource can be set to one of the following values:

clone

Equivalent to solaris.zone.clonefrom

login

Equivalent to solaris.zone.login

manage

Equivalent to solaris.zone.manage

capped-cpu

For more information, see the zonecfg(1M) man page. This resource can be used in both the cluster scope and the node scope. This resource is passed down to the individual Oracle Solaris zone level. When the resource is specified in both cluster and node scope, the node scope resource information is passed down to the Oracle Solaris zone of the specific node of the zone cluster.

capped-memory

For more information, see the zonecfg(1M) man page. This resource can be used in the cluster scope and the node scope. This resource is passed down to the individual Oracle Solaris zone level. When the resource is specified in both cluster and node scope, the node scope resource information is passed down to the Oracle Solaris zone of the specific node of the zone cluster.

dataset

For more information, see the zonecfg(1M) man page. This resource can be used in the cluster scope or the node scope. You cannot specify a data set in both cluster and node scope.

The resource in cluster scope is used to export a ZFS data set to be used in the zone cluster for a highly-available ZFS file system. The exported data set is managed by the Oracle Solaris Cluster software, and is not passed down to the individual Oracle Solaris zone level when specified in the cluster scope. A data set cannot be shared between zone clusters.

The resource in node scope is used to export a local ZFS dataset to a specific zone-cluster node. The exported data set is not managed by the Oracle Solaris Cluster software, and is passed down to the individual Oracle Solaris zone level when specified in the node scope.
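
For example, the following illustrative configure fragments (the pool and host names are placeholders) add a cluster-scope data set for a highly available ZFS file system, and a node-scope data set that is local to one zone-cluster node:

clzc:sczone> add dataset
clzc:sczone:dataset> set name=hapool/export
clzc:sczone:dataset> end
clzc:sczone> select node physical-host=phys-schost-1
clzc:sczone:node> add dataset
clzc:sczone:node:dataset> set name=localpool/data
clzc:sczone:node:dataset> end
clzc:sczone:node> end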

dedicated-cpu

For more information, see the zonecfg(1M) man page. You can use a fixed number of CPUs that are dedicated to the zone cluster on each node.

This resource can be used in the cluster scope and the node scope. This resource is passed down to the individual Oracle Solaris zone level. When the resource is specified in both cluster and node scope, the node scope resource information is passed down to the Oracle Solaris zone of the specific node of the zone cluster.

device

For more information, see the zonecfg(1M) man page. This resource is passed down to the individual Oracle Solaris zone level and can be specified in the cluster scope or the node scope. The resource in the node scope is used to add devices specific to a zone-cluster node. You can add a device to only one zone cluster. You cannot add the same device to both the cluster scope and the node scope.

fs

For more information, see the zonecfg(1M) man page. You can specify this resource in the cluster scope or the node scope. You cannot specify the fs resource in both cluster and node scope.

The resource in cluster scope is generally used to export a file system to be used in the zone cluster. The exported file system is managed by the Oracle Solaris Cluster software, and is not passed down to the individual Oracle Solaris zone level, except for an lofs file system with the cluster-control property set to false. For more information about the cluster-control property, see the description for fs in the Resources section of this man page.

The resource in node scope is used to export a local file system to a specific zone cluster node. The exported file system is not managed by the Oracle Solaris Cluster software, and is passed down to the individual Oracle Solaris zone level when specified in the node scope.

You can export a file system to a zone cluster by using either a direct mount or a loopback mount. A direct mount makes the file system accessible inside the zone cluster by mounting the specified file system at a location that is under the root of the zone, or some subdirectory that has the zone root in its path. A direct mount means that the file system belongs exclusively to this zone cluster. When a zone cluster runs on Oracle Solaris Trusted Extensions, the use of direct mounts is mandatory for files mounted with both read and write privileges. Zone clusters support direct mounts for UFS, QFS standalone file system, QFS shared file system, and ZFS (exported as a data set).

A loopback mount is a mechanism for making a file system already mounted in one location appear to be mounted in another location. You can export a single file system to multiple zone clusters through the use of one loopback mount per zone cluster. This makes it possible to share a single file system between multiple zone clusters. The administrator must consider the security implications before sharing a file system between multiple zone clusters. Regardless of how the real file system is mounted, the loopback mount can restrict access to read-only.

fs: cluster-control

The cluster-control property applies only to loopback mounts specified in the cluster scope. The default value for the cluster-control property is true.

When the property value is true, Oracle Solaris Cluster manages this file system and does not pass the file system information to the zonecfg command. Oracle Solaris Cluster mounts and unmounts the file system in the zone-cluster node as needed after the zone boots.

Oracle Solaris Cluster can manage loopback mounts for QFS shared file systems, UFS, QFS standalone file systems, and PxFS on UFS.

When the property value is false, Oracle Solaris Cluster does not manage the file system. The cluster software passes this file system information and all associated information to the zonecfg command, which creates the zone-cluster zone on each machine. In this case, the Oracle Solaris software mounts the file system when the zone boots. The administrator can use this option with the UFS file system.

The administrator can specify a loopback mount in the cluster scope. Configuring the loopback mount with a cluster-control property value of false is useful for read-only mounts of common local directories (such as directories that contain executable files). This information is passed to the zonecfg command, which performs the actual mounts. Configuring the loopback mount with a cluster-control property value of true is useful for making global file systems (PxFS) or shared QFS file systems available to a zone cluster that is under cluster control.

A QFS shared file system, UFS file system, QFS standalone file system, or ZFS file system can be configured in at most one zone cluster.
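
For example, the following illustrative cluster-scope entry (the directory paths are placeholders) loopback-mounts a common local directory read-only with cluster-control set to false, so that the Oracle Solaris software, not the cluster software, mounts it when the zone boots:

clzc:sczone> add fs
clzc:sczone:fs> set dir=/opt/local
clzc:sczone:fs> set special=/usr/local
clzc:sczone:fs> set type=lofs
clzc:sczone:fs> add options [ro,nodevices]
clzc:sczone:fs> set cluster-control=false
clzc:sczone:fs> end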

net

For more information about net resources, see the zonecfg(1M) man page.

Any net resource managed by Oracle Solaris Cluster, such as Logical Host or Shared Address, is specified in the cluster scope. Any net resource managed by an application, such as an Oracle RAC VIP, is specified in the cluster scope. These net resources are not passed to the individual Oracle Solaris zone level.

The administrator can specify the network interface card (NIC) to use with the specified IP address. The system automatically selects a NIC that satisfies the following two requirements:

  • The NIC already connects to the same subnet.

  • The NIC has been configured for this zone cluster.

node

The node resource serves the following two purposes:

  • Identifies a scope level. Any resource specified in a node scope belongs exclusively to this specific node.

  • Identifies a node of the zone cluster. The administrator identifies the machine where the zone will run by identifying the global-cluster global zone on that machine. Specifying an IP address and NIC for each zone-cluster node is optional. The administrator also specifies the network information that is used to reach this node.


Note -  If the administrator does not configure an IP address for each zone-cluster node, two things will occur:
  1. That specific zone cluster will not be able to configure NAS devices for use in the zone cluster. The cluster uses the IP address of the zone-cluster node when communicating with the NAS device, so not having an IP address prevents cluster support for fencing NAS devices.

  2. The cluster software will activate any Logical Host IP address on any NIC.


privnet

This resource can be used in the node scope. This resource specifies the data link device that can be used as the private adapter of the zone cluster. The resource must be available in the global zone before it is assigned to the zone cluster. When an exclusive-IP zone cluster is configured, the enable_priv_net property is set to true by default to enable private network communication between the nodes of the zone cluster. For example:

add node
add privnet
set physical=vnic1
end
add privnet
set physical=vnic5
end
end

The ordering of the privnet resource property is used to form paths between zone-cluster nodes. The first privnet adapter specified in the first node will try to form a path with the first privnet adapter specified in the second node. The ordering of the privnet resource is preserved across add and delete operations.


Note -  The privnet resource cannot be shared among multiple exclusive-IP zones. You must assign it to a specific exclusive-IP zone.
rctl

For more information, see the zonecfg(1M) man page. This resource can be used in both the cluster scope and the node scope. This resource is passed down to the individual Oracle Solaris zone level. When the resource is specified in both cluster and node scope, the node scope resource information is passed down to the Oracle Solaris zone of the specific node of the zone cluster.

sysid

See the sysidcfg(4) man page. This resource specifies the system identification parameters for all zones of the solaris10 brand zone cluster.

Properties

Each resource type has one or more properties. The following properties are supported:

(cluster)

admin

For more information, see the zonecfg(1M) man page.

(cluster)

allowed-address

Specifies the IP addresses that can be plumbed on the adapter. Only specific IP addresses are allowed. This optional property is used for the node scope net resource. For example:

set allowed-address=1.2.2.3/24

For more information, see the zonecfg(1M) man page.

(cluster)

attr

For more information, see the zonecfg(1M) man page. The zone cluster will use the property name set to cluster, property type set to boolean, and property value set to true. These properties are set by default when the zone cluster is configured with the create option. These properties are mandatory for a zone cluster configuration and cannot be changed.

(cluster)

autoboot

For more information, see the zonecfg(1M) man page.

(cluster)

bootargs

For more information, see the zonecfg(1M) man page.

(cluster)

brand

For more information, see the zonecfg(1M) man page. The solaris, solaris10, and labeled brands are the only brand types supported.

(cluster)

cpu-shares

For more information, see the zonecfg(1M) man page.

(cluster)

device

For more information, see the zonecfg(1M) man page.

(cluster)

enable_priv_net

When set to true, Oracle Solaris Cluster private network communication is enabled between the nodes of the zone cluster.

  • If ip-type is set to shared, communication between zone-cluster nodes uses the private networks of the global cluster.

  • If ip-type is set to exclusive, communication between zone-cluster nodes uses the specified privnet resources. You need privnet resources for exclusive-IP zone-cluster configurations, except when the enable_priv_net property is set to false.

The Oracle Solaris Cluster private hostnames and IP addresses for the zone-cluster nodes are automatically generated by the system. The private network is disabled if the value is set to false. The default value is true.


Note -  You cannot change the values of the enable_priv_net property after the zone cluster has been created.
(cluster)

ip-type

For more information, see the zonecfg(1M) man page. shared and exclusive are the only values supported.

(cluster)

limitpriv

For more information, see the zonecfg(1M) man page.

(cluster)

max-lwps

For more information, see the zonecfg(1M) man page.

(cluster)

max-msg-ids

For more information, see the zonecfg(1M) man page.

(cluster)

max-sem-ids

For more information, see the zonecfg(1M) man page.

(cluster)

max-shm-ids

For more information, see the zonecfg(1M) man page.

(cluster)

monitor_quantum

Specifies how often, in milliseconds, monitoring messages are sent to monitor the private network of an exclusive-IP zone cluster. The default quantum value, which is also the minimum value, is 1,000 milliseconds.

(cluster)

monitor_timeout

Specifies the time interval, in milliseconds, that is used to monitor the private network of an exclusive-IP zone cluster. If no monitoring messages are received from the peer zone nodes within this interval, the corresponding path is declared down. The default timeout value is 20,000 milliseconds. This value cannot be reduced to less than 10,000 milliseconds. The value that you specify for monitor_timeout must always be greater than or equal to five times the value that you specify for monitor_quantum.
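
For example, the following illustrative settings satisfy both constraints: the timeout of 20,000 milliseconds is at least five times the quantum (5 x 2,000 = 10,000) and is not below the 10,000-millisecond minimum:

clzc:sczone> set monitor_quantum=2000
clzc:sczone> set monitor_timeout=20000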

(cluster)

max-shm-memory

For more information, see the zonecfg(1M) man page.

(cluster)

pool

For more information, see the zonecfg(1M) man page.

(cluster)

zonename

The name of the zone cluster, as well as the name of each zone in the zone cluster.

(cluster)

zonepath

The zonepath of each zone in the zone cluster.

admin

For more information, see the zonecfg(1M) man page.

capped-cpu

For more information, see the zonecfg(1M) man page.

capped-memory

For more information, see the zonecfg(1M) man page.

dataset

For more information, see the zonecfg(1M) man page.

dedicated-cpu

For more information, see the zonecfg(1M) man page.

device

For more information, see the zonecfg(1M) man page.

fs

For more information, see the zonecfg(1M) man page.

inherit-pkg-dir

For more information, see the zonecfg(1M) man page.

net

For more information, see the zonecfg(1M) man page.

node

Includes physical-host, hostname, and net.

  • physical-host - This property specifies a global cluster node that will host a zone-cluster node.

  • hostname - This property specifies the public host name of the zone-cluster node on the global cluster node specified by the physical-host property.

  • net - This resource specifies a network address and physical interface name for public network communication by the zone-cluster node on the global cluster node specified by physical-host.

rctl

See the zonecfg(1M) man page.

sysid

Use the /usr/bin/sysconfig configure command. See the sysidcfg(4) man page. The sysid resource includes root_password, name_service, security_policy, system_locale, timezone, terminal, and nfs4_domain. The administrator can later manually change any sysidcfg value by following the normal Oracle Solaris procedures, one node at a time.

  • root_password - This property specifies the encrypted value of the common root password for all nodes of the zone cluster. Do not specify a clear-text password; use the encrypted password string from /etc/shadow. This is a required property.

  • name_service - This optional property specifies the naming service to be used in the zone cluster. However, the settings in the global zone's /etc/sysidcfg file might be stale. To ensure that this property has the correct setting, enter the value manually by using the clzonecluster command.

  • security_policy - The value is set to none by default.

  • system_locale - The value is obtained from the environment of the clzonecluster command by default.

  • timezone - This property specifies the time zone to be used in the zone cluster. The value by default is obtained from the environment of the clzonecluster command.

  • terminal - The value is set to xterm by default.

  • nfs4_domain - The value is set to dynamic by default.

Examples

Example 1 Configuration File to Create a Zone Cluster

The following example shows the contents of a command file, sczone-config, that can be used with the clzonecluster utility to create a zone cluster. The file contains the series of clzonecluster commands that you would input manually.

In the following configuration, the zone cluster sczone is created on the global-cluster node phys-schost-1. The zone cluster uses /zones/sczone as the zone path and the public IP address 172.16.2.2. The first node of the zone cluster is assigned the hostname zc-host-1 and uses the network address 172.16.0.1 and the net0 adapter. The second node of the zone cluster is created on the global-cluster node phys-schost-2. This second zone-cluster node is assigned the hostname zc-host-2 and uses the network address 172.16.0.2 and the net1 adapter.

create
set zonepath=/zones/sczone
add net
set address=172.16.2.2
end
add node
set physical-host=phys-schost-1
set hostname=zc-host-1
add net
set address=172.16.0.1
set physical=net0
end
end
add node
set physical-host=phys-schost-2
set hostname=zc-host-2
add net
set address=172.16.0.2
set physical=net1
end
end
commit
exit
Example 2 Creating a Zone Cluster by Using a Configuration File

The following example shows the commands to create the new zone cluster sczone on the global-cluster node phys-schost-1 by using the configuration file sczone-config. The hostnames of the zone-cluster nodes are zc-host-1 and zc-host-2.

phys-schost-1# clzonecluster configure -f sczone-config sczone

phys-schost-1# clzonecluster verify sczone
phys-schost-1# clzonecluster install sczone
Waiting for zone install commands to complete on all the nodes of the 
zone cluster "sczone"...
phys-schost-1# clzonecluster boot sczone
Waiting for zone boot commands to complete on all the nodes of the 
zone cluster "sczone"...
phys-schost-1# clzonecluster status sczone
=== Zone Clusters ===

--- Zone Cluster Status ---

Name      Node Name        Zone HostName    Status    Zone Status
----      ---------        -------------    ------    -----------
sczone    phys-schost-1    zc-host-1        Offline   Running
          phys-schost-2    zc-host-2        Offline   Running

In all the examples below, the zoneclustername is sczone. The first global-cluster node is phys-schost-1 and the second node is phys-schost-2. The first zone-cluster node is zc-host-1 and the second one is zc-host-2.

Example 3 Creating a New Zone Cluster

The following example demonstrates how to create a two-node solaris10 brand zone cluster. A zpool "tank" is delegated to the zone to be used as a highly-available ZFS file system. Memory capping is used to limit the amount of memory that can be used in the zone cluster. Default system identification values are used, except for the root password.

phys-schost-1# clzonecluster configure sczone
sczone: No such zone cluster configured
Use 'create' to begin configuring a new zone cluster.
clzc:sczone> create -b
clzc:sczone> set zonepath=/zones/timuzc
clzc:sczone> set brand=solaris10
clzc:sczone> set autoboot=true
clzc:sczone> set bootargs="-m verbose"
clzc:sczone> set limitpriv="default,proc_priocntl,proc_clock_highres"

clzc:sczone> set enable_priv_net=true
clzc:sczone> set ip-type=shared
clzc:sczone> add dataset
clzc:sczone:dataset> set name=tank
clzc:sczone:dataset> end
clzc:sczone> add capped-memory
clzc:sczone:capped-memory> set physical=3G
clzc:sczone:capped-memory> end
clzc:sczone> add rctl
clzc:sczone:rctl> set name=zone.max-swap
clzc:sczone:rctl> add value (priv=privileged,limit=4294967296,action=deny)

clzc:sczone:rctl> end
clzc:sczone> add rctl
clzc:sczone:rctl> set name=zone.max-locked-memory
clzc:sczone:rctl> add value (priv=privileged,limit=3221225472,action=deny)

clzc:sczone:rctl> end
clzc:sczone> add attr
clzc:sczone:attr> set name=cluster
clzc:sczone:attr> set type=boolean
clzc:sczone:attr> set value=true
clzc:sczone:attr> end
clzc:sczone> add node
clzc:sczone:node> set physical-host=ptimu1
clzc:sczone:node> set hostname=zc-host-1
clzc:sczone:node> add net
clzc:sczone:node:net> set address=vztimu1a
clzc:sczone:node:net> set physical=sc_ipmp0
clzc:sczone:node:net> end
clzc:sczone:node> end
clzc:sczone> add node
clzc:sczone:node> set physical-host=ptimu2
clzc:sczone:node> set hostname=zc-host-2
clzc:sczone:node> add net
clzc:sczone:node:net> set address=vztimu2a
clzc:sczone:node:net> set physical=sc_ipmp0
clzc:sczone:node:net> end
clzc:sczone:node> end
clzc:sczone> add fs
clzc:sczone:fs> set dir=/opt/local
clzc:sczone:fs> set special=/usr/local
clzc:sczone:fs> set type=lofs
clzc:sczone:fs> add options [ro,nodevices]
clzc:sczone:fs> set cluster-control=false
clzc:sczone:fs> end
clzc:sczone> add sysid
clzc:sczone> set root_password=ZiitH.NOLOrRg
clzc:sczone> set name_service="NIS{domain_name=mycompany.com name_server=
 ns101c-90(10.100.10.10)}"
clzc:sczone> set nfs4_domain=dynamic
clzc:sczone> set security_policy=NONE
clzc:sczone> set system_locale=C
clzc:sczone> set terminal=xterms
clzc:sczone> set timezone=US/Pacific
clzc:sczone> end

If you were to use the create subcommand (rather than the create -b subcommand shown above), the default template would be used and it already has the attr properties set.

The zone cluster is now configured. The following commands install and then boot the zone cluster from a global-cluster node:

phys-schost-1# clzonecluster install -a absolute_path_to_archive sczone
phys-schost-1# clzonecluster boot sczone
Example 4 Creating a Zone Cluster from a Unified Archive

The following example demonstrates how to create and install a zone cluster from a unified archive. The unified archive can be created from a global zone, non-global zone, or zone cluster node. Both clone archives and recovery archives are supported for configuring and installing zone clusters from unified archives. If the unified archive is created from a non-clustered zone, you must set the following property: enable_priv_net=true. You should also change any zone property as needed.

phys-schost-1# clzonecluster configure sczone
sczone: No such zone cluster configured
Use 'create' to begin configuring a new zone cluster.
clzc:sczone> create -a absolute_path_to_archive -z archived_zone_1
clzc:sczone> set zonepath=/zones/sczone

clzc:sczone> set enable_priv_net=true
clzc:sczone> set ip-type=shared

clzc:sczone> add attr
clzc:sczone:attr> set name=cluster
clzc:sczone:attr> set type=boolean
clzc:sczone:attr> set value=true
clzc:sczone:attr> end

clzc:sczone> add node
clzc:sczone:node> set physical-host=psoft1
clzc:sczone:node> set hostname=zc-host-1
clzc:sczone:node> add net
clzc:sczone:node:net> set address=vzsoft1a
clzc:sczone:node:net> set physical=sc_ipmp0
clzc:sczone:node:net> end
clzc:sczone:node> end
clzc:sczone> add node
clzc:sczone:node> set physical-host=psoft2
clzc:sczone:node> set hostname=zc-host-2
clzc:sczone:node> add net
clzc:sczone:node:net> set address=vzsoft2a
clzc:sczone:node:net> set physical=sc_ipmp0
clzc:sczone:node:net> end
clzc:sczone:node> end

The zone cluster is now configured. The following command installs the zone cluster from a unified archive on a global-cluster node:

phys-schost-1# clzonecluster install -a absolute_path_to_archive -z archived-zone sczone

The zone cluster is now installed. The following command boots the zone cluster:

phys-schost-1# clzonecluster boot sczone
Example 5 Modifying an Existing Zone Cluster

The following example shows how to modify the configuration of the zone cluster created in Example 1. An additional public IP address is added to the zone-cluster node on phys-schost-2.

A UFS file system is exported to the zone cluster for use as a highly-available file system. It is assumed that the UFS file system is created on an Oracle Solaris Volume Manager metadevice.

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add device
clzc:sczone:device> set match=/dev/md/1/dsk/d100
clzc:sczone:device> end
clzc:sczone> add device
clzc:sczone:device> set match=/dev/md/oraset/dsk/d100
clzc:sczone:device> end
clzc:sczone> select node physical-host=phys-schost-2
clzc:sczone:node> add net
clzc:sczone:node:net> set address=192.168.0.3/24
clzc:sczone:node:net> set physical=bge0
clzc:sczone:node:net> end
clzc:sczone:node> end
clzc:sczone> add fs
clzc:sczone:fs> set dir=/qfs/ora_home
clzc:sczone:fs> set special=oracle_home
clzc:sczone:fs> set type=samfs
clzc:sczone:fs> end
clzc:sczone> exit
Example 6 Creating a New Zone Cluster Using an Existing Zone Cluster as a Template

The following example shows how to create a zone cluster called sczone1, using the sczone zone cluster created in Example 1 as a template. The new zone cluster's configuration will be the same as the original zone cluster. Some properties of the new zone cluster need to be modified to avoid conflicts. When the administrator removes a resource type without specifying a specific resource, the system removes all resources of that type. For example, remove net causes the removal of all net resources.

phys-schost-1# clzonecluster configure sczone1
sczone1: No such zone cluster configured
Use 'create' to begin configuring a new zone cluster.

clzc:sczone1> create -t sczone
clzc:sczone1> set zonepath=/zones/sczone1

clzc:sczone1> select node physical-host=phys-schost-1
clzc:sczone1:node> set hostname=zc-host-3
clzc:sczone1:node> select net address=zc-host-1
clzc:sczone1:node:net> set address=zc-host-3
clzc:sczone1:node:net> end
clzc:sczone1:node> end
clzc:sczone1> select node physical-host=phys-schost-2
clzc:sczone1:node> set hostname=zc-host-4
clzc:sczone1:node> select net address=zc-host-2
clzc:sczone1:node:net> set address=zc-host-4
clzc:sczone1:node:net> end
clzc:sczone1:node> remove net address=192.168.0.3/24
clzc:sczone1:node> end
clzc:sczone1> remove dataset name=tank/home
clzc:sczone1> remove net
clzc:sczone1> remove device
clzc:sczone1> remove fs dir=/qfs/ora_home
clzc:sczone1> exit

Operands

The following operands are supported:

zoneclustername

The name of the zone cluster. You specify the name of the new zone cluster. The zoneclustername operand is supported for all subcommands.

+

All nodes in the cluster. The + operand is supported only for a subset of subcommands.

Exit Status

The complete set of exit status codes for all commands in this command set are listed on the Intro(1CL) man page.

If the command is successful for all specified operands, it returns zero (CL_NOERR). If an error occurs for an operand, the command processes the next operand in the operand list. The returned exit code always reflects the error that occurred first.

This command returns the following exit status codes:

0 CL_NOERR

No error

The command that you issued completed successfully.

1 CL_ENOMEM

Not enough swap space

A cluster node ran out of swap memory or ran out of other operating system resources.

3 CL_EINVAL

Invalid argument

You typed the command incorrectly, or the syntax of the cluster configuration information that you supplied with the –i option was incorrect.

6 CL_EACCESS

Permission denied

The object that you specified is inaccessible. You might need superuser or RBAC access to issue the command. See the su(1M) and rbac(5) man pages for more information.

18 CL_EINTERNAL

Internal error was encountered

An internal error indicates a software defect or other defect.

35 CL_EIO

I/O error

A physical input/output error has occurred.

36 CL_ENOENT

No such object

The object that you specified cannot be found for one of the following reasons: (1) The object does not exist. (2) A directory in the path to the configuration file that you attempted to create with the –o option does not exist. (3) The configuration file that you attempted to access with the –i option contains errors.

37 CL_EOP

Operation not allowed

You tried to perform an operation on an unsupported configuration, or you performed an unsupported operation.

38 CL_EBUSY

Object busy

You attempted to remove a cable from the last cluster interconnect path to an active cluster node. Or, you attempted to remove a node from a cluster configuration from which you have not removed references.

39 CL_EEXIST

Object exists

The device, device group, cluster interconnect component, node, cluster, resource, resource type, resource group, or private string that you specified already exists.

41 CL_ETYPE

Invalid type

The type that you specified with the –t or –p option does not exist.

Attributes

See attributes(5) for descriptions of the following attributes:

ATTRIBUTE TYPE         ATTRIBUTE VALUE
Availability           ha-cluster/system/core
Interface Stability    Evolving

See Also

clnode(1CL), cluster(1CL), Intro(1CL), scinstall(1M), zoneadm(1M), zonecfg(1M), clconfiguration(5CL)

Notes

The superuser can run all forms of this command.

All users can run this command with the –? (help) or –V (version) option.

To run the clzonecluster command with subcommands, users other than superuser require RBAC authorizations. See the following table.

Subcommand     RBAC Authorization
boot           solaris.cluster.admin
check          solaris.cluster.read
clone          solaris.cluster.admin
configure      solaris.cluster.admin
delete         solaris.cluster.admin
export         solaris.cluster.admin
halt           solaris.cluster.admin
install        solaris.cluster.admin
list           solaris.cluster.read
monitor        solaris.cluster.modify
move           solaris.cluster.admin
ready          solaris.cluster.admin
reboot         solaris.cluster.admin
show           solaris.cluster.read
status         solaris.cluster.read
uninstall      solaris.cluster.admin
unmonitor      solaris.cluster.modify
verify         solaris.cluster.admin