Oracle Solaris Cluster Reference Manual, Oracle Solaris Cluster 4.0
NAME
clzonecluster, clzc - create and manage zone clusters
SYNOPSIS
/usr/cluster/bin/clzonecluster [subcommand] -?
/usr/cluster/bin/clzonecluster -V
/usr/cluster/bin/clzonecluster subcommand [options] -v [zoneclustername]
/usr/cluster/bin/clzonecluster boot [-n nodename[,...]] {+ | zoneclustername [...]}
/usr/cluster/bin/clzonecluster clone -Z source-zoneclustername [-m method] [-n nodename[,...]] { zoneclustername }
/usr/cluster/bin/clzonecluster configure [-f commandfile] zoneclustername
/usr/cluster/bin/clzonecluster delete [-F] zoneclustername
/usr/cluster/bin/clzonecluster halt [-n nodename[,...]] {+ | zoneclustername}
/usr/cluster/bin/clzonecluster install [-M manifest] zoneclustername
/usr/cluster/bin/clzonecluster install [-n nodename[,...]] zoneclustername
/usr/cluster/bin/clzonecluster list [+ | zoneclustername [...]]
/usr/cluster/bin/clzonecluster move -f zonepath zoneclustername
/usr/cluster/bin/clzonecluster ready [-n nodename[,...]] {+ | zoneclustername [...]}
/usr/cluster/bin/clzonecluster reboot [-n nodename[,...]] {+ | zoneclustername [...]}
/usr/cluster/bin/clzonecluster show [+ | zoneclustername [...]]
/usr/cluster/bin/clzonecluster status [+ | zoneclustername [...]]
/usr/cluster/bin/clzonecluster uninstall [-F] [-n nodename[,...]] zoneclustername
/usr/cluster/bin/clzonecluster verify [-n nodename[,...]] {+ | zoneclustername [...]}
DESCRIPTION
The clzonecluster command creates and modifies zone clusters for Oracle Solaris Cluster configurations. The clzc command is the short form of the clzonecluster command; the two commands are identical. The clzonecluster command is cluster-aware and supports a single source of administration: you can issue all forms of the command from one node to affect a single zone-cluster node or all nodes.
You can omit subcommand only if options is the -? option or the -V option.
The subcommands require at least one operand, except for the list, show, and status subcommands. However, many subcommands accept the plus sign operand (+) to apply the subcommand to all applicable objects. The clzonecluster command can be run on any node of a cluster and can affect any or all of the zone clusters.
Each option has a long and a short form. Both forms of each option are given with the description of the option in OPTIONS.
SUBCOMMANDS
The following subcommands are supported:
boot
Boots the zone cluster.
The boot subcommand boots the zone cluster. The boot subcommand uses the -n flag to boot the zone cluster for a specified list of nodes. You can use the boot subcommand only from a global-cluster node.
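For example, the following command boots the zone cluster sczone on only one of its nodes; sczone and the host names are the example names used in EXAMPLES:
phys-schost-1# clzonecluster boot -n phys-schost-1 sczone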
clone
Clones the zone cluster.
The clone subcommand clones an existing zone cluster. You must halt the source zone cluster before cloning (see the -m and -Z options). You can use the clone subcommand only from a global-cluster node.
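The following is a minimal sketch of the cloning workflow. It reuses the example names from EXAMPLES and assumes that the target zone cluster sczone1 has already been configured (see Example 3):
phys-schost-1# clzonecluster halt sczone
phys-schost-1# clzonecluster clone -Z sczone -m copy sczone1
phys-schost-1# clzonecluster boot sczone1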
configure
Launches an interactive utility to configure a zone cluster.
The configure subcommand uses the zonecfg command to configure a zone on each specified machine. The configure subcommand lets you specify properties that apply to each node of the zone cluster. These properties have the same meaning as established by the zonecfg command for individual zones. The configure subcommand supports the configuration of properties that are unknown to the zonecfg command.
The configure subcommand launches an interactive shell if you do not specify the -f option. The -f option takes a command file as its argument. The configure subcommand uses this file to create or modify zone clusters non-interactively. You can use the configure subcommand only from a global-cluster node.
Both the interactive and non-interactive forms of the configure command support several subcommands to edit the zone cluster configuration. See zonecfg(1M) for a list of available configuration subcommands.
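For example, a command file contains the same configuration subcommands that you would type interactively. The following sketch assumes a hypothetical command file /var/tmp/sczone.cmd that contains lines such as create, set zonepath=/zones/sczone, commit, and exit:
phys-schost-1# clzonecluster configure -f /var/tmp/sczone.cmd sczone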
The interactive configure utility enables you to create and modify the configuration of a zone cluster. Zone-cluster configuration consists of a number of resource types and properties. The configure utility uses the concept of scope to determine where the subcommand applies. There are three levels of scope that are used by the configure utility: cluster, resource, and node-specific resource. The default scope is cluster. The following list describes the three levels of scope:
Cluster scope – Properties that affect the entire zone cluster. If the zoneclustername is sczone, the interactive shell of the clzonecluster command looks similar to the following:
clzc:sczone>
Node-specific resource scope – A resource scope that is nested inside a node scope. Settings in a node-specific resource scope affect a specific node in the zone cluster. For example, you can add a net resource to a specific node in the zone cluster. The interactive shell of the clzonecluster command looks similar to the following:
clzc:sczone:node:net>
Resource scope – Properties that apply to one specific resource. A resource scope prompt has the name of the resource type appended. For example, the interactive shell of the clzonecluster command looks similar to the following:
clzc:sczone:net>
delete
Removes a specific zone cluster.
This subcommand deletes a specific zone cluster. When you use the wild card operand (*), the delete subcommand removes all zone clusters that are configured on the global cluster. The zone cluster must be in the configured state before you run the delete subcommand. You can use the delete subcommand only from a global-cluster node.
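For example, the following command removes the zone cluster sczone without a confirmation prompt (the -F option is described in OPTIONS); as noted above, the zone cluster must already be in the configured state:
phys-schost-1# clzonecluster delete -F sczone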
halt
Stops a zone cluster or a specific node on the zone cluster.
When you specify a zone cluster, the halt subcommand applies only to that zone cluster. You can halt the entire zone cluster or only specific nodes of a zone cluster. If you do not specify a zone cluster, the halt subcommand applies to all zone clusters. You can also halt all zone clusters on specified machines.
The halt subcommand uses the -n option to halt zone clusters on specific nodes. By default, the halt subcommand stops all zone clusters on all nodes. If you specify the + operand in place of a zone-cluster name, all zone clusters are stopped. You can use the halt subcommand only from a global-cluster node.
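For example, the first command below halts sczone on a single node, and the second halts every zone cluster on all nodes:
phys-schost-1# clzonecluster halt -n phys-schost-2 sczone
phys-schost-1# clzonecluster halt +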
install
Installs a zone cluster.
This subcommand installs a zone cluster. You can use the install subcommand only from a global-cluster node.
If you use the install -M manifest option, the manifest that you specify is used for installation on all nodes of the zone cluster. For more information about the Automated Installer manifest, see Installing Oracle Solaris 11 Systems. A manifest file describes the Oracle Solaris package information that the administrator requires for installation, such as the certificate_file, key_file, publisher, and any additional packages. The manifest must also specify one of the Oracle Solaris Cluster group packages ha-cluster-full, ha-cluster-framework-full, ha-cluster-data-services-full, or ha-cluster-minimal for a zone cluster installation.
If you do not use the -M option (which is the default), the Automated Installer manifest at /usr/share/auto_install/manifest/zone_default.xml is used for the installation. The default ha-cluster-full group package is installed by the Automated Installer. If you use a custom manifest when installing the zone cluster and do not specify an Oracle Solaris Cluster group package, the installation fails.
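For example, the following sketch installs sczone by using a custom Automated Installer manifest; the path /var/tmp/zc-manifest.xml is a hypothetical example:
phys-schost-1# clzonecluster install -M /var/tmp/zc-manifest.xml sczone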
list
Displays the names of configured zone clusters.
This subcommand reports the names of zone clusters that are configured in the cluster.
If you run the list subcommand from a global-cluster node, the subcommand displays a list of all the zone clusters in the global cluster.
If you run the list subcommand from a zone-cluster node, the subcommand displays only the name of the zone cluster.
To see the list of nodes where the zone cluster is configured, use the -v option.
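For example, the following command, run from a global-cluster node, lists every configured zone cluster and the nodes on which each is configured:
phys-schost-1# clzonecluster list -v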
move
Moves the zonepath to a new zonepath.
This subcommand moves the zonepath to a new zonepath. You can use the move subcommand only from a global-cluster node.
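For example, the following sketch moves the zonepath of sczone to a hypothetical new path:
phys-schost-1# clzonecluster move -f /zones/sczone-new sczone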
ready
Prepares the zone for applications.
This subcommand prepares the zone for running applications. You can use the ready subcommand only from a global-cluster node.
reboot
Reboots a zone cluster.
This subcommand reboots the zone cluster and is similar to issuing a halt subcommand, followed by a boot subcommand. See the halt subcommand and the boot subcommand for more information. You can use the reboot subcommand only from a global-cluster node.
show
Displays the properties of zone clusters.
Properties for a zone cluster include the zone cluster name, brand, IP type, node list, and zonepath. If you run the show subcommand from a zone-cluster node, the subcommand applies only to that zone cluster, and the zonepath is always reported as /. If you specify a zone cluster name, the subcommand applies only to that zone cluster.
status
Determines whether the zone-cluster node is a member of the zone cluster.
The zone state can be one of the following: Configured, Installed, Ready, Running, or Shutting Down. The state of all the zone clusters in the global cluster is displayed so that you can see the state of your virtual cluster. You can use the status subcommand only from a global-cluster node.
To check zone activity, use the zoneadm command instead.
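For example, the following command displays the state of each node of sczone:
phys-schost-1# clzonecluster status sczone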
uninstall
Uninstalls a zone cluster.
This subcommand uninstalls a zone cluster. The uninstall subcommand uses the zoneadm command. You can use the uninstall subcommand only from a global-cluster node.
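For example, the following command uninstalls sczone without a confirmation prompt; the zone cluster is assumed to have been halted first:
phys-schost-1# clzonecluster uninstall -F sczone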
verify
Checks that the syntax of the specified information is correct.
This subcommand invokes the zoneadm verify command on each node in the zone cluster to ensure that each zone cluster member can be installed safely. For more information, see zoneadm(1M). You can use the verify subcommand only from a global-cluster node.
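For example:
phys-schost-1# clzonecluster verify sczone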
OPTIONS
Note - The short and long forms of each option are shown in this section.
The following options are supported:
-?
--help
Displays help information.
You can specify this option with or without a subcommand.
If you do not specify a subcommand, the list of all available subcommands is displayed.
If you specify a subcommand, the usage for that subcommand is displayed.
If you specify this option and other options, the other options are ignored.
-f {commandfile | zonepath}
When used with the configure subcommand, the -f option specifies the command file argument. For example, clzonecluster configure -f commandfile. When used with the move subcommand, the -f option specifies the new zonepath.
-F
--force
You can use the -F option during delete and uninstall operations. The -F option forcefully suppresses the "Are you sure you want to do this operation [y/n]?" prompt.
-m method
Use the -m option to specify the method that is used to clone a zone cluster. copy is the only valid cloning method. Before you run the clone subcommand, you must halt the source zone cluster.
-M manifest
Use the -M option to specify an Automated Installer manifest for all nodes of the zone cluster. The manifest specifies the Oracle Solaris package information and the Oracle Solaris Cluster group package for a zone cluster installation.
-n nodename[,...]
--node=nodename[,...]
Specifies the node list for the subcommand.
For example, clzonecluster boot -n phys-schost-1,phys-schost-2 zoneclustername.
-v
--verbose
Displays verbose information on standard output (stdout).
-V
--version
Displays the version of the command.
If you specify this option with other options, with subcommands, or with operands, they are all ignored. Only the version of the command is displayed. No other processing occurs.
-Z source-zoneclustername
Specifies the name of the source zone cluster that you want to clone. You must halt the source zone cluster before you use the clone subcommand.
RESOURCES AND PROPERTIES
The clzonecluster command supports several resources and properties for zone clusters.
The following lists the resource types that are supported in the resource scope and where to find more information:
See zonecfg(1M).
See zonecfg(1M).
dataset
See zonecfg(1M). Use this resource to export a ZFS data set to be used in the zone cluster for a highly-available ZFS file system. The exported data set is managed by the Oracle Solaris Cluster software, and is not passed down to the individual Oracle Solaris zone level when specified in the cluster scope. A data set cannot be shared between zone clusters.
dedicated-cpu
See zonecfg(1M). You can use a fixed number of CPUs that are dedicated to the zone cluster on each node.
device
See zonecfg(1M). You can add a device to only one zone cluster.
fs
See zonecfg(1M). Use this resource to export a file system to be used in the zone cluster. A file system can be exported to a zone cluster using either a direct mount or a loopback mount.
A direct mount makes the file system accessible inside the zone cluster by mounting the specified file system at a location that is under the root of the zone, or some subdirectory that has the zone root in its path. A direct mount means that the file system belongs exclusively to this zone cluster.
Zone clusters support direct mounts for UFS, QFS standalone file system, QFS shared file system, and ZFS (exported as a data set).
A loopback mount is a mechanism for making a file system already mounted in one location appear to be mounted in another location. You can export a single file system to multiple zone clusters through the use of one loopback mount per zone cluster. This makes it possible to share a single file system between multiple zone clusters. The administrator must consider the security implications before sharing a file system between multiple zone clusters. Regardless of how the real file system is mounted, the loopback mount can restrict access to read-only.
The cluster-control property applies only to loopback mounts. The default value for the cluster-control property is true.
When the property value is true, Oracle Solaris Cluster manages this file system and does not pass the file system information to the zonecfg command. Oracle Solaris Cluster mounts and unmounts the file system in the zone cluster node as needed after the zone boots.
Oracle Solaris Cluster can manage loopback mounts for QFS shared file systems, UFS, QFS standalone file systems, and PxFS on UFS.
When the property value is false, Oracle Solaris Cluster does not manage the file system. The cluster software passes this file system information and all associated information to the zonecfg command, which creates the zone cluster zone on each machine. In this case, the Oracle Solaris software mounts the file system when the zone boots. The administrator can use this option with the UFS file system.
The administrator can specify a loopback mount in the cluster scope. Configuring the loopback mount with a cluster-control property value of false is useful for read-only mounts of common local directories (such as directories that contain executable files). This information is passed to the zonecfg command, which performs the actual mounts. Configuring the loopback mount with a cluster-control property value of true is useful for making the global file systems (PxFS) or shared QFS file systems available to a zone cluster that is under cluster control.
A QFS shared file system, UFS file system, QFS standalone file system, or ZFS file system can be configured in at most one zone cluster.
net
See zonecfg(1M) for information about net resources.
Any net resource managed by Oracle Solaris Cluster, such as Logical Host or Shared Address, is specified in the cluster scope. Any net resource managed by an application, such as an Oracle RAC VIP, is specified in the cluster scope. These net resources are not passed to the individual Oracle Solaris zone level.
The administrator can specify the Network Interface Card (NIC) to use with the specified IP address. If the administrator does not specify a NIC, the system automatically selects a NIC that satisfies the following two requirements:
The NIC already connects to the same subnet.
The NIC has been configured for this zone cluster.
node
The node resource serves the following two purposes:
Identifies a scope level. Any resource specified in a node scope belongs exclusively to this specific node.
Identifies a node of the zone cluster. The administrator identifies the machine where the zone will run by identifying the global-cluster global zone on that machine. The administrator also specifies the network information for reaching this node; specifying an IP address and NIC for each zone-cluster node is optional.
Note - If the administrator does not configure an IP address for each zone cluster node, two things will occur:
That specific zone cluster will not be able to configure NAS devices for use in the zone cluster. The cluster uses the IP address of the zone cluster node when communicating with the NAS device, so not having an IP address prevents cluster support for fencing NAS devices.
The cluster software will activate any Logical Host IP address on any NIC.
See zonecfg(1M).
Each resource type has one or more properties. The following properties are supported for cluster:
attr
See zonecfg(1M). The zone cluster uses an attr resource with the name property set to cluster, the type property set to boolean, and the value property set to true. This resource is set by default when the zone cluster is configured with the create command. These properties are mandatory for a zone cluster configuration and cannot be changed.
zonename
The name of the zone cluster, as well as the name of each zone in the zone cluster.
zonepath
The zonepath of each zone in the zone cluster.
autoboot
See zonecfg(1M).
bootargs
See zonecfg(1M).
limitpriv
See zonecfg(1M).
brand
See zonecfg(1M). solaris is the only brand type supported.
ip-type
See zonecfg(1M). shared is the only value supported.
pool
See zonecfg(1M).
cpu-shares
See zonecfg(1M).
max-lwps
See zonecfg(1M).
max-msg-ids
See zonecfg(1M).
max-sem-ids
See zonecfg(1M).
max-shm-ids
See zonecfg(1M).
max-shm-memory
See zonecfg(1M).
enable_priv_net
When set to true, Oracle Solaris Cluster private network communication is enabled between the nodes of the zone cluster. The Oracle Solaris Cluster private hostnames and IP addresses for the zone-cluster nodes are automatically generated by the system. Private network communication is disabled if the value is set to false. The default value is true.
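For example, the following minimal sketch disables private network communication for the example zone cluster from the cluster scope of the configure utility:
clzc:sczone> set enable_priv_net=false
clzc:sczone> commit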
The following properties are supported for the fs resource:
dir
See zonecfg(1M).
special
See zonecfg(1M).
raw
See zonecfg(1M).
type
See zonecfg(1M).
options
See zonecfg(1M).
The following properties are supported for the net resource:
address
See zonecfg(1M).
physical
See zonecfg(1M).
The following property is supported for the device resource:
match
See zonecfg(1M).
The following property is supported for the dataset resource:
name
See zonecfg(1M).
The following properties are supported for the node resource: physical-host, hostname, and net.
physical-host – This property specifies a global cluster node that will host a zone cluster node.
hostname – This property specifies the public host name of the zone cluster node on the global cluster node specified by the physical-host property.
net – This resource specifies a network address and physical interface name for public network communication by the zone cluster node on the global cluster node specified by physical-host.
EXAMPLES
In all the examples, the zoneclustername is sczone. The first global-cluster node is phys-schost-1 and the second node is phys-schost-2. The first zone-cluster node is zc-host-1 and the second is zc-host-2.
Example 1 Creating a New Zone Cluster
The following example demonstrates how to create a two-node zone cluster that is composed of whole-root zones. The /usr/local directory contains only executable files, is loopback mounted read-only into the zone-cluster nodes as /opt/local, and is managed by the Oracle Solaris software. Two IP addresses are exported to the zone cluster for use as highly-available IP addresses. A ZFS data set is exported to the zone cluster for use as a highly-available ZFS file system. Memory capping is used to limit the amount of memory that can be used in the zone cluster. The proc_priocntl and proc_clock_highres privileges are added to the zone cluster to enable Oracle RAC to run. Default system identification values are used, except for the root password.
A UFS file system is exported to the zone cluster for use as a highly-available file system. It is assumed that the UFS file system is created on an Oracle Solaris Volume Manager metadevice.
phys-schost-1# clzonecluster configure sczone
sczone: No such zone cluster configured
Use 'create' to begin configuring a new zone cluster.
clzc:sczone> create
clzc:sczone> set zonepath=/zones/sczone
clzc:sczone> set limitpriv="default,proc_priocntl,proc_clock_highres"
clzc:sczone> add node
clzc:sczone:node> set physical-host=phys-schost-1
clzc:sczone:node> set hostname=zc-host-1
clzc:sczone:node> add net
clzc:sczone:node:net> set address=zc-host-1
clzc:sczone:node:net> set physical=bge0
clzc:sczone:node:net> end
clzc:sczone:node> end
clzc:sczone> add node
clzc:sczone:node> set physical-host=phys-schost-2
clzc:sczone:node> set hostname=zc-host-2
clzc:sczone:node> add net
clzc:sczone:node:net> set address=zc-host-2
clzc:sczone:node:net> set physical=bge0
clzc:sczone:node:net> end
clzc:sczone:node> end
clzc:sczone> add net
clzc:sczone:net> set address=192.168.0.1
clzc:sczone:net> end
clzc:sczone> add net
clzc:sczone:net> set address=192.168.0.2
clzc:sczone:net> end
clzc:sczone> add fs
clzc:sczone:fs> set dir=/opt/local
clzc:sczone:fs> set special=/usr/local
clzc:sczone:fs> set type=lofs
clzc:sczone:fs> add options [ro,nodevices]
clzc:sczone:fs> set cluster-control=false
clzc:sczone:fs> end
clzc:sczone> add dataset
clzc:sczone:dataset> set name=tank/home
clzc:sczone:dataset> end
clzc:sczone> add capped-memory
clzc:sczone:capped-memory> set physical=3G
clzc:sczone:capped-memory> set swap=4G
clzc:sczone:capped-memory> set locked=3G
clzc:sczone:capped-memory> end
clzc:sczone> add fs
clzc:sczone:fs> set dir=/data/ha-data
clzc:sczone:fs> set special=/dev/md/ha-set/dsk/d10
clzc:sczone:fs> set raw=/dev/md/ha-set/rdsk/d10
clzc:sczone:fs> set type=ufs
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit
The zone cluster is now configured. The following commands install and then boot the zone cluster from a global-cluster node:
phys-schost-1# clzonecluster install sczone
phys-schost-1# clzonecluster boot sczone
Example 2 Modifying an Existing Zone Cluster
The following example shows how to modify the configuration of the zone cluster that was created in Example 1. A multi-owner Solaris Volume Manager for Oracle Solaris Cluster metadevice is added to the zone cluster. The set number of the metaset is 1, and the set name is oraset. An additional public IP address is added to the zone-cluster node on phys-schost-2. A shared QFS file system is also added to the configuration. Note that the special property of a shared QFS file system must be set to the QFS family set name that is defined in the mcf file. The raw property must be left unspecified.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> add device
clzc:sczone:device> set match=/dev/md/1/dsk/d100
clzc:sczone:device> end
clzc:sczone> add device
clzc:sczone:device> set match=/dev/md/oraset/dsk/d100
clzc:sczone:device> end
clzc:sczone> select node physical-host=phys-schost-2
clzc:sczone:node> add net
clzc:sczone:node:net> set address=192.168.0.3/24
clzc:sczone:node:net> set physical=bge0
clzc:sczone:node:net> end
clzc:sczone:node> end
clzc:sczone> add fs
clzc:sczone:fs> set dir=/qfs/ora_home
clzc:sczone:fs> set special=oracle_home
clzc:sczone:fs> set type=samfs
clzc:sczone:fs> end
clzc:sczone> exit
Example 3 Creating a New Zone Cluster Using an Existing Zone Cluster as a Template
The following example shows how to create a zone cluster called sczone1, using the sczone zone cluster created in Example 1 as a template. The new zone cluster's configuration will be the same as that of the original zone cluster. Some properties of the new zone cluster must be modified to avoid conflicts. When the administrator removes a resource type without specifying a specific resource, the system removes all resources of that type. For example, remove net causes the removal of all net resources.
phys-schost-1# clzonecluster configure sczone1
sczone1: No such zone cluster configured
Use 'create' to begin configuring a new zone cluster.
clzc:sczone1> create -t sczone
clzc:sczone1> set zonepath=/zones/sczone1
clzc:sczone1> select node physical-host=phys-schost-1
clzc:sczone1:node> set hostname=zc-host-3
clzc:sczone1:node> select net address=zc-host-1
clzc:sczone1:node:net> set address=zc-host-3
clzc:sczone1:node:net> end
clzc:sczone1:node> end
clzc:sczone1> select node physical-host=phys-schost-2
clzc:sczone1:node> set hostname=zc-host-4
clzc:sczone1:node> select net address=zc-host-2
clzc:sczone1:node:net> set address=zc-host-4
clzc:sczone1:node:net> end
clzc:sczone1:node> remove net address=192.168.0.3/24
clzc:sczone1:node> end
clzc:sczone1> remove dataset name=tank/home
clzc:sczone1> remove net
clzc:sczone1> remove device
clzc:sczone1> remove fs dir=/qfs/ora_home
clzc:sczone1> exit
OPERANDS
The following operands are supported:
zoneclustername
The name of the zone cluster. For subcommands that create a zone cluster, you specify the name of the new zone cluster. The zoneclustername operand is supported for all subcommands.
+
All zone clusters. The + operand is supported only for a subset of subcommands.
EXIT STATUS
The complete set of exit status codes for all commands in this command set is listed on the Intro(1CL) man page.
If the command is successful for all specified operands, it returns zero (CL_NOERR). If an error occurs for an operand, the command processes the next operand in the operand list. The returned exit code always reflects the error that occurred first.
This command returns the following exit status codes:
0 CL_NOERR
No error.
The command that you issued completed successfully.
1 CL_ENOMEM
Not enough swap space.
A cluster node ran out of swap memory or ran out of other operating system resources.
3 CL_EINVAL
Invalid argument.
You typed the command incorrectly, or the syntax of the cluster configuration information that you supplied with the -i option was incorrect.
18 CL_EINTERNAL
Internal error was encountered.
36 CL_ENOENT
No such object.
The object that you specified cannot be found for one of the following reasons:
The object does not exist.
A directory in the path to the configuration file that you attempted to create with the -o option does not exist.
The configuration file that you attempted to access with the -i option contains errors.
35 CL_EOP
Operation not allowed.
You tried to perform an operation on an unsupported configuration, or you performed an unsupported operation.
ATTRIBUTES
See attributes(5) for descriptions of the following attributes:
SEE ALSO
Intro(1CL), cluster(1CL), clnode(1CL), scinstall(1M), zoneadm(1M), zonecfg(1M)
NOTES
The superuser can run all forms of this command.
All users can run this command with the -? (help) or -V (version) option.
To run the clzonecluster command with subcommands, users other than superuser require RBAC authorizations.