clzonecluster, clzc - create and manage zone clusters

Synopsis

/usr/cluster/bin/clzonecluster [subcommand] -?
/usr/cluster/bin/clzonecluster -V
/usr/cluster/bin/clzonecluster subcommand [options] -v [zoneclustername]
/usr/cluster/bin/clzonecluster boot [-n nodename[,...]] {+ | zoneclustername [...]}
/usr/cluster/bin/clzonecluster clone -Z target-zoneclustername [-m method] 
[-n nodename[,...]] { source-zoneclustername }
/usr/cluster/bin/clzonecluster configure [-f commandfile] zoneclustername
/usr/cluster/bin/clzonecluster delete [-F] zoneclustername
/usr/cluster/bin/clzonecluster export [-f commandfile] zoneclustername
/usr/cluster/bin/clzonecluster halt [-n nodename[,...]] {+ | zoneclustername}
/usr/cluster/bin/clzonecluster install [-c config_profile.xml] [-n nodename[,...]] 
zoneclustername
/usr/cluster/bin/clzonecluster list [+ | zoneclustername [...]]
/usr/cluster/bin/clzonecluster move -f zonepath zoneclustername
/usr/cluster/bin/clzonecluster ready [-n nodename[,...]] {+ | zoneclustername [...]}
/usr/cluster/bin/clzonecluster reboot [-n nodename[,...]] {+ | zoneclustername [...]}
/usr/cluster/bin/clzonecluster status [+ | zoneclustername [...]]
/usr/cluster/bin/clzonecluster uninstall [-F] [-n nodename[,...]] zoneclustername
/usr/cluster/bin/clzonecluster verify [-n nodename[,...]] {+ | zoneclustername [...]}

Description

The clzonecluster command creates and modifies zone clusters for Oracle Solaris Cluster configurations. The clzc command is the short form of the clzonecluster command; the commands are identical. The clzonecluster command is cluster-aware and supports a single source of administration. You can issue all forms of the command from one node to affect a single zone-cluster node or all nodes.

You can omit subcommand only if options specifies the -? or -V option.

The subcommands require at least one operand, except for the list, show, and status subcommands. However, many subcommands accept the plus sign operand (+) to apply the subcommand to all applicable objects. The clzonecluster commands can be run on any node of a zone cluster and can affect any or all of the zone cluster.

Each option has a long and a short form. Both forms of each option are given with the description of the option in the Options section.

Subcommands

The following subcommands are supported:

boot

Boots the zone cluster.

The boot subcommand boots the zone cluster. The boot subcommand uses the -n flag to boot the zone cluster for a specified list of nodes. You can use the boot subcommand only from a global-cluster node.
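
For example, assuming the sczone zone cluster and global-cluster node names used in the Examples section, the following commands boot the entire zone cluster and then boot only the zone-cluster node hosted on phys-schost-1:

phys-schost-1# clzonecluster boot sczone
phys-schost-1# clzonecluster boot -n phys-schost-1 sczone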

clone

Clones the zone cluster.

The clone command installs a zone cluster by copying an existing installed zone cluster. This subcommand is an alternative to installing a zone cluster. The clone subcommand does not itself create the new zone cluster; you must first use the configure subcommand to create the new zone cluster, then use the clone subcommand to apply the cloned configuration to it. Ensure that the source zone cluster used for cloning is in the Installed state (not running) before you clone it.

You can use the clone subcommand only from a global-cluster node.
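
The following sketch illustrates the sequence with a hypothetical target zone cluster named sczone-copy: halt the source, configure the new zone cluster, then clone it from the source:

phys-schost-1# clzonecluster halt sczone
phys-schost-1# clzonecluster configure sczone-copy
phys-schost-1# clzonecluster clone -Z sczone-copy -m copy sczone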

configure

Launches an interactive utility to configure a cluster brand zone cluster.

The configure subcommand uses the zonecfg command to configure a zone on each specified machine. The configure subcommand lets you specify properties that apply to each node of the zone cluster. These properties have the same meaning as established by the zonecfg command for individual zones. The configure subcommand supports the configuration of properties that are unknown to the zonecfg command.

The configure subcommand launches an interactive shell if you do not specify the -f option. The -f option takes a command file as its argument. The configure subcommand uses this file to create or modify zone clusters non-interactively.

You can use the configure subcommand only from a global-cluster node. For more information, see Oracle Solaris Cluster Software Installation Guide.

Both the interactive and non-interactive forms of the configure command support several subcommands to edit the zone cluster configuration. See zonecfg(1M) for a list of available configuration subcommands.

The interactive configure utility enables you to create and modify the configuration of a zone cluster. Zone-cluster configuration consists of a number of resource types and properties. The configure utility uses the concept of scope to determine where the subcommand applies. There are three levels of scope that are used by the configure utility: cluster, resource, and node-specific resource. The default scope is cluster. The following list describes the three levels of scope:

  • Cluster scope – Properties that affect the entire zone cluster. If the zoneclustername is sczone, the interactive shell of the clzonecluster command looks similar to the following:

    clzc:sczone>
  • Node-specific resource scope – A special resource scope that is nested inside the node resource scope. Settings inside the node-specific resource scope affect a specific node in the zone cluster. For example, you can add a net resource to a specific node in the zone cluster. The interactive shell of the clzonecluster command looks similar to the following:

    clzc:sczone:node:net>
  • Resource scope – Properties that apply to one specific resource. A resource scope prompt has the name of the resource type appended. For example, the interactive shell of the clzonecluster command looks similar to the following:

    clzc:sczone:net>
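
For non-interactive use, supply a command file with the -f option. A minimal sketch, assuming a hypothetical file /tmp/sczone.cfg that contains configuration subcommands such as those shown in Example 4:

phys-schost-1# clzonecluster configure -f /tmp/sczone.cfg sczone
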
delete

Removes a specific zone cluster.

This subcommand deletes a specific zone cluster. When you use a wild card operand (*), the delete command removes the zone clusters that are configured on the global cluster. The zone cluster must be in the configured state before you run the delete subcommand. You can use the delete subcommand only from a global-cluster node.
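
For example, the following command removes the sczone zone cluster without a confirmation prompt, assuming the zone cluster is already in the Configured state:

phys-schost-1# clzonecluster delete -F sczone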

export

Exports the zone cluster configuration into a command file.

The exported commandfile can be used as the input for the configure subcommand. Modify the file as needed to reflect the configuration that you want to create. See the clconfiguration(5CL) man page for more information.

You can use the export subcommand only from a global-cluster node.
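
For example, the following sketch exports the sczone configuration to a hypothetical file that can later be edited and supplied to configure -f:

phys-schost-1# clzonecluster export -f /tmp/sczone.cfg sczone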

halt

Stops a zone cluster or a specific node on the zone cluster.

When you specify a specific zone cluster, the halt subcommand applies only to that specific zone cluster. You can halt the entire zone cluster or just halt specific nodes of a zone cluster. If you do not specify a zone cluster, the halt subcommand applies to all zone clusters. You can also halt all zone clusters on specified machines.

The halt subcommand uses the -n option to halt zone clusters on specific nodes. By default, the halt subcommand stops all zone clusters on all nodes. If you specify the + operand in place of a zone name, all the zone clusters are stopped. You can use the halt subcommand only from a global-cluster node.
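
For example, the following commands halt only the sczone node hosted on phys-schost-1, and then halt all zone clusters:

phys-schost-1# clzonecluster halt -n phys-schost-1 sczone
phys-schost-1# clzonecluster halt +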

install

Installs a zone cluster.

This subcommand installs a zone cluster. You can use the install subcommand only from a global-cluster node.

list

Displays the names of configured zone clusters.

This subcommand reports the names of zone clusters that are configured in the cluster.

  • If you run the list subcommand from a global-cluster node, the subcommand displays a list of all the zone clusters in the global cluster.

  • If you run the list subcommand from a zone-cluster node, the subcommand displays only the name of the zone cluster.

To see the list of nodes where the zone cluster is configured, use the -v option.
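
For example, the following commands list all configured zone clusters and then list, in verbose form, the nodes on which sczone is configured:

phys-schost-1# clzonecluster list
phys-schost-1# clzonecluster list -v sczone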

move

Moves the zonepath to a new zonepath.

This subcommand moves the zonepath to a new zonepath. You can use the move subcommand only from a global-cluster node.
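
A minimal sketch, assuming a hypothetical new zone path /zones/sczone-new:

phys-schost-1# clzonecluster move -f /zones/sczone-new sczone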

ready

Prepares the zone for applications.

This subcommand prepares the zone for running applications. You can use the ready subcommand only from a global-cluster node.

reboot

Reboots a zone cluster.

This subcommand reboots the zone cluster and is similar to issuing a halt subcommand, followed by a boot subcommand. See the halt subcommand and the boot subcommand for more information. You can use the reboot subcommand only from a global-cluster node.
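
For example, the following commands reboot all nodes of sczone, and then reboot only the node hosted on phys-schost-2:

phys-schost-1# clzonecluster reboot sczone
phys-schost-1# clzonecluster reboot -n phys-schost-2 sczone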

status

Determines whether the zone-cluster node is a member of the zone cluster.

The zone state can be one of the following: Configured, Installed, Ready, Running, and Shutting Down. The state of all the zone clusters in the global cluster is displayed so you can see the state of your virtual cluster. You can use the status subcommand only from a global-cluster node.

To check zone activity, use the zoneadm command instead.
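
For example, the following command reports the state of each node of the sczone zone cluster:

phys-schost-1# clzonecluster status sczone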

uninstall

Uninstalls a zone cluster.

This subcommand uninstalls a zone cluster. The uninstall subcommand uses the zoneadm command. You can use the uninstall subcommand only from a global-cluster node.

verify

Checks that the syntax of the specified information is correct.

This subcommand invokes the zoneadm verify command on each node in the zone cluster to ensure that each zone cluster member can be installed safely. For more information, see zoneadm(1M). You can use the verify subcommand only from a global-cluster node.
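
For example, the following command verifies that sczone can be installed safely on each of its nodes:

phys-schost-1# clzonecluster verify sczone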

Options


Note - The short and long forms of each option are shown in this section.


The following options are supported:

-?
--help

Displays help information.

You can specify this option with or without a subcommand.

If you do not specify a subcommand, the list of all available subcommands is displayed.

If you specify a subcommand, the usage for that subcommand is displayed.

If you specify this option and other options, the other options are ignored.

-f {commandfile | zonepath}
--file-argument {commandfile | zonepath}

When used with the configure subcommand, the -f option specifies the command file argument. For example, clzonecluster configure -f commandfile. When used with the move subcommand, the -f option specifies the zonepath.

-F
--force

You can use the -F option during delete and uninstall operations. The -F option suppresses the Are you sure you want to do this operation [y/n]? confirmation prompt.

-m method
--method copymethod

Use the method option to clone a zone cluster. The only valid method for cloning is the copy command. Before you run the clone subcommand, you must halt the source zone cluster.

-n nodename[,…]
--nodelist nodename[,…]

Specifies the node list for the subcommand.

For example, clzonecluster boot -n phys-schost-1,phys-schost-2 zoneclustername.

-v
--verbose

Displays verbose information on the standard output (stdout).

-V
--version

Displays the version of the command.

If you specify this option with other options, with subcommands, or with operands, they are all ignored. Only the version of the command is displayed. No other processing occurs.

-Z target-zoneclustername
--zonecluster target-zoneclustername

Specifies the name of the new zone cluster that is created by the clone operation.

Specify the name of the source zone cluster as the operand of the clone subcommand. The source zone cluster must be halted before you use this subcommand.

Resources and Properties

The clzonecluster command supports several resources and properties for zone clusters.

You must use the clzonecluster command to configure any resources and properties that are supported by the clzonecluster command. See the zonecfg(1M) man page for more information on configuring resources or properties that are not supported by the clzonecluster command.

The following subsections, Resources and Properties, describe those resources and properties that are supported by the clzonecluster command.

Resources

The following lists the resource types that are supported in the resource scope and where to find more information:

capped-cpu

For more information, see the zonecfg(1M) man page. This resource can be used in both the cluster scope and the node scope. This resource is passed down to the individual Oracle Solaris zone level. When the resource is specified in both cluster and node scope, the node scope resource information is passed down to the Oracle Solaris zone of the specific node of the zone cluster.

capped-memory

For more information, see the zonecfg(1M) man page. This resource can be used in the cluster scope and the node scope. This resource is passed down to the individual Oracle Solaris zone level. When the resource is specified in both cluster and node scope, the node scope resource information is passed down to the Oracle Solaris zone of the specific node of the zone cluster.

dataset

For more information, see the zonecfg(1M) man page. This resource can be used in the cluster scope or the node scope. You cannot specify a data set in both cluster and node scope.

The resource in cluster scope is used to export a ZFS data set to be used in the zone cluster for a highly-available ZFS file system. The exported data set is managed by the Oracle Solaris Cluster software, and is not passed down to the individual Oracle Solaris zone level when specified in the cluster scope. A data set cannot be shared between zone clusters.

The resource in node scope is used to export a local ZFS dataset to a specific zone-cluster node. The exported data set is not managed by the Oracle Solaris Cluster software, and is passed down to the individual Oracle Solaris zone level when specified in the node scope.
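
The following configure sketch shows both usages, reusing the data set names from the Examples section: tank/home is exported in the cluster scope for highly-available use, and localpool/home is exported in the node scope to the zone-cluster node hosted on phys-schost-2.

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add dataset
clzc:sczone:dataset> set name=tank/home
clzc:sczone:dataset> end
clzc:sczone> select node physical-host=phys-schost-2
clzc:sczone:node> add dataset
clzc:sczone:node:dataset> set name=localpool/home
clzc:sczone:node:dataset> end
clzc:sczone:node> end
clzc:sczone> exit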

dedicated-cpu

For more information, see the zonecfg(1M) man page. You can use a fixed number of CPUs that are dedicated to the zone cluster on each node.

This resource can be used in the cluster scope and the node scope. This resource is passed down to the individual Oracle Solaris zone level. When the resource is specified in both cluster and node scope, the node scope resource information is passed down to the Oracle Solaris zone of the specific node of the zone cluster.

device

For more information, see the zonecfg(1M) man page. This resource is passed down to the individual Oracle Solaris zone level and can be specified in the cluster scope or the node scope. The resource in the node scope is used to add devices specific to a zone-cluster node. You can add a device to only one zone cluster. You cannot add the same device to both the cluster scope and the node scope.

fs

For more information, see the zonecfg(1M) man page. You can specify this resource in the cluster scope or the node scope. You cannot specify the fs resource in both cluster and node scope.

The resource in cluster scope is generally used to export a file system to be used in the zone cluster. The exported file system is managed by the Oracle Solaris Cluster software, and is not passed down to the individual Oracle Solaris zone level, except for an lofs file system with the cluster-control property set to false. For more information about the cluster-control property, see the description for fs in the Resources section of this man page.

The resource in node scope is used to export a local file system to a specific zone-cluster node. The exported file system is not managed by the Oracle Solaris Cluster software, and is passed down to the individual Oracle Solaris zone level when specified in the node scope.

You can export a file system to a zone cluster by using either a direct mount or a loopback mount. A direct mount makes the file system accessible inside the zone cluster by mounting the specified file system at a location that is under the root of the zone, or some subdirectory that has the zone root in its path. A direct mount means that the file system belongs exclusively to this zone cluster. When a zone cluster runs on Oracle Solaris Trusted Extensions, the use of direct mounts is mandatory for files mounted with both read and write privileges.

Zone clusters support direct mounts for UFS, QFS standalone file system, QFS shared file system, Oracle ASM Cluster file system (ACFS), and ZFS (exported as a data set).

A loopback mount is a mechanism for making a file system already mounted in one location appear to be mounted in another location. You can export a single file system to multiple zone clusters through the use of one loopback mount per zone cluster. This makes it possible to share a single file system between multiple zone clusters. The administrator must consider the security implications before sharing a file system between multiple zone clusters. Regardless of how the real file system is mounted, the loopback mount can restrict access to read-only.

fs: cluster-control

The cluster-control property applies only to loopback mounts specified in the cluster scope. The default value for the cluster-control property is true.

When the property value is true, Oracle Solaris Cluster manages this file system and does not pass the file system information to the zonecfg command. Oracle Solaris Cluster mounts and unmounts the file system in the zone-cluster node as needed after the zone boots.

Oracle Solaris Cluster can manage loopback mounts for QFS shared file systems, UFS, QFS standalone file systems, and PxFS on UFS.

When the property value is false, Oracle Solaris Cluster does not manage the file system. The cluster software passes this file system information and all associated information to the zonecfg command, which creates the zone cluster zone on each machine. In this case, the Oracle Solaris software mounts the file system when the zone boots. The administrator can use this option with the UFS file systems.

The administrator can specify a loopback mount in the cluster scope. Configuring the loopback mount with a cluster-control property value of false is useful for read-only mounts of common local directories (such as directories that contain executable files). This information is passed to the zonecfg command, which performs the actual mounts. Configuring the loopback mount with a cluster-control property value of true is useful for making the global file systems (PxFS) or shared QFS file systems available to a zone cluster that is under cluster control.

QFS shared file systems, UFS, VxFS, QFS standalone file systems, and ZFS are configured in at most one zone cluster.

inherit-pkg-dir

See zonecfg(1M).

net

For more information about net resources, see the zonecfg(1M) man page.

Any net resource managed by Oracle Solaris Cluster, such as Logical Host or Shared Address, is specified in the cluster scope. Any net resource managed by an application, such as an Oracle RAC VIP, is specified in the cluster scope. These net resources are not passed to the individual Oracle Solaris zone level.

The administrator can specify the Network Interface Card (NIC) to use with the specified IP address. If the administrator does not specify a NIC, the system automatically selects a NIC that satisfies the following two requirements:

  • The NIC already connects to the same subnet.

  • The NIC has been configured for this zone cluster.

node

The node resource serves the following two purposes:

  • Identifies a scope level. Any resource specified in a node scope belongs exclusively to this specific node.

  • Identifies a node of the zone cluster. The administrator identifies the machine where the zone will run by identifying the global cluster global zone on that machine. Specifying an IP address and NIC for each zone cluster node is optional. The administrator also specifies network information for reaching this node.


Note - If the administrator does not configure an IP address for each zone cluster node, two things will occur:

  1. That specific zone cluster will not be able to configure NAS devices for use in the zone cluster. The cluster uses the IP address of the zone cluster node when communicating with the NAS device, so not having an IP address prevents cluster support for fencing NAS devices.

  2. The cluster software will activate any Logical Host IP address on any NIC.


rctl

For more information, see the zonecfg(1M) man page. This resource can be used in both the cluster scope and the node scope. This resource is passed down to the individual Oracle Solaris zone level. When the resource is specified in both cluster and node scope, the node scope resource information is passed down to the Oracle Solaris zone of the specific node of the zone cluster.

sysid

For more information, see the sysidcfg(4) man page. This resource specifies the system identification parameters for all zones of the zone cluster.

Properties

Each resource type has one or more properties. The following properties are supported for cluster:

(cluster)

autoboot

For more information, see the zonecfg(1M) man page.

(cluster)

bootargs

For more information, see the zonecfg(1M) man page.

(cluster)

brand

For more information, see the zonecfg(1M) man page. cluster is the only brand type supported for a zone cluster.

(cluster)

cpu-shares

For more information, see the zonecfg(1M) man page.

(cluster)

enable_priv_net

When set to true, Oracle Solaris Cluster private network communication is enabled between the nodes of the zone cluster. The Oracle Solaris Cluster private hostnames and IP addresses for the zone cluster nodes are automatically generated by the system. Private network is disabled if the value is set to false. The default value is true.

(cluster)

ip-type

For more information, see the zonecfg(1M) man page. shared is the only value supported.

(cluster)

limitpriv

For more information, see the zonecfg(1M) man page.

(cluster)

max-lwps

For more information, see the zonecfg(1M) man page.

(cluster)

max-msg-ids

For more information, see the zonecfg(1M) man page.

(cluster)

max-sem-ids

For more information, see the zonecfg(1M) man page.

(cluster)

max-shm-ids

For more information, see the zonecfg(1M) man page.

(cluster)

max-shm-memory

For more information, see the zonecfg(1M) man page.

(cluster)

pool

For more information, see the zonecfg(1M) man page.

(cluster)

zonename

The name of the zone cluster, as well as the name of each zone in the zone cluster.

(cluster)

zonepath

The zonepath of each zone in the zone cluster.

capped-cpu

For more information, see the zonecfg(1M) man page.

capped-memory

For more information, see the zonecfg(1M) man page.

dataset

For more information, see the zonecfg(1M) man page.

dedicated-cpu

For more information, see the zonecfg(1M) man page.

device

For more information, see the zonecfg(1M) man page.

fs

For more information, see the zonecfg(1M) man page.

inherit-pkg-dir

For more information, see the zonecfg(1M) man page.

net

For more information, see the zonecfg(1M) man page.

node

Includes physical-host, hostname, and net.

  • physical-host – This property specifies a global cluster node that will host a zone-cluster node.

  • hostname – This property specifies the public host name of the zone-cluster node on the global cluster node specified by the physical-host property.

  • net – This resource specifies a network address and physical interface name for public network communication by the zone-cluster node on the global cluster node specified by physical-host.

rctl

For more information, see the zonecfg(1M) man page.

sysid

For more information, see the sysidcfg(4) man page. Includes root_password, name_service, security_policy, system_locale, timezone, terminal, and nfs4_domain. The administrator can later manually change any sysidcfg value following the normal Oracle Solaris procedures one node at a time.

  • root_password – This property specifies the encrypted value of the common root password for all nodes of the zone cluster. Do not specify a clear-text password; use the encrypted password string from /etc/shadow. This is a required property.

  • name_service – This property specifies the naming service to be used in the zone cluster. It is an optional property, and the setting in the global zone is used by default. However, the settings in the global zone's /etc/sysidcfg file might be stale. To ensure that this property has the correct setting, enter the value manually by using the clzonecluster command.

  • security_policy – The value is set to none by default.

  • system_locale – The value is obtained from the environment of the clzonecluster command by default.

  • timezone – The time zone to be used in the zone cluster. The value is obtained from the environment of the clzonecluster command by default.

  • terminal – The value is set to xterm by default.

  • nfs4_domain – The value is set to dynamic by default.

Examples

In all the examples, the zoneclustername is sczone. The first global-cluster node is phys-schost-1 and the second node is phys-schost-2. The first zone-cluster node is zc-host-1 and the second one is zc-host-2.

Example 1 Creating a New Zone Cluster

The following example demonstrates how to create a two-node zone cluster that consists of sparse-root zones. The /usr/local directory contains only executable files, is loopback mounted read-only into the zone cluster nodes as /opt/local, and is managed by Oracle Solaris software. Two IP addresses are exported to the zone cluster for use as highly-available IP addresses. A ZFS data set is exported to the zone cluster for use as a highly-available ZFS file system. Memory capping is used to limit the amount of memory that can be used in the zone cluster. The proc_priocntl and proc_clock_highres privileges are added to the zone cluster to enable Oracle RAC to run. Default system identification values are used, except for the root password.

A UFS file system is exported to the zone cluster for use as a highly-available file system. It is assumed that the UFS file system is created on an Oracle Solaris Volume Manager metadevice.

phys-schost-1# clzonecluster configure sczone
sczone: No such zone cluster configured
Use 'create' to begin configuring a new zone cluster.
clzc:sczone> create
clzc:sczone> set zonepath=/zones/sczone
clzc:sczone> set limitpriv="default,proc_priocntl,proc_clock_highres"
clzc:sczone> add sysid
clzc:sczone:sysid> set root_password=xxxxxxxxxxxxx
clzc:sczone:sysid> end
clzc:sczone> add node
clzc:sczone:node> set physical-host=phys-schost-1
clzc:sczone:node> set hostname=zc-host-1
clzc:sczone:node> add net
clzc:sczone:node:net> set address=zc-host-1
clzc:sczone:node:net> set physical=bge0
clzc:sczone:node:net> end
clzc:sczone:node> end
clzc:sczone> add node
clzc:sczone:node> set physical-host=phys-schost-2
clzc:sczone:node> set hostname=zc-host-2
clzc:sczone:node> add net
clzc:sczone:node:net> set address=zc-host-2
clzc:sczone:node:net> set physical=bge0
clzc:sczone:node:net> end
clzc:sczone:node> end
clzc:sczone> add net
clzc:sczone:net> set address=192.168.0.1
clzc:sczone:net> end
clzc:sczone> add net
clzc:sczone:net> set address=192.168.0.2
clzc:sczone:net> end
clzc:sczone> add fs
clzc:sczone:fs> set dir=/opt/local
clzc:sczone:fs> set special=/usr/local
clzc:sczone:fs> set type=lofs
clzc:sczone:fs> add options [ro,nodevices]
clzc:sczone:fs> set cluster-control=false
clzc:sczone:fs> end
clzc:sczone> add dataset
clzc:sczone:dataset> set name=tank/home
clzc:sczone:dataset> end
clzc:sczone> add capped-memory
clzc:sczone:capped-memory> set physical=3G
clzc:sczone:capped-memory> set swap=4G
clzc:sczone:capped-memory> set locked=3G
clzc:sczone:capped-memory> end
clzc:sczone> add fs
clzc:sczone:fs> set dir=/data/ha-data
clzc:sczone:fs> set special=/dev/md/ha-set/dsk/d10
clzc:sczone:fs> set raw=/dev/md/ha-set/rdsk/d10
clzc:sczone:fs> set type=ufs
clzc:sczone:fs> end 
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

The zone cluster is now configured. The following commands install and then boot the zone cluster from a global-cluster node:

phys-schost-1# clzonecluster install sczone
phys-schost-1# clzonecluster boot sczone

Example 2 Modifying an Existing Zone Cluster

The following example shows how to modify the configuration of the zone cluster created in Example 1. A multi-owner Solaris Volume Manager for Oracle Solaris Cluster metadevice is added to the zone cluster. The set number of the metaset is 1, and the set name is oraset. An additional public IP address is added to the zone-cluster node on phys-schost-2. A shared QFS file system is also added to the configuration. Note that the special property of a QFS file system must be set to the name of the MCF file. The raw property must be left unspecified.

A UFS file system is exported to the zone cluster for use as a highly-available local file system. The UFS file system is created on an Oracle Solaris Volume Manager metadevice. A device identity (DID) device that will be used as a shared disk in the zone cluster is added to the zone cluster.

phys-schost-1# clzonecluster configure sczone
clzc:sczone> select node physical-host=phys-schost-2
clzc:sczone:node> add net
clzc:sczone:node:net> set address=192.168.0.3/24
clzc:sczone:node:net> set physical=bge0
clzc:sczone:node:net> end
clzc:sczone:node> end
clzc:sczone> add fs
clzc:sczone:fs> set dir=/data/ha-web
clzc:sczone:fs> set special=/dev/md/ha-web/dsk/d10
clzc:sczone:fs> set raw=/dev/md/ha-web/rdsk/d10
clzc:sczone:fs> set type=ufs
clzc:sczone:fs> end
clzc:sczone> add device
clzc:sczone:device> set match=/dev/did/*dsk/d10s*
clzc:sczone:device> end
clzc:sczone> exit

Example 3 Creating a New Zone Cluster Using an Existing Zone Cluster as a Template

The following example shows how to create a zone cluster called sczone1, using the sczone zone cluster created in Example 1 as a template. The new zone cluster's configuration will be the same as the original zone cluster. Some properties of the new zone cluster need to be modified to avoid conflicts. When the administrator removes a resource type without specifying a specific resource, the system removes all resources of that type. For example, remove net causes the removal of all net resources, while remove net address removes a specific address.

You can specify the file system at the global scope (to be managed by logical hosts or highly available resources), or for just the node (managed by the zone itself). You should change or remove conflicting values for each scope.

phys-schost-1# clzonecluster configure sczone1
sczone1: No such zone cluster configured
Use 'create' to begin configuring a new zone cluster.

clzc:sczone1> create -t sczone
clzc:sczone1> set zonepath=/zones/sczone1
clzc:sczone1> set brand=cluster
clzc:sczone1> select node physical-host=phys-schost-1
clzc:sczone1:node> set hostname=zc-host-3
clzc:sczone1:node> select net address=zc-host-1
clzc:sczone1:node:net> set address=zc-host-3
clzc:sczone1:node:net> end
clzc:sczone1:node> end
clzc:sczone1> select node physical-host=phys-schost-2
clzc:sczone1:node> set hostname=zc-host-4
clzc:sczone1:node> select net address=zc-host-2
clzc:sczone1:node:net> set address=zc-host-4
clzc:sczone1:node:net> end
clzc:sczone1:node> remove net address=192.168.0.3/24
clzc:sczone1:node> remove dataset
clzc:sczone1:node> remove device
clzc:sczone1:node> remove fs
clzc:sczone1:node> end
clzc:sczone1> remove dataset name=tank/home
clzc:sczone1> remove net
clzc:sczone1> remove device
clzc:sczone1> remove fs dir=/qfs/ora_home
clzc:sczone1> exit

Example 4 Using a Configuration File to Create a Zone Cluster

The following example shows the contents of a command file that can be used with the clzonecluster utility to create a zone cluster. The file contains the series of clzonecluster commands that you would input manually.

In the following configuration, the zone cluster sczone is created on the global-cluster node phys-schost-1. The zone cluster uses /zones/sczone as the zone path and the public IP address 12.13.1.1. The first node of the zone cluster is assigned the hostname zc-host-1 and uses the network address 12.13.5.2 and the bge0 adapter. The second node of the zone cluster is created on the global-cluster node phys-schost-2. This second zone-cluster node is assigned the hostname zc-host-2 and uses the network address 12.13.7.5 and the bge1 adapter.

create
set zonepath=/zones/sczone
add net
set address=12.13.1.1
end
add node
set physical-host=phys-schost-1
set hostname=zc-host-1
add net
set address=12.13.5.2
set physical=bge0
end
end
add node
set physical-host=phys-schost-2
set hostname=zc-host-2
add net
set address=12.13.7.5
set physical=bge1
end
end
commit
exit

Example 5 Adding Node Scope Resources to a Zone Cluster

The following example shows how to add node scope resources to the zone cluster created in Example 1. The resources are added to the zone-cluster node on phys-schost-2. A UFS file system, ZFS data set, and disk device are exported to the zone-cluster node.

phys-schost-1# clzonecluster configure sczone
clzc:sczone> select node physical-host=phys-schost-2
clzc:sczone:node> add fs
clzc:sczone:node:fs> set dir=/data/local-data
clzc:sczone:node:fs> set special=/dev/dsk/c3t2d0s0
clzc:sczone:node:fs> set raw=/dev/rdsk/c3t2d0s0
clzc:sczone:node:fs> set type=ufs
clzc:sczone:node:fs> end
clzc:sczone:node> add dataset
clzc:sczone:node:dataset> set name=localpool/home
clzc:sczone:node:dataset> end
clzc:sczone:node> add device
clzc:sczone:node:device> set match=/dev/*dsk/c3t4d2*
clzc:sczone:node:device> end
clzc:sczone:node> end
clzc:sczone> exit

Example 6 Creating a Whole-Root Zone Cluster

The following example shows the creation of a new zone cluster, sczone2, but now the constituent zones will be whole-root zones.

phys-schost-1# clzonecluster configure sczone2
sczone2: No such zone cluster configured
Use 'create' to begin configuring a new zone cluster.
clzc:sczone2> create -b
...
Follow the steps in Example 1 for the rest of the configuration
...
clzc:sczone2> exit

Operands

The following operands are supported:

zoneclustername

The name of the zone cluster. You specify the name of the new zone cluster. The zoneclustername operand is supported for all subcommands.

+

All nodes in the cluster. The + operand is supported only for a subset of subcommands.

Exit Status

The complete set of exit status codes for all commands in this command set is listed on the Intro(1CL) man page.

If the command is successful for all specified operands, it returns zero (CL_NOERR). If an error occurs for an operand, the command processes the next operand in the operand list. The returned exit code always reflects the error that occurred first.

This command returns the following exit status codes:

0 CL_NOERR

No error.

The command that you issued completed successfully.

1 CL_ENOMEM

Not enough swap space.

A cluster node ran out of swap memory or ran out of other operating system resources.

3 CL_EINVAL

Invalid argument.

You typed the command incorrectly, or the syntax of the cluster configuration information that you supplied with the -i option was incorrect.

18 CL_EINTERNAL

An internal error was encountered.

36 CL_ENOENT

No such object.

The object that you specified cannot be found for one of the following reasons:

  • The object does not exist.

  • A directory in the path to the configuration file that you attempted to create with the -o option does not exist.

  • The configuration file that you attempted to access with the -i option contains errors.

37 CL_EOP

Operation not allowed.

You tried to perform an operation on an unsupported configuration, or you performed an unsupported operation.

Attributes

See attributes(5) for descriptions of the following attributes:

ATTRIBUTE TYPE         ATTRIBUTE VALUE
Availability           SUNWsczu
Interface Stability    Evolving

See Also

cluster(1CL), Intro(1CL), scinstall(1M), clnode(1CL), zoneadm(1M), zonecfg(1M)

Notes

The superuser can run all forms of this command.

All users can run this command with the -? (help) or -V (version) option.

To run the clzonecluster command with subcommands, users other than superuser require RBAC authorizations. See the following table.

Subcommand     RBAC Authorization
boot           solaris.cluster.admin
check          solaris.cluster.read
clone          solaris.cluster.admin
configure      solaris.cluster.admin
delete         solaris.cluster.admin
export         solaris.cluster.read
halt           solaris.cluster.admin
install        solaris.cluster.admin
list           solaris.cluster.read
monitor        solaris.cluster.modify
move           solaris.cluster.admin
ready          solaris.cluster.admin
reboot         solaris.cluster.admin
show           solaris.cluster.read
status         solaris.cluster.read
uninstall      solaris.cluster.admin
unmonitor      solaris.cluster.modify
verify         solaris.cluster.admin