This chapter describes the following topics:
This section provides the following procedures to create a non-global zone on a global-cluster node.
Perform this procedure for each non-global zone that you create in the global cluster.
For complete information about installing a zone, refer to System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
You can configure a Solaris 10 non-global zone, simply referred to as a zone, on a cluster node while the node is booted in either cluster mode or noncluster mode.
If you create a zone while the node is booted in noncluster mode, the cluster software discovers the zone when the node joins the cluster.
If you create or remove a zone while the node is in cluster mode, the cluster software dynamically changes its list of zones that can master resource groups.
Perform the following tasks:
Plan your non-global zone configuration. Observe the requirements and restrictions in Guidelines for Non-Global Zones in a Global Cluster.
Have available the following information:
The total number of non-global zones that you will create.
The public adapter and public IP address that each zone will use.
The zone path for each zone. This path must be a local file system, not a cluster file system or a highly available local file system.
One or more devices that should appear in each zone.
(Optional) The name that you will assign each zone.
If you will assign the zone a private IP address, ensure that the cluster IP address range can support the additional private IP addresses that you will configure. Use the cluster show-netprops command to display the current private-network configuration.
If the current IP address range is not sufficient to support the additional private IP addresses that you will configure, follow the procedures in How to Change the Private Network Configuration When Adding Nodes or Private Networks to reconfigure the private IP-address range.
For additional information, see Zone Components in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
Become superuser on the global-cluster node where you are creating the non-voting node.
You must be working in the global zone.
For the Solaris 10 OS, verify on each node that multiuser services for the Service Management Facility (SMF) are online.
If services are not yet online for a node, wait until the state changes to online before you proceed to the next step.
phys-schost# svcs multi-user-server node
STATE          STIME    FMRI
online         17:52:55 svc:/milestone/multi-user-server:default
Configure, install, and boot the new zone.
You must set the autoboot property to true to support resource-group functionality in the non-voting node on the global cluster.
Follow procedures in the Solaris documentation:
Perform procedures in Chapter 18, Planning and Configuring Non-Global Zones (Tasks), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
Perform procedures in Installing and Booting Zones in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
Perform procedures in How to Boot a Zone in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
Verify that the zone is in the ready state.
phys-schost# zoneadm list -v
ID  NAME      STATUS    PATH
 0  global    running   /
 1  my-zone   ready     /zone-path
For a whole-root zone with the ip-type property set to exclusive: If the zone might host a logical-hostname resource, configure a file system resource that mounts the method directory from the global zone.
phys-schost# zonecfg -z sczone
zonecfg:sczone> add fs
zonecfg:sczone:fs> set dir=/usr/cluster/lib/rgm
zonecfg:sczone:fs> set special=/usr/cluster/lib/rgm
zonecfg:sczone:fs> set type=lofs
zonecfg:sczone:fs> end
zonecfg:sczone> exit
(Optional) For a shared-IP zone, assign a private IP address and a private hostname to the zone.
The following command chooses and assigns an available IP address from the cluster's private IP-address range. The command also assigns the specified private hostname, or host alias, to the zone and maps it to the assigned private IP address.
phys-schost# clnode set -p zprivatehostname=hostalias node:zone
-p
  Specifies a property.
zprivatehostname=hostalias
  Specifies the zone private hostname, or host alias.
node
  The name of the node.
zone
  The name of the global-cluster non-voting node.
Perform the initial internal zone configuration.
Follow the procedures in Performing the Initial Internal Zone Configuration in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones. Choose either of the following methods:
Log in to the zone.
Use an /etc/sysidcfg file.
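If you choose the sysidcfg method, you place the file in the zone's /etc directory before the zone's first boot. The following is a minimal sketch only; every value shown (locale, terminal type, hostname, timezone, and password hash) is illustrative and must be replaced with values for your site:

```
system_locale=C
terminal=xterm
network_interface=primary {
    hostname=my-zone
}
name_service=NONE
security_policy=NONE
timezone=US/Pacific
root_password=encrypted_password
```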
In the non-voting node, modify the nsswitch.conf file.
These changes enable the zone to resolve searches for cluster-specific hostnames and IP addresses.
Log in to the zone.
phys-schost# zlogin -C zonename
Open the /etc/nsswitch.conf file for editing.
sczone# vi /etc/nsswitch.conf
Add the cluster switch to the beginning of the lookups for the hosts and netmasks entries, followed by the files switch.
The modified entries should appear similar to the following:
…
hosts:      cluster files nis [NOTFOUND=return]
…
netmasks:   cluster files nis [NOTFOUND=return]
…
For all other entries, ensure that the files switch is the first switch that is listed in the entry.
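The hosts and netmasks edits described above can also be scripted. The following sketch operates on a copy of the file with illustrative pre-edit contents and uses GNU sed syntax; adapt the file path and entries to your zone:

```shell
# Create a sample nsswitch.conf with typical pre-edit entries (contents illustrative)
cat > /tmp/nsswitch.conf <<'EOF'
hosts:      files nis [NOTFOUND=return]
netmasks:   files nis [NOTFOUND=return]
EOF

# Prepend the cluster switch to the hosts and netmasks lookups (GNU sed -i shown;
# Solaris sed lacks -i, so redirect to a temporary file there instead)
sed -i -e 's/^hosts:\([[:space:]]*\)/hosts:\1cluster /' \
       -e 's/^netmasks:\([[:space:]]*\)/netmasks:\1cluster /' /tmp/nsswitch.conf

cat /tmp/nsswitch.conf
```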
Exit the zone.
If you created an exclusive-IP zone, configure IPMP groups in each /etc/hostname.interface file that is in the zone.
You must configure an IPMP group for each public-network adapter that is used for data-service traffic in the zone. This information is not inherited from the global zone. See Public Networks for more information about configuring IPMP groups in a cluster.
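As a sketch, an /etc/hostname.interface file in the zone that places an adapter in an IPMP group might look like the following; the adapter name bge0, the group name sc_ipmp0, and the IP address are all hypothetical:

```
# Contents of /etc/hostname.bge0 inside the exclusive-IP zone (values hypothetical)
192.168.10.11 netmask + broadcast + group sc_ipmp0 up
```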
Set up name-to-address mappings for all logical hostname resources that are used by the zone.
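For example, you can add the mappings to the /etc/inet/hosts file in the zone; the hostname and address below are hypothetical:

```
# Hypothetical logical-hostname entry in /etc/inet/hosts
192.168.10.21   apache-lh
```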
To install an application in a non-global zone, use the same procedure as for a stand-alone system. See your application's installation documentation for procedures to install the software in a non-global zone. Also see Adding and Removing Packages and Patches on a Solaris System With Zones Installed (Task Map) in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
To install and configure a data service in a non-global zone, see the Sun Cluster manual for the individual data service.
Use this procedure to make a cluster file system available for use by a native brand non-global zone that is configured on a cluster node.
Use this procedure with only the native brand of non-global zones. You cannot perform this task with any other brand of non-global zone, such as the solaris8 brand or the cluster brand, which is used for zone clusters.
On one node of the global cluster, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
Create a resource group with a node list of native brand non-global zones.
Use the following command to create a failover resource group:
phys-schost# clresourcegroup create -n node:zone[,…] resource-group
-n node:zone[,…]
  Specifies the names of the non-global zones in the resource-group node list.
resource-group
  The name of the resource group that you create.
Use the following command to create a scalable resource group:
phys-schost# clresourcegroup create -S -n node:zone[,…] resource-group
-S
  Specifies that the resource group is scalable.
Register the HAStoragePlus resource type.
phys-schost# clresourcetype register SUNW.HAStoragePlus
On each global-cluster node where a non-global zone in the node list resides, add the cluster file system entry to the /etc/vfstab file.
Entries in the /etc/vfstab file for a cluster file system must contain the global keyword in the mount options.
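A vfstab entry of this kind might look like the following sketch; the metadevice names and the mount point are hypothetical:

```
#device                  device                   mount                  FS   fsck  mount    mount
#to mount                to fsck                  point                  type pass  at boot  options
#
/dev/md/oraset/dsk/d20   /dev/md/oraset/rdsk/d20  /global/local-fs/data  ufs  2     yes      logging,global
```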
Create the HAStoragePlus resource and define the file-system mount points.
phys-schost# clresource create -g resource-group -t SUNW.HAStoragePlus \
-p FileSystemMountPoints="mount-point-list" hasp-resource
-g resource-group
  Specifies the name of the resource group that the new resource is added to.
-p FileSystemMountPoints="mount-point-list"
  Specifies one or more file-system mount points for the resource.
hasp-resource
  The name of the HAStoragePlus resource that you create.
The resource is created in the enabled state.
Add a resource to resource-group and set a dependency for the resource on hasp-resource.
If you have more than one resource to add to the resource group, use a separate command for each resource.
phys-schost# clresource create -g resource-group -t resource-type \
-p Network_resources_used=hasp-resource resource
-t resource-type
  Specifies the resource type that you create the resource for.
-p Network_resources_used=hasp-resource
  Specifies that the resource has a dependency on the HAStoragePlus resource, hasp-resource.
resource
  The name of the resource that you create.
Bring online and in a managed state the resource group that contains the HAStoragePlus resource.
phys-schost# clresourcegroup online -M resource-group
-M
  Specifies that the resource group is managed.
The following example creates a failover resource group, cfs-rg, to manage an HA-Apache data service. The resource-group node list contains two non-global zones, sczone1 on phys-schost-1 and sczone1 on phys-schost-2. The resource group contains an HAStoragePlus resource, hasp-rs, and a data-service resource, apache-rs. The file-system mount point is /global/local-fs/apache.
phys-schost-1# clresourcegroup create -n phys-schost-1:sczone1,phys-schost-2:sczone1 cfs-rg
phys-schost-1# clresourcetype register SUNW.HAStoragePlus

Add the cluster file system entry to the /etc/vfstab file on phys-schost-1
phys-schost-1# vi /etc/vfstab
#device                  device                   mount                    FS   fsck  mount    mount
#to mount                to fsck                  point                    type pass  at boot  options
#
/dev/md/kappa-1/dsk/d0   /dev/md/kappa-1/rdsk/d0  /global/local-fs/apache  ufs  5     yes      logging,global

Add the cluster file system entry to the /etc/vfstab file on phys-schost-2
phys-schost-2# vi /etc/vfstab
…

phys-schost-1# clresource create -g cfs-rg -t SUNW.HAStoragePlus \
-p FileSystemMountPoints="/global/local-fs/apache" hasp-rs
phys-schost-1# clresource create -g cfs-rg -t SUNW.apache \
-p Network_resources_used=hasp-rs apache-rs
phys-schost-1# clresourcegroup online -M cfs-rg
This section provides procedures to configure a cluster of non-global zones.
The clzonecluster utility creates, modifies, and removes a zone cluster. The clzonecluster utility actively manages a zone cluster. For example, the clzonecluster utility both boots and halts a zone cluster. Progress messages for the clzonecluster utility are output to the console, but are not saved in a log file.
The utility operates in the following levels of scope, similar to the zonecfg utility:
The cluster scope affects the entire zone cluster.
The node scope affects only the one zone-cluster node that is specified.
The resource scope affects either a specific node or the entire zone cluster, depending on which scope you enter the resource scope from. Most resources can only be entered from the node scope. The scope is identified by the following prompts:
clzc:zoneclustername:resource>        cluster-wide setting
clzc:zoneclustername:node:resource>   node-specific setting
You can specify any Solaris zones resource parameter, as well as parameters that are specific to zone clusters, by using the clzonecluster utility. For information about parameters that you can set in a zone cluster, see the clzonecluster(1CL) man page. Additional information about Solaris zones resource parameters is in the zonecfg(1M) man page.
This section describes how to configure a cluster of non-global zones.
Perform this procedure to create a cluster of non-global zones.
Create a global cluster. See Chapter 3, Establishing the Global Cluster.
Read the guidelines and requirements for creating a zone cluster. See Zone Clusters.
Have available the following information:
The unique name to assign to the zone cluster.
The zone path that the nodes of the zone cluster will use. For more information, see the description of the zonepath property in Resource and Property Types in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
The name of each node in the global cluster on which to create a zone-cluster node.
The zone public hostname, or host alias, that you assign to each zone-cluster node.
The public-network IP address that each zone-cluster node uses.
The name of the public-network adapter that each zone-cluster node uses to connect to the public network.
Become superuser on an active member node of a global cluster.
Perform all steps of this procedure from a node of the global cluster.
Ensure that the node of the global cluster is in cluster mode.
If any node is in noncluster mode, changes that you make are propagated when the node returns to cluster mode. Therefore, you can create a zone cluster even if some global-cluster nodes are in noncluster mode. When those nodes return to cluster mode, the system performs zone-cluster creation tasks on those nodes.
phys-schost# clnode status

=== Cluster Nodes ===

--- Node Status ---

Node Name       Status
---------       ------
phys-schost-2   Online
phys-schost-1   Online
By default, sparse root zones are created. To create whole root zones, add the -b option to the create command.
phys-schost-1# clzonecluster configure zoneclustername
clzc:zoneclustername> create

Set the zone path for the entire zone cluster
clzc:zoneclustername> set zonepath=/zones/zoneclustername

Add the first node and specify node-specific settings
clzc:zoneclustername> add node
clzc:zoneclustername:node> set physical-host=baseclusternode1
clzc:zoneclustername:node> set hostname=hostname1
clzc:zoneclustername:node> add net
clzc:zoneclustername:node:net> set address=public_netaddr
clzc:zoneclustername:node:net> set physical=adapter
clzc:zoneclustername:node:net> end
clzc:zoneclustername:node> end

Add authorization for the public-network addresses that the zone cluster is allowed to use
clzc:zoneclustername> add net
clzc:zoneclustername:net> set address=ipaddress1
clzc:zoneclustername:net> end

Set the root password globally for all nodes in the zone cluster
clzc:zoneclustername> add sysid
clzc:zoneclustername:sysid> set root_password=encrypted_password
clzc:zoneclustername:sysid> end

Save the configuration and exit the utility
clzc:zoneclustername> commit
clzc:zoneclustername> exit
(Optional) Add one or more additional nodes to the zone cluster.
phys-schost-1# clzonecluster configure zoneclustername
clzc:zoneclustername> add node
clzc:zoneclustername:node> set physical-host=baseclusternode2
clzc:zoneclustername:node> set hostname=hostname2
clzc:zoneclustername:node> add net
clzc:zoneclustername:node:net> set address=public_netaddr
clzc:zoneclustername:node:net> set physical=adapter
clzc:zoneclustername:node:net> end
clzc:zoneclustername:node> end
clzc:zoneclustername> commit
clzc:zoneclustername> exit
Verify the zone cluster configuration.
The verify subcommand checks for the availability of the specified resources. If the clzonecluster verify command succeeds, there is no output.
phys-schost-1# clzonecluster verify zoneclustername
phys-schost-1# clzonecluster status zoneclustername

=== Zone Clusters ===

--- Zone Cluster Status ---

Name   Node Name   Zone HostName   Status    Zone Status
----   ---------   -------------   ------    -----------
zone   basenode1   zone-1          Offline   Configured
       basenode2   zone-2          Offline   Configured
phys-schost-1# clzonecluster install zoneclustername
Waiting for zone install commands to complete on all the nodes of the zone cluster "zoneclustername"...

Installation of the zone cluster might take several minutes

phys-schost-1# clzonecluster boot zoneclustername
Waiting for zone boot commands to complete on all the nodes of the zone cluster "zoneclustername"...
The following example shows the contents of a command file that can be used with the clzonecluster utility to create a zone cluster. The file contains the series of clzonecluster commands that you would input manually.
In the following configuration, the zone cluster sczone is created on the global-cluster node phys-schost-1. The zone cluster uses /zones/sczone as the zone path and public IP address 172.16.2.2. The first node of the zone cluster is assigned the hostname zc-host-1 and uses the network address 172.16.0.1 and the bge0 adapter. The second node of the zone cluster is created on the global-cluster node phys-schost-2. This second zone-cluster node is assigned the hostname zc-host-2 and uses the network address 172.16.0.2 and the bge1 adapter.
create
set zonepath=/zones/sczone
add net
set address=172.16.2.2
end
add node
set physical-host=phys-schost-1
set hostname=zc-host-1
add net
set address=172.16.0.1
set physical=bge0
end
end
add sysid
set root_password=encrypted_password
end
add node
set physical-host=phys-schost-2
set hostname=zc-host-2
add net
set address=172.16.0.2
set physical=bge1
end
end
commit
exit
The following example shows the commands to create the new zone cluster sczone on the global-cluster node phys-schost-1 by using the configuration file sczone-config. The hostnames of the zone-cluster nodes are zc-host-1 and zc-host-2.
phys-schost-1# clzonecluster configure -f sczone-config sczone
phys-schost-1# clzonecluster verify sczone
phys-schost-1# clzonecluster install sczone
Waiting for zone install commands to complete on all the nodes of the zone cluster "sczone"...
phys-schost-1# clzonecluster boot sczone
Waiting for zone boot commands to complete on all the nodes of the zone cluster "sczone"...
phys-schost-1# clzonecluster status sczone

=== Zone Clusters ===

--- Zone Cluster Status ---

Name     Node Name       Zone HostName   Status    Zone Status
----     ---------       -------------   ------    -----------
sczone   phys-schost-1   zc-host-1       Offline   Running
         phys-schost-2   zc-host-2       Offline   Running
To add the use of a file system to the zone cluster, go to Adding File Systems to a Zone Cluster.
To add the use of global storage devices to the zone cluster, go to Adding Storage Devices to a Zone Cluster.
This section provides procedures to add file systems for use by the zone cluster.
After a file system is added to a zone cluster and brought online, the file system is authorized for use from within that zone cluster. To mount the file system for use, configure the file system by using cluster resources such as SUNW.HAStoragePlus or SUNW.ScalMountPoint.
You cannot use the clzonecluster command to add a local file system, which is mounted on a single global-cluster node, to a zone cluster. Instead, use the zonecfg command as you normally would in a stand-alone system. The local file system would not be under cluster control.
You cannot add a cluster file system to a zone cluster.
The following procedures are in this section:
In addition, to configure a ZFS storage pool to be highly available in a zone cluster, see How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS Highly Available in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
Perform this procedure to add a local file system on the global cluster for use by the zone cluster.
To add a ZFS pool to a zone cluster, instead perform procedures in How to Add a ZFS Storage Pool to a Zone Cluster.
Alternatively, to configure a ZFS storage pool to be highly available in a zone cluster, see How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS Highly Available in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
Become superuser on a node of the global cluster that hosts the zone cluster.
Perform all steps of the procedure from a node of the global cluster.
On the global cluster, create a file system that you want to use in the zone cluster.
Ensure that the file system is created on shared disks.
On each node of the global cluster that hosts a zone-cluster node, add an entry to the /etc/vfstab file for the file system that you want to mount on the zone cluster.
phys-schost# vi /etc/vfstab
Add the file system to the zone-cluster configuration.
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> add fs
clzc:zoneclustername:fs> set dir=mountpoint
clzc:zoneclustername:fs> set special=disk-device-name
clzc:zoneclustername:fs> set raw=raw-disk-device-name
clzc:zoneclustername:fs> set type=FS-type
clzc:zoneclustername:fs> end
clzc:zoneclustername> verify
clzc:zoneclustername> commit
clzc:zoneclustername> exit
dir=mountpoint
  Specifies the file system mount point.
special=disk-device-name
  Specifies the name of the disk device.
raw=raw-disk-device-name
  Specifies the name of the raw disk device.
type=FS-type
  Specifies the type of file system.
Verify the addition of the file system.
phys-schost# clzonecluster show -v zoneclustername
This example adds the local file system /global/oracle/d1 for use by the sczone zone cluster.
phys-schost-1# vi /etc/vfstab
#device                 device                  mount              FS   fsck  mount    mount
#to mount               to fsck                 point              type pass  at boot  options
#
/dev/md/oracle/dsk/d1   /dev/md/oracle/rdsk/d1  /global/oracle/d1  ufs  5     no       logging

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/global/oracle/d1
clzc:sczone:fs> set special=/dev/md/oracle/dsk/d1
clzc:sczone:fs> set raw=/dev/md/oracle/rdsk/d1
clzc:sczone:fs> set type=ufs
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:    fs
    dir:            /global/oracle/d1
    special:        /dev/md/oracle/dsk/d1
    raw:            /dev/md/oracle/rdsk/d1
    type:           ufs
    options:        []
…
Configure the file system to be highly available by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file system on the zone-cluster node that currently hosts the applications that are configured to use the file system. See Enabling Highly Available Local File Systems in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
Perform this procedure to add a ZFS storage pool for use by a zone cluster.
To configure a ZFS storage pool to be highly available in a zone cluster, see How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS Highly Available in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
Become superuser on a node of the global cluster that hosts the zone cluster.
Perform all steps of this procedure from a node of the global zone.
Create the ZFS storage pool on the global cluster.
Ensure that the pool is created on shared disks that are connected to all nodes of the zone cluster.
See Solaris ZFS Administration Guide for procedures to create a ZFS pool.
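For example, a mirrored pool on two shared disks could be created as follows; the pool name zpool1 is the one used later in this procedure, but the disk names are hypothetical:

```
phys-schost# zpool create zpool1 mirror c1t1d0 c2t1d0
```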
Add the pool to the zone-cluster configuration.
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> add dataset
clzc:zoneclustername:dataset> set name=ZFSpoolname
clzc:zoneclustername:dataset> end
clzc:zoneclustername> verify
clzc:zoneclustername> commit
clzc:zoneclustername> exit
Verify the addition of the file system.
phys-schost# clzonecluster show -v zoneclustername
The following example shows the ZFS storage pool zpool1 added to the zone cluster sczone.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> add dataset
clzc:sczone:dataset> set name=zpool1
clzc:sczone:dataset> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:    dataset
    name:           zpool1
…
Configure the ZFS storage pool to be highly available by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file systems that are in the pool on the zone-cluster node that currently hosts the applications that are configured to use the file system. See Enabling Highly Available Local File Systems in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
Perform this procedure to add a Sun QFS shared file system for use by a zone cluster.
At this time, QFS shared file systems are only supported for use in clusters that are configured with Oracle Real Application Clusters (RAC). On clusters that are not configured with Oracle RAC, you can use a single-machine QFS file system that is configured as a highly available local file system.
Become superuser on a voting node of the global cluster that hosts the zone cluster.
Perform all steps of this procedure from a voting node of the global cluster.
On the global cluster, configure the QFS shared file system that you want to use in the zone cluster.
Follow procedures for shared file systems in Configuring Sun QFS File Systems With Sun Cluster.
On each node of the global cluster that hosts a zone-cluster node, add an entry to the /etc/vfstab file for the file system that you want to mount on the zone cluster.
phys-schost# vi /etc/vfstab
Add the file system to the zone cluster configuration.
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> add fs
clzc:zoneclustername:fs> set dir=mountpoint
clzc:zoneclustername:fs> set special=QFSfilesystemname
clzc:zoneclustername:fs> set type=samfs
clzc:zoneclustername:fs> end
clzc:zoneclustername> verify
clzc:zoneclustername> commit
clzc:zoneclustername> exit
Verify the addition of the file system.
phys-schost# clzonecluster show -v zoneclustername
The following example shows the QFS shared file system Data-cz1 added to the zone cluster sczone. From the global cluster, the mount point of the file system is /zones/sczone/root/db_qfs/Data1, where /zones/sczone/root/ is the zone's root path. From the zone-cluster node, the mount point of the file system is /db_qfs/Data1.
phys-schost-1# vi /etc/vfstab
#device     device    mount                            FS     fsck  mount    mount
#to mount   to fsck   point                            type   pass  at boot  options
#
Data-cz1    -         /zones/sczone/root/db_qfs/Data1  samfs  -     no       shared,notrace

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/db_qfs/Data1
clzc:sczone:fs> set special=Data-cz1
clzc:sczone:fs> set type=samfs
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:    fs
    dir:            /db_qfs/Data1
    special:        Data-cz1
    raw:
    type:           samfs
    options:        []
…
This section describes how to add the direct use of global storage devices by a zone cluster. Global devices are devices that can be accessed by more than one node in the cluster, either one node at a time or multiple nodes concurrently.
After a device is added to a zone cluster, the device is visible only from within that zone cluster.
This section contains the following procedures:
How to Add an Individual Metadevice to a Zone Cluster (Solaris Volume Manager)
How to Add a Disk Set to a Zone Cluster (Solaris Volume Manager)
Perform this procedure to add an individual metadevice of a Solaris Volume Manager disk set to a zone cluster.
Become superuser on a node of the global cluster that hosts the zone cluster.
You perform all steps of this procedure from a node of the global cluster.
Identify the disk set that contains the metadevice to add to the zone cluster and determine whether it is online.
phys-schost# cldevicegroup status
If the disk set that you are adding is not online, bring it online.
phys-schost# cldevicegroup online diskset
Determine the set number that corresponds to the disk set to add.
phys-schost# ls -l /dev/md/diskset
lrwxrwxrwx  1 root  root  8 Jul 22 23:11 /dev/md/diskset -> shared/setnumber
Add the metadevice for use by the zone cluster.
You must use a separate add device session for each set match= entry.
An asterisk (*) is used as a wildcard character in the path name.
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> add device
clzc:zoneclustername:device> set match=/dev/md/diskset/*dsk/metadevice
clzc:zoneclustername:device> end
clzc:zoneclustername> add device
clzc:zoneclustername:device> set match=/dev/md/shared/setnumber/*dsk/metadevice
clzc:zoneclustername:device> end
clzc:zoneclustername> verify
clzc:zoneclustername> commit
clzc:zoneclustername> exit
match=/dev/md/diskset/*dsk/metadevice
  Specifies the full logical device path of the metadevice.
match=/dev/md/shared/setnumber/*dsk/metadevice
  Specifies the full physical device path of the disk set number.
Reboot the zone cluster.
The change becomes effective after the zone cluster reboots.
phys-schost# clzonecluster reboot zoneclustername
The following example adds the metadevice d1 in the disk set oraset to the sczone zone cluster. The set number of the disk set is 3.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> add device
clzc:sczone:device> set match=/dev/md/oraset/*dsk/d1
clzc:sczone:device> end
clzc:sczone> add device
clzc:sczone:device> set match=/dev/md/shared/3/*dsk/d1
clzc:sczone:device> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster reboot sczone
Perform this procedure to add an entire Solaris Volume Manager disk set to a zone cluster.
Become superuser on a node of the global cluster that hosts the zone cluster.
You perform all steps of this procedure from a node of the global cluster.
Identify the disk set to add to the zone cluster and determine whether it is online.
phys-schost# cldevicegroup status
If the disk set that you are adding is not online, bring it online.
phys-schost# cldevicegroup online diskset
Determine the set number that corresponds to the disk set to add.
phys-schost# ls -l /dev/md/diskset
lrwxrwxrwx  1 root  root  8 Jul 22 23:11 /dev/md/diskset -> shared/setnumber
Add the disk set for use by the zone cluster.
You must use a separate add device session for each set match= entry.
An asterisk (*) is used as a wildcard character in the path name.
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> add device
clzc:zoneclustername:device> set match=/dev/md/diskset/*dsk/*
clzc:zoneclustername:device> end
clzc:zoneclustername> add device
clzc:zoneclustername:device> set match=/dev/md/shared/setnumber/*dsk/*
clzc:zoneclustername:device> end
clzc:zoneclustername> verify
clzc:zoneclustername> commit
clzc:zoneclustername> exit
match=/dev/md/diskset/*dsk/*
  Specifies the full logical device path of the disk set.
match=/dev/md/shared/setnumber/*dsk/*
  Specifies the full physical device path of the disk set number.
Reboot the zone cluster.
The change becomes effective after the zone cluster reboots.
phys-schost# clzonecluster reboot zoneclustername
The following example adds the disk set oraset to the sczone zone cluster. The set number of the disk set is 3.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> add device
clzc:sczone:device> set match=/dev/md/oraset/*dsk/*
clzc:sczone:device> end
clzc:sczone> add device
clzc:sczone:device> set match=/dev/md/shared/3/*dsk/*
clzc:sczone:device> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster reboot sczone
Perform this procedure to add a DID device to a zone cluster.
Become superuser on a node of the global cluster that hosts the zone cluster.
You perform all steps of this procedure from a node of the global cluster.
Identify the DID device to add to the zone cluster.
The device you add must be connected to all nodes of the zone cluster.
phys-schost# cldevice list -v
Add the DID device for use by the zone cluster.
An asterisk (*) is used as a wildcard character in the path name.
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> add device
clzc:zoneclustername:device> set match=/dev/did/*dsk/dNs*
clzc:zoneclustername:device> end
clzc:zoneclustername> verify
clzc:zoneclustername> commit
clzc:zoneclustername> exit
match=/dev/did/*dsk/dNs*
  Specifies the full device path of the DID device.
Reboot the zone cluster.
The change becomes effective after the zone cluster reboots.
phys-schost# clzonecluster reboot zoneclustername
The following example adds the DID device d10 to the sczone zone cluster.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> add device
clzc:sczone:device> set match=/dev/did/*dsk/d10s*
clzc:sczone:device> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster reboot sczone
Use the zonecfg command to export raw-disk devices (cNtXdYsZ) to a zone-cluster node, as you normally would for other brands of non-global zones.
Such devices would not be under the control of the clzonecluster command, but would be treated as local devices of the node. See How to Import Raw and Block Devices by Using zonecfg in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones for more information about exporting raw-disk devices to a non-global zone.