7. Creating Non-Global Zones and Zone Clusters
Configuring a Non-Global Zone on a Global-Cluster Node
How to Create a Non-Global Zone on a Global-Cluster Node
Overview of the clzonecluster Utility
How to Prepare for Trusted Extensions Use With Zone Clusters
Adding File Systems to a Zone Cluster
How to Add a Local File System to a Zone Cluster
How to Add a ZFS Storage Pool to a Zone Cluster
How to Add a QFS Shared File System to a Zone Cluster
How to Add a Cluster File System to a Zone Cluster
Adding Storage Devices to a Zone Cluster
How to Add an Individual Metadevice to a Zone Cluster (Solaris Volume Manager)
How to Add a Disk Set to a Zone Cluster (Solaris Volume Manager)
How to Add a DID Device to a Zone Cluster
This section provides procedures to configure a cluster of Solaris Containers non-global zones, called a zone cluster.
The clzonecluster utility creates, modifies, and removes a zone cluster. The clzonecluster utility actively manages a zone cluster. For example, the clzonecluster utility both boots and halts a zone cluster. Progress messages for the clzonecluster utility are output to the console, but are not saved in a log file.
The utility operates in the following levels of scope, similar to the zonecfg utility:
The cluster scope affects the entire zone cluster.
The node scope affects only the one zone-cluster node that is specified.
The resource scope affects either a specific node or the entire zone cluster, depending on which scope you enter the resource scope from. Most resources can only be entered from the node scope. The scope is identified by the following prompts:
clzc:zoneclustername:resource>         cluster-wide setting
clzc:zoneclustername:node:resource>    node-specific setting
You can specify any Solaris zones resource parameter, as well as parameters that are specific to zone clusters, by using the clzonecluster utility. For information about parameters that you can set in a zone cluster, see the clzonecluster(1CL) man page. Additional information about Solaris zones resource parameters is in the zonecfg(1M) man page.
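The following sketch, using a hypothetical zone cluster named zc-example and hypothetical values, shows how the prompt identifies the scope as you move from the cluster scope into a node scope and then into a resource scope within that node:

phys-schost# clzonecluster configure zc-example

Cluster scope
clzc:zc-example> add node

Node scope for the node that is being added
clzc:zc-example:node> set physical-host=phys-schost-1

Resource scope, entered from the node scope
clzc:zc-example:node> add net
clzc:zc-example:node:net> set address=192.168.10.11
clzc:zc-example:node:net> end
clzc:zc-example:node> end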
This section describes how to configure a cluster of non-global zones.
This procedure prepares the global cluster to use the Trusted Extensions feature of Oracle Solaris with zone clusters and enables the Trusted Extensions feature.
If you do not plan to enable Trusted Extensions, proceed to How to Create a Zone Cluster.
Perform this procedure on each node in the global cluster.
Before You Begin
Perform the following tasks:
Ensure that the Solaris OS is installed to support Oracle Solaris Cluster and Trusted Extensions software.
If Solaris software is already installed on the node, you must ensure that the Solaris installation meets the requirements for Oracle Solaris Cluster software and any other software that you intend to install on the cluster. Trusted Extensions software is not included in the Solaris End User software group.
See How to Install Solaris Software for more information about installing Solaris software to meet Oracle Solaris Cluster software requirements.
Ensure that an LDAP naming service is configured for use by Trusted Extensions. See Chapter 5, Configuring LDAP for Trusted Extensions (Tasks), in Oracle Solaris Trusted Extensions Configuration Guide.
Review guidelines for Trusted Extensions in a zone cluster. See Guidelines for Trusted Extensions in a Zone Cluster.
The Trusted Extensions zoneshare and zoneunshare scripts support the ability to export home directories on the system. An Oracle Solaris Cluster configuration does not support this feature.
Disable this feature by replacing each script with a symbolic link to the /bin/true utility. Do this on each global-cluster node.
phys-schost# ln -fs /bin/true /usr/lib/zones/zoneshare
phys-schost# ln -fs /bin/true /usr/lib/zones/zoneunshare
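As a quick optional check, list both files on each node; each should now appear as a symbolic link that points to /bin/true:

phys-schost# ls -l /usr/lib/zones/zoneshare /usr/lib/zones/zoneunshare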
See Run the txzonemgr Script in Oracle Solaris Trusted Extensions Configuration Guide.
ipaddress:admin_low
Delete the -failover option from any entry that contains that option.
Use the Security Templates wizard in Solaris Management Console as described in How to Construct a Remote Host Template in Oracle Solaris Trusted Extensions Administrator’s Procedures to perform the following tasks.
Create a new entry for IP addresses used by cluster components and assign each entry a CIPSO template.
Add entries for each of the following IP addresses that exist in the global-cluster node's /etc/inet/hosts file:
Each global-cluster node private IP address
All cl_privnet IP addresses in the global cluster
Each logical-hostname public IP address for the global cluster
Each shared-address public IP address for the global cluster
Entries would look similar to the following.
127.0.0.1:cipso
172.16.4.1:cipso
172.16.4.2:cipso
…
Add an entry to make the default template internal.
0.0.0.0:internal
For more information about CIPSO templates, see Configure the Domain of Interpretation in Oracle Solaris Trusted Extensions Configuration Guide.
phys-schost# svcadm enable -s svc:/system/labeld:default
phys-schost# shutdown -g0 -y -i6
For more information, see Enable Trusted Extensions in Oracle Solaris Trusted Extensions Configuration Guide.
phys-schost# svcs labeld
STATE          STIME    FMRI
online         17:52:55 svc:/system/labeld:default
When the SMF service is enabled on all global-cluster nodes, perform the remaining steps of this procedure on each node of the global cluster.
The LDAP server is used by the global zone and by the nodes of the zone cluster.
phys-schost# svcadm enable rlogin
Modify the account management entries by appending a Tab and typing allow_remote or allow_unlabeled respectively, as shown below.
other   account requisite       pam_roles.so.1         Tab allow_remote
other   account required        pam_unix_account.so.1  Tab allow_unlabeled
Ensure that the passwd and group lookup entries have files first in the lookup order.
…
passwd: files ldap
group:  files ldap
…
Ensure that the hosts and netmasks lookup entries have cluster listed first in the lookup order.
…
hosts:    cluster files ldap
…
netmasks: cluster files ldap
…
Use the Add User wizard in Solaris Management Console as described in Creating Roles and Users in Trusted Extensions in Solaris Trusted Extensions Installation and Configuration for Solaris 10 11/06 and Solaris 10 8/07 Releases.
Next Steps
Create the zone cluster. Go to How to Create a Zone Cluster.
Perform this procedure to create a cluster of non-global zones.
Before You Begin
Create a global cluster. See Chapter 3, Establishing the Global Cluster.
Read the guidelines and requirements for creating a zone cluster. See Zone Clusters.
If the zone cluster will use Trusted Extensions, ensure that you have configured and enabled Trusted Extensions as described in How to Prepare for Trusted Extensions Use With Zone Clusters.
Have available the following information:
The unique name to assign to the zone cluster.
Note - To configure a zone cluster when Trusted Extensions is enabled, you must use the name of the Trusted Extensions security label that the zone cluster will use as the name of the zone cluster itself. Create a separate zone cluster for each Trusted Extensions security label that you want to use.
The zone path that the nodes of the zone cluster will use. For more information, see the description of the zonepath property in Resource and Property Types in System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones.
The name of each node in the global cluster on which to create a zone-cluster node.
The zone public hostname, or host alias, that you assign to each zone-cluster node.
The public-network IP address that each zone-cluster node uses.
The name of the public-network adapter that each zone-cluster node uses to connect to the public network.
Note - Perform all steps of this procedure from a node of the global cluster.
If any node is in noncluster mode, changes that you make are propagated when the node returns to cluster mode. Therefore, you can create a zone cluster even if some global-cluster nodes are in noncluster mode. When those nodes return to cluster mode, the system performs zone-cluster creation tasks on those nodes.
phys-schost# clnode status
=== Cluster Nodes ===

--- Node Status ---

Node Name                          Status
---------                          ------
phys-schost-2                      Online
phys-schost-1                      Online
Observe the following special instructions:
If Trusted Extensions is enabled, zoneclustername must be the same name as a Trusted Extensions security label that has the security levels that you want to assign to the zone cluster. These security labels are configured in the /etc/security/tsol/tnrhtp files on the global cluster.
By default, sparse root zones are created. To create whole root zones, add the -b option to the create command.
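For example, to create whole root zones, the create step in the session below would instead read as follows (a sketch only):

clzc:zoneclustername> create -b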
phys-schost-1# clzonecluster configure zoneclustername
clzc:zoneclustername> create

Set the zone path for the entire zone cluster
clzc:zoneclustername> set zonepath=/zones/zoneclustername

Add the first node and specify node-specific settings
clzc:zoneclustername> add node
clzc:zoneclustername:node> set physical-host=baseclusternode1
clzc:zoneclustername:node> set hostname=hostname1
clzc:zoneclustername:node> add net
clzc:zoneclustername:node:net> set address=public_netaddr
clzc:zoneclustername:node:net> set physical=adapter
clzc:zoneclustername:node:net> end
clzc:zoneclustername:node> end

Add authorization for the public-network addresses that the zone cluster is allowed to use
clzc:zoneclustername> add net
clzc:zoneclustername:net> set address=ipaddress1
clzc:zoneclustername:net> end

Set the root password globally for all nodes in the zone cluster
clzc:zoneclustername> add sysid
clzc:zoneclustername:sysid> set root_password=encrypted_password
clzc:zoneclustername:sysid> end

Save the configuration and exit the utility
clzc:zoneclustername> commit
clzc:zoneclustername> exit
phys-schost-1# clzonecluster configure zoneclustername
clzc:zoneclustername> add fs
clzc:zoneclustername:fs> set dir=/var/tsol/doors
clzc:zoneclustername:fs> set special=/var/tsol/doors
clzc:zoneclustername:fs> set type=lofs
clzc:zoneclustername:fs> add options ro
clzc:zoneclustername:fs> end
clzc:zoneclustername> commit
clzc:zoneclustername> exit
phys-schost-1# clzonecluster configure zoneclustername
clzc:zoneclustername> add node
clzc:zoneclustername:node> set physical-host=baseclusternode2
clzc:zoneclustername:node> set hostname=hostname2
clzc:zoneclustername:node> add net
clzc:zoneclustername:node:net> set address=public_netaddr
clzc:zoneclustername:node:net> set physical=adapter
clzc:zoneclustername:node:net> end
clzc:zoneclustername:node> end
clzc:zoneclustername> commit
clzc:zoneclustername> exit
phys-schost-1# clzonecluster configure zoneclustername
clzc:zoneclustername> add sysid
clzc:zoneclustername:sysid> set name_service=LDAP
clzc:zoneclustername:sysid> set domain_name=domainorg.domainsuffix
clzc:zoneclustername:sysid> set proxy_dn="cn=proxyagent,ou=profile,dc=domainorg,dc=domainsuffix"
clzc:zoneclustername:sysid> set proxy_password="proxypassword"
clzc:zoneclustername:sysid> set profile=ldap-server
clzc:zoneclustername:sysid> set profile_server=txldapserver_ipaddress
clzc:zoneclustername:sysid> end
clzc:zoneclustername> commit
clzc:zoneclustername> exit
The verify subcommand checks for the availability of the specified resources. If the clzonecluster verify command succeeds, there is no output.
phys-schost-1# clzonecluster verify zoneclustername
phys-schost-1# clzonecluster status zoneclustername

=== Zone Clusters ===

--- Zone Cluster Status ---

Name      Node Name    Zone HostName    Status     Zone Status
----      ---------    -------------    ------     -----------
zone      basenode1    zone-1           Offline    Configured
          basenode2    zone-2           Offline    Configured
phys-schost-1# clzonecluster install zoneclustername
Waiting for zone install commands to complete on all the nodes of the zone cluster "zoneclustername"...

Installation of the zone cluster might take several minutes.

phys-schost-1# clzonecluster boot zoneclustername
Waiting for zone boot commands to complete on all the nodes of the zone cluster "zoneclustername"...
Perform this step on each node of the zone cluster.
phys-schost# cat /etc/cluster/nodeid
N
Ensure that the SMF service has been imported and all services are up before you log in.
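One way to make this check, sketched here with zcnode standing in for a zone-cluster node, is to log in to the zone and run svcs -x, which lists only the services that are not running correctly; empty output indicates that all services are healthy:

phys-schost# zlogin zcnode
zcnode# svcs -x
zcnode# exit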
The cluster software automatically assigns these IP addresses when it configures a zone cluster.
In the ifconfig -a output, locate the clprivnet0 logical interface that belongs to the zone cluster. The value for inet is the IP address that was assigned to support the use of the cluster private interconnect by this zone cluster.
zc1# ifconfig -a
lo0:3: flags=20010008c9<UP,LOOPBACK,RUNNING,NOARP,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone zc1
        inet 127.0.0.1 netmask ff000000
bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 10.11.166.105 netmask ffffff00 broadcast 10.11.166.255
        groupname sc_ipmp0
        ether 0:3:ba:19:fa:b7
ce0: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 4
        inet 10.11.166.109 netmask ffffff00 broadcast 10.11.166.255
        groupname sc_ipmp0
        ether 0:14:4f:24:74:d8
ce0:3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        zone zc1
        inet 10.11.166.160 netmask ffffff00 broadcast 10.11.166.255
clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 7
        inet 172.16.0.18 netmask fffffff8 broadcast 172.16.0.23
        ether 0:0:0:0:0:2
clprivnet0:3: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 7
        zone zc1
        inet 172.16.0.22 netmask fffffffc broadcast 172.16.0.23
The hostname for the private interconnect, which is clusternodeN-priv, where N is the global-cluster node ID
172.16.0.22 clusternodeN-priv
Each net resource that was specified to the clzonecluster command when you created the zone cluster
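A sketch of the resulting additions to a zone-cluster node's /etc/inet/hosts file, where the node ID of 2 and the net resource address and hostname on the second line are hypothetical values:

172.16.0.22     clusternode2-priv
192.168.10.50   zc-apache-lh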
Use the Security Templates wizard in Solaris Management Console as described in How to Construct a Remote Host Template in Oracle Solaris Trusted Extensions Administrator’s Procedures to perform the following tasks.
Create a new entry for IP addresses used by zone-cluster components and assign each entry a CIPSO template.
Add entries for each of the following IP addresses that exist in the zone-cluster node's /etc/inet/hosts file:
Each zone-cluster node private IP address
All cl_privnet IP addresses in the zone cluster
Each logical-hostname public IP address for the zone cluster
Each shared-address public IP address for the zone cluster
Entries would look similar to the following.
127.0.0.1:cipso
172.16.4.1:cipso
172.16.4.2:cipso
…
Add an entry to make the default template internal.
0.0.0.0:internal
For more information about CIPSO templates, see Configure the Domain of Interpretation in Oracle Solaris Trusted Extensions Configuration Guide.
phys-schost# shutdown -g0 -y -i6
Perform the following commands on each node of the zone cluster.
phys-schost# zlogin zcnode
zcnode# svcadm enable svc:/network/dns/client:default
zcnode# svcadm enable svc:/network/login:rlogin
zcnode# reboot
Example 7-2 Configuration File to Create a Zone Cluster
The following example shows the contents of a command file that can be used with the clzonecluster utility to create a zone cluster. The file contains the series of clzonecluster commands that you would input manually.
In the following configuration, the zone cluster sczone is created on the global-cluster node phys-schost-1. The zone cluster uses /zones/sczone as the zone path and public IP address 172.16.2.2. The first node of the zone cluster is assigned the hostname zc-host-1 and uses the network address 172.16.0.1 and the bge0 adapter. The second node of the zone cluster is created on the global-cluster node phys-schost-2. This second zone-cluster node is assigned the hostname zc-host-2 and uses the network address 172.16.0.2 and the bge1 adapter.
create
set zonepath=/zones/sczone
add net
set address=172.16.2.2
end
add node
set physical-host=phys-schost-1
set hostname=zc-host-1
add net
set address=172.16.0.1
set physical=bge0
end
end
add sysid
set root_password=encrypted_password
end
add node
set physical-host=phys-schost-2
set hostname=zc-host-2
add net
set address=172.16.0.2
set physical=bge1
end
end
commit
exit
Example 7-3 Creating a Zone Cluster by Using a Configuration File
The following example shows the commands to create the new zone cluster sczone on the global-cluster node phys-schost-1 by using the configuration file sczone-config. The hostnames of the zone-cluster nodes are zc-host-1 and zc-host-2.
phys-schost-1# clzonecluster configure -f sczone-config sczone
phys-schost-1# clzonecluster verify sczone
phys-schost-1# clzonecluster install sczone
Waiting for zone install commands to complete on all the nodes of the zone cluster "sczone"...
phys-schost-1# clzonecluster boot sczone
Waiting for zone boot commands to complete on all the nodes of the zone cluster "sczone"...
phys-schost-1# clzonecluster status sczone

=== Zone Clusters ===

--- Zone Cluster Status ---

Name      Node Name        Zone HostName    Status     Zone Status
----      ---------        -------------    ------     -----------
sczone    phys-schost-1    zc-host-1        Offline    Running
          phys-schost-2    zc-host-2        Offline    Running
Next Steps
To add the use of a file system to the zone cluster, go to Adding File Systems to a Zone Cluster.
To add the use of global storage devices to the zone cluster, go to Adding Storage Devices to a Zone Cluster.
This section provides procedures to add file systems for use by the zone cluster.
After a file system is added to a zone cluster and brought online, the file system is authorized for use from within that zone cluster. To mount the file system for use, configure the file system by using cluster resources such as SUNW.HAStoragePlus or SUNW.ScalMountPoint.
Note - You cannot use the clzonecluster command to add a local file system, which is mounted on a single global-cluster node, to a zone cluster. Instead, use the zonecfg command as you normally would in a stand-alone system. The local file system would not be under cluster control.
The following procedures are in this section:

How to Add a Local File System to a Zone Cluster
How to Add a ZFS Storage Pool to a Zone Cluster
How to Add a QFS Shared File System to a Zone Cluster
How to Add a Cluster File System to a Zone Cluster
In addition, to configure a ZFS storage pool to be highly available in a zone cluster, see How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS Highly Available in Oracle Solaris Cluster Data Services Planning and Administration Guide.
Perform this procedure to add a local file system on the global cluster for use by the zone cluster.
Note - To add a ZFS pool to a zone cluster, instead perform procedures in How to Add a ZFS Storage Pool to a Zone Cluster.
Alternatively, to configure a ZFS storage pool to be highly available in a zone cluster, see How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS Highly Available in Oracle Solaris Cluster Data Services Planning and Administration Guide.
Note - Perform all steps of the procedure from a node of the global cluster.
Ensure that the file system is created on shared disks.
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> add fs
clzc:zoneclustername:fs> set dir=mountpoint
clzc:zoneclustername:fs> set special=disk-device-name
clzc:zoneclustername:fs> set raw=raw-disk-device-name
clzc:zoneclustername:fs> set type=FS-type
clzc:zoneclustername:fs> end
clzc:zoneclustername> verify
clzc:zoneclustername> commit
clzc:zoneclustername> exit
dir=mountpoint
Specifies the file system mount point
special=disk-device-name
Specifies the name of the disk device
raw=raw-disk-device-name
Specifies the name of the raw disk device
type=FS-type
Specifies the type of file system
Note - Enable logging for UFS and VxFS file systems.
phys-schost# clzonecluster show -v zoneclustername
Example 7-4 Adding a Local File System to a Zone Cluster
This example adds the local file system /global/oracle/d1 for use by the sczone zone cluster.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/global/oracle/d1
clzc:sczone:fs> set special=/dev/md/oracle/dsk/d1
clzc:sczone:fs> set raw=/dev/md/oracle/rdsk/d1
clzc:sczone:fs> set type=ufs
clzc:sczone:fs> add options [logging]
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:      fs
    dir:              /global/oracle/d1
    special:          /dev/md/oracle/dsk/d1
    raw:              /dev/md/oracle/rdsk/d1
    type:             ufs
    options:          [logging]
    cluster-control:  [true]
…
Next Steps
Configure the file system to be highly available by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file system on the zone-cluster node that currently hosts the applications that are configured to use the file system. See Enabling Highly Available Local File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.
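The following sketch shows what that configuration might look like for the file system in Example 7-4; the resource group name oracle-rg and resource name hasp-oracle-rs are hypothetical, and the exact steps and options are documented in the referenced guide:

phys-schost# clresourcetype register -Z sczone SUNW.HAStoragePlus
phys-schost# clresourcegroup create -Z sczone oracle-rg
phys-schost# clresource create -Z sczone -g oracle-rg -t SUNW.HAStoragePlus \
-p FileSystemMountPoints=/global/oracle/d1 hasp-oracle-rs
phys-schost# clresourcegroup online -Z sczone -eM oracle-rg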
Perform this procedure to add a ZFS storage pool for use by a zone cluster.
Note - To configure a ZFS storage pool to be highly available in a zone cluster, see How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS Highly Available in Oracle Solaris Cluster Data Services Planning and Administration Guide.
Note - Perform all steps of this procedure from a node of the global cluster.
Note - Ensure that the pool is created on shared disks that are connected to all nodes of the zone cluster.
See Oracle Solaris ZFS Administration Guide for procedures to create a ZFS pool.
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> add dataset
clzc:zoneclustername:dataset> set name=ZFSpoolname
clzc:zoneclustername:dataset> end
clzc:zoneclustername> verify
clzc:zoneclustername> commit
clzc:zoneclustername> exit
phys-schost# clzonecluster show -v zoneclustername
Example 7-5 Adding a ZFS Storage Pool to a Zone Cluster
The following example shows the ZFS storage pool zpool1 added to the zone cluster sczone.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> add dataset
clzc:sczone:dataset> set name=zpool1
clzc:sczone:dataset> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:      dataset
    name:             zpool1
…
Next Steps
Configure the ZFS storage pool to be highly available by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file systems that are in the pool on the zone-cluster node that currently hosts the applications that are configured to use those file systems. See Enabling Highly Available Local File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.
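The following sketch shows what that configuration might look like for the pool in Example 7-5, using the Zpools extension property of SUNW.HAStoragePlus; the resource group name zpool-rg and resource name hasp-zpool-rs are hypothetical, and the exact steps are documented in the referenced guide:

phys-schost# clresourcetype register -Z sczone SUNW.HAStoragePlus
phys-schost# clresourcegroup create -Z sczone zpool-rg
phys-schost# clresource create -Z sczone -g zpool-rg -t SUNW.HAStoragePlus \
-p Zpools=zpool1 hasp-zpool-rs
phys-schost# clresourcegroup online -Z sczone -eM zpool-rg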
Perform this procedure to add a Sun QFS shared file system for use by a zone cluster.
Note - At this time, QFS shared file systems are only supported for use in clusters that are configured with Oracle Real Application Clusters (RAC). On clusters that are not configured with Oracle RAC, you can use a single-machine QFS file system that is configured as a highly available local file system.
Note - Perform all steps of this procedure from a voting node of the global cluster.
Follow procedures for shared file systems in Configuring Sun QFS File Systems With Sun Cluster.
phys-schost# vi /etc/vfstab
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> add fs
clzc:zoneclustername:fs> set dir=mountpoint
clzc:zoneclustername:fs> set special=QFSfilesystemname
clzc:zoneclustername:fs> set type=samfs
clzc:zoneclustername:fs> end
clzc:zoneclustername> verify
clzc:zoneclustername> commit
clzc:zoneclustername> exit
Go to Step 7.
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> add fs
clzc:zoneclustername:fs> set dir=lofs-mountpoint
clzc:zoneclustername:fs> set special=QFS-mountpoint
clzc:zoneclustername:fs> set type=lofs
clzc:zoneclustername:fs> end
clzc:zoneclustername> verify
clzc:zoneclustername> commit
clzc:zoneclustername> exit
phys-schost# clzonecluster show -v zoneclustername
Example 7-6 Adding a QFS Shared File System as a Direct Mount to a Zone Cluster
The following example shows the QFS shared file system Data-cz1 added to the zone cluster sczone. From the global cluster, the mount point of the file system is /zones/sczone/root/db_qfs/Data1, where /zones/sczone/root/ is the zone's root path. From the zone-cluster node, the mount point of the file system is /db_qfs/Data1.
phys-schost-1# vi /etc/vfstab
#device     device    mount                              FS      fsck    mount      mount
#to mount   to fsck   point                              type    pass    at boot    options
#
Data-cz1    -         /zones/sczone/root/db_qfs/Data1    samfs   -       no         shared,notrace

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/db_qfs/Data1
clzc:sczone:fs> set special=Data-cz1
clzc:sczone:fs> set type=samfs
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:      fs
    dir:              /db_qfs/Data1
    special:          Data-cz1
    raw:
    type:             samfs
    options:          []
…
Example 7-7 Adding a QFS Shared File System as a Loopback File System to a Zone Cluster
The following example shows the QFS shared file system with mount point /db_qfs/Data1 added to the zone cluster sczone. The file system is available to the zone cluster through the loopback mount mechanism at the mount point /db_qfs/Data-cz1.
phys-schost-1# vi /etc/vfstab
#device     device    mount            FS      fsck    mount      mount
#to mount   to fsck   point            type    pass    at boot    options
#
Data-cz1    -         /db_qfs/Data1    samfs   -       no         shared,notrace

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/db_qfs/Data-cz1
clzc:sczone:fs> set special=/db_qfs/Data1
clzc:sczone:fs> set type=lofs
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:      fs
    dir:              /db_qfs/Data-cz1
    special:          /db_qfs/Data1
    raw:
    type:             lofs
    options:          []
    cluster-control:  [true]
…
Perform this procedure to add a cluster file system for use by a zone cluster.
Note - Perform all steps of this procedure from a voting node of the global cluster.
phys-schost# vi /etc/vfstab
…
/dev/global/dsk/d12s0 /dev/global/rdsk/d12s0 /global/fs ufs 2 no global,logging
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> add fs
clzc:zoneclustername:fs> set dir=zonecluster-lofs-mountpoint
clzc:zoneclustername:fs> set special=globalcluster-mountpoint
clzc:zoneclustername:fs> set type=lofs
clzc:zoneclustername:fs> end
clzc:zoneclustername> verify
clzc:zoneclustername> commit
clzc:zoneclustername> exit
dir=zonecluster-lofs-mountpoint
Specifies the file system mount point for LOFS to make the cluster file system available to the zone cluster.
special=globalcluster-mountpoint
Specifies the file system mount point of the original cluster file system in the global cluster.
For more information about creating loopback file systems, see How to Create and Mount an LOFS File System in System Administration Guide: Devices and File Systems.
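For background, a loopback mount simply makes an existing directory visible at a second path, as in the following sketch; in a zone cluster, the lofs mount that you configured above is performed by the cluster software rather than by hand:

phys-schost# mount -F lofs globalcluster-mountpoint zonecluster-lofs-mountpoint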
phys-schost# clzonecluster show -v zoneclustername
Example 7-8 Adding a Cluster File System to a Zone Cluster
The following example shows how to add a cluster file system with mount point /global/apache to a zone cluster. The file system is available to a zone cluster using the loopback mount mechanism at the mount point /zone/apache.
phys-schost-1# vi /etc/vfstab
#device                 device                   mount            FS     fsck    mount      mount
#to mount               to fsck                  point            type   pass    at boot    options
#
/dev/md/oracle/dsk/d1   /dev/md/oracle/rdsk/d1   /global/apache   ufs    2       yes        global,logging

phys-schost-1# clzonecluster configure zoneclustername
clzc:zoneclustername> add fs
clzc:zoneclustername:fs> set dir=/zone/apache
clzc:zoneclustername:fs> set special=/global/apache
clzc:zoneclustername:fs> set type=lofs
clzc:zoneclustername:fs> end
clzc:zoneclustername> verify
clzc:zoneclustername> commit
clzc:zoneclustername> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:      fs
    dir:              /zone/apache
    special:          /global/apache
    raw:
    type:             lofs
    options:          []
    cluster-control:  true
…
Next Steps
Configure the cluster file system to be available in the zone cluster by using an HAStoragePlus resource. The HAStoragePlus resource manages the file system by mounting it in the global cluster and then performing a loopback mount on the zone-cluster nodes that currently host the applications that are configured to use the file system. For more information, see Configuring an HAStoragePlus Resource for Cluster File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.
This section describes how to add the direct use of global storage devices by a zone cluster. Global devices are devices that can be accessed by more than one node in the cluster, either one node at a time or multiple nodes concurrently.
After a device is added to a zone cluster, the device is visible only from within that zone cluster.
This section contains the following procedures:
How to Add an Individual Metadevice to a Zone Cluster (Solaris Volume Manager)
How to Add a Disk Set to a Zone Cluster (Solaris Volume Manager)
How to Add a DID Device to a Zone Cluster
Perform this procedure to add an individual metadevice of a Solaris Volume Manager disk set to a zone cluster.
You perform all steps of this procedure from a node of the global cluster.
phys-schost# cldevicegroup status
phys-schost# cldevicegroup online diskset
phys-schost# ls -l /dev/md/diskset
lrwxrwxrwx  1 root  root  8 Jul 22 23:11 /dev/md/diskset -> shared/setnumber
You must use a separate add device session for each set match= entry.
Note - An asterisk (*) is used as a wildcard character in the path name.
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> add device
clzc:zoneclustername:device> set match=/dev/md/diskset/*dsk/metadevice
clzc:zoneclustername:device> end
clzc:zoneclustername> add device
clzc:zoneclustername:device> set match=/dev/md/shared/setnumber/*dsk/metadevice
clzc:zoneclustername:device> end
clzc:zoneclustername> verify
clzc:zoneclustername> commit
clzc:zoneclustername> exit
match=/dev/md/diskset/*dsk/metadevice
Specifies the full logical device path of the metadevice
match=/dev/md/shared/setnumber/*dsk/metadevice
Specifies the full physical device path of the disk set number
The change becomes effective after the zone cluster reboots.
phys-schost# clzonecluster reboot zoneclustername
Example 7-9 Adding a Metadevice to a Zone Cluster
The following example adds the metadevice d1 in the disk set oraset to the sczone zone cluster. The set number of the disk set is 3.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> add device
clzc:sczone:device> set match=/dev/md/oraset/*dsk/d1
clzc:sczone:device> end
clzc:sczone> add device
clzc:sczone:device> set match=/dev/md/shared/3/*dsk/d1
clzc:sczone:device> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster reboot sczone
Perform this procedure to add an entire Solaris Volume Manager disk set to a zone cluster.
You perform all steps of this procedure from a node of the global cluster.
phys-schost# cldevicegroup status
phys-schost# cldevicegroup online diskset
phys-schost# ls -l /dev/md/diskset
lrwxrwxrwx  1 root  root  8 Jul 22 23:11 /dev/md/diskset -> shared/setnumber
You must use a separate add device session for each set match= entry.
Note - An asterisk (*) is used as a wildcard character in the path name.
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> add device
clzc:zoneclustername:device> set match=/dev/md/diskset/*dsk/*
clzc:zoneclustername:device> end
clzc:zoneclustername> add device
clzc:zoneclustername:device> set match=/dev/md/shared/setnumber/*dsk/*
clzc:zoneclustername:device> end
clzc:zoneclustername> verify
clzc:zoneclustername> commit
clzc:zoneclustername> exit
match=/dev/md/diskset/*dsk/*
Specifies the full logical device path of the disk set
match=/dev/md/shared/setnumber/*dsk/*
Specifies the full physical device path of the disk set number
The change becomes effective after the zone cluster reboots.
phys-schost# clzonecluster reboot zoneclustername
Example 7-10 Adding a Disk Set to a Zone Cluster
The following example adds the disk set oraset to the sczone zone cluster. The set number of the disk set is 3.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> add device
clzc:sczone:device> set match=/dev/md/oraset/*dsk/*
clzc:sczone:device> end
clzc:sczone> add device
clzc:sczone:device> set match=/dev/md/shared/3/*dsk/*
clzc:sczone:device> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster reboot sczone
Perform this procedure to add a DID device to a zone cluster.
You perform all steps of this procedure from a node of the global cluster.
The device you add must be connected to all nodes of the zone cluster.
phys-schost# cldevice list -v
Note - An asterisk (*) is used as a wildcard character in the path name.
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> add device
clzc:zoneclustername:device> set match=/dev/did/*dsk/dNs*
clzc:zoneclustername:device> end
clzc:zoneclustername> verify
clzc:zoneclustername> commit
clzc:zoneclustername> exit
match=/dev/did/*dsk/dNs*
Specifies the full device path of the DID device
The change becomes effective after the zone cluster reboots.
phys-schost# clzonecluster reboot zoneclustername
Example 7-11 Adding a DID Device to a Zone Cluster
The following example adds the DID device d10 to the sczone zone cluster.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> add device
clzc:sczone:device> set match=/dev/did/*dsk/d10s*
clzc:sczone:device> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster reboot sczone
Such devices would not be under the control of the clzonecluster command, but would be treated as local devices of the node. See How to Import Raw and Block Devices by Using zonecfg in System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones for more information about exporting raw-disk devices to a non-global zone.
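The following sketch illustrates the zonecfg approach that the referenced procedure describes; zonename stands for the non-global zone on that global-cluster node, and the device path c1t1d0 is hypothetical:

phys-schost# zonecfg -z zonename
zonecfg:zonename> add device
zonecfg:zonename:device> set match=/dev/*dsk/c1t1d0*
zonecfg:zonename:device> end
zonecfg:zonename> commit
zonecfg:zonename> exit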