Oracle Solaris Cluster Software Installation Guide (Oracle Solaris Cluster 3.3 3/13)
6. Creating Non-Global Zones and Zone Clusters
Configuring a Non-Global Zone on a Global-Cluster Node
How to Create a Non-Global Zone on a Global-Cluster Node
Overview of the clzonecluster Utility
How to Prepare for Trusted Extensions Use With Zone Clusters
How to Create a Zone Cluster
Adding File Systems to a Zone Cluster
How to Add a Highly Available Local File System to a Zone Cluster
How to Add a ZFS Storage Pool to a Zone Cluster
How to Add a Cluster File System to a Zone Cluster
Adding Local File Systems to a Specific Zone-Cluster Node
How to Add a Local File System to a Specific Zone-Cluster Node
How to Add a Local ZFS Storage Pool to a Specific Zone-Cluster Node
Adding Storage Devices to a Zone Cluster
How to Add a Global Storage Device to a Zone Cluster
How to Add a Raw-Disk Device to a Specific Zone-Cluster Node
This section provides procedures to configure a cluster of Oracle Solaris Containers non-global zones, called a zone cluster.
The clzonecluster utility creates, modifies, and removes a zone cluster. The clzonecluster utility actively manages a zone cluster. For example, the clzonecluster utility both boots and halts a zone cluster. Progress messages for the clzonecluster utility are output to the console, but are not saved in a log file.
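For example, the following commands boot and then halt an entire zone cluster; the zone-cluster name sczone is illustrative:

phys-schost# clzonecluster boot sczone
phys-schost# clzonecluster halt sczone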
The utility operates in the following levels of scope, similar to the zonecfg utility:
The cluster scope affects the entire zone cluster.
The node scope affects only the one zone-cluster node that is specified.
The resource scope affects either a specific node or the entire zone cluster, depending on which scope you enter the resource scope from. Most resources can only be entered from the node scope. The scope is identified by the following prompts:
clzc:zoneclustername:resource>        cluster-wide setting
clzc:zoneclustername:node:resource>   node-specific setting
You can specify any Oracle Solaris zones resource parameter, as well as parameters that are specific to zone clusters, by using the clzonecluster utility. For information about parameters that you can set in a zone cluster, see the clzonecluster(1CL) man page. Additional information about Oracle Solaris zones resource parameters is in the zonecfg(1M) man page.
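As a sketch of how these scopes nest, the following interactive session assumes a zone cluster named sczone that has a node on the global-cluster node phys-schost-1; the address value is illustrative:

phys-schost# clzonecluster configure sczone
clzc:sczone> select node physical-host=phys-schost-1
clzc:sczone:node> add net
clzc:sczone:node:net> set address=172.16.0.1
clzc:sczone:node:net> end
clzc:sczone:node> end
clzc:sczone> exit

The clzc:sczone:node> prompt indicates the node scope, and the clzc:sczone:node:net> prompt indicates a resource scope that was entered from the node scope.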
This section describes how to configure a cluster of non-global zones.
This procedure prepares the global cluster to use the Trusted Extensions feature of Oracle Solaris software with zone clusters and enables the Trusted Extensions feature.
If you do not plan to enable Trusted Extensions, proceed to How to Create a Zone Cluster.
Perform this procedure on each node in the global cluster.
Before You Begin
Perform the following tasks:
Ensure that the Oracle Solaris OS is installed to support Oracle Solaris Cluster and Trusted Extensions software.
If Oracle Solaris software is already installed on the node, you must ensure that the Oracle Solaris installation meets the requirements for Oracle Solaris Cluster software and any other software that you intend to install on the cluster. Trusted Extensions software is not included in the Oracle Solaris End User software group.
See How to Install Oracle Solaris Software for more information about installing Oracle Solaris software to meet Oracle Solaris Cluster software requirements.
Ensure that an LDAP naming service is configured for use by Trusted Extensions. See Chapter 5, Configuring LDAP for Trusted Extensions (Tasks), in Trusted Extensions Configuration Guide.
Review guidelines for Trusted Extensions in a zone cluster. See Guidelines for Trusted Extensions in a Zone Cluster.
The Trusted Extensions zoneshare and zoneunshare scripts support the ability to export home directories on the system. An Oracle Solaris Cluster configuration does not support this feature.
Disable this feature by replacing each script with a symbolic link to the /bin/true utility. Do this on each global-cluster node.
phys-schost# ln -sf /bin/true /usr/lib/zones/zoneshare
phys-schost# ln -sf /bin/true /usr/lib/zones/zoneunshare
See Run the txzonemgr Script in Trusted Extensions Configuration Guide.
ipaddress:admin_low
Delete the -failover option from any entry that contains that option.
Use the Security Templates wizard in Solaris Management Console as described in How to Construct a Remote Host Template in Trusted Extensions Administrator’s Procedures to perform the following tasks.
Create a new entry for IP addresses used by cluster components and assign each entry a CIPSO template.
Add entries for each of the following IP addresses that exist in the global-cluster node's /etc/inet/hosts file:
Each global-cluster node private IP address
All cl_privnet IP addresses in the global cluster
Each logical-hostname public IP address for the global cluster
Each shared-address public IP address for the global cluster
Entries would look similar to the following.
127.0.0.1:cipso
172.16.4.1:cipso
172.16.4.2:cipso
…
Add an entry to make the default template internal.
0.0.0.0:internal
For more information about CIPSO templates, see Configure the Domain of Interpretation in Trusted Extensions Configuration Guide.
phys-schost# svcadm enable -s svc:/system/labeld:default
phys-schost# shutdown -g0 -y -i6
For more information, see Enable Trusted Extensions in Trusted Extensions Configuration Guide.
phys-schost# svcs labeld
STATE          STIME    FMRI
online         17:52:55 svc:/system/labeld:default
When all steps are completed on all global-cluster nodes, perform the remaining steps of this procedure on each node of the global cluster.
The LDAP server is used by the global zone and by the nodes of the zone cluster.
phys-schost# svcadm enable rlogin
Modify the account management entries by appending a Tab and typing allow_remote or allow_unlabeled respectively, as shown below.
other   account requisite       pam_roles.so.1        Tab  allow_remote
other   account required        pam_unix_account.so.1 Tab  allow_unlabeled
Ensure that the passwd and group lookup entries have files first in the lookup order.
…
passwd: files ldap
group:  files ldap
…
Ensure that the hosts and netmasks lookup entries have cluster listed first in the lookup order.
…
hosts:    cluster files ldap
…
netmasks: cluster files ldap
…
See Make the Global Zone an LDAP Client in Trusted Extensions in Trusted Extensions Configuration Guide.
Use the Add User wizard in Solaris Management Console as described in Creating Roles and Users in Trusted Extensions in Trusted Extensions Configuration Guide.
Next Steps
Create the zone cluster. Go to How to Create a Zone Cluster.
Perform this procedure to create a cluster of non-global zones.
To modify the zone cluster after it is installed, see Performing Zone Cluster Administrative Tasks in Oracle Solaris Cluster System Administration Guide and the clzonecluster(1CL) man page.
Before You Begin
Create a global cluster. See Chapter 3, Establishing the Global Cluster.
Read the guidelines and requirements for creating a zone cluster. See Zone Clusters.
If the zone cluster will use Trusted Extensions, ensure that you have configured and enabled Trusted Extensions as described in How to Prepare for Trusted Extensions Use With Zone Clusters.
Have available the following information:
The unique name to assign to the zone cluster.
Note - To configure a zone cluster when Trusted Extensions is enabled, you must use the name of the Trusted Extensions security label that the zone cluster will use as the name of the zone cluster itself. Create a separate zone cluster for each Trusted Extensions security label that you want to use.
The zone path that the nodes of the zone cluster will use. For more information, see the description of the zonepath property in Resource and Property Types in System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones.
The name of each node in the global cluster on which to create a zone-cluster node.
The zone public hostname, or host alias, that you assign to each zone-cluster node.
If applicable, the public-network IPMP group that each zone-cluster node uses.
If applicable, the name of the public-network adapter that each zone-cluster node uses to connect to the public network.
Note - If you do not configure an IP address for each zone cluster node, two things will occur:
That specific zone cluster will not be able to configure NAS devices for use in the zone cluster. The cluster uses the IP address of the zone cluster node when communicating with the NAS device, so not having an IP address prevents cluster support for fencing NAS devices.
The cluster software will activate any Logical Host IP address on any NIC.
Note - Perform all steps of this procedure from a node of the global cluster.
If any node is in noncluster mode, changes that you make are propagated when the node returns to cluster mode. Therefore, you can create a zone cluster even if some global-cluster nodes are in noncluster mode. When those nodes return to cluster mode, the system performs zone-cluster creation tasks on those nodes.
phys-schost# clnode status
=== Cluster Nodes ===

--- Node Status ---

Node Name          Status
---------          ------
phys-schost-2      Online
phys-schost-1      Online
phys-schost# clsetup
The Main Menu is displayed.
A zone cluster name can contain ASCII letters (a-z and A-Z), numbers, a dash, or an underscore. The maximum length of the name is 20 characters.
Note - The brand and ip-type properties are set by default and cannot be changed.
You can set the following properties:
You can set the following properties:
You can set the following properties:
You can set the following properties:
You can select one or all of the available physical nodes (or hosts), and then configure one zone-cluster node at a time.
You can set the following properties:
The network addresses can be used to configure a logical hostname or shared-IP cluster resources in the zone cluster. The network address is in the zone cluster global scope.
The results of your configuration change are displayed, similar to the following:
>>> Result of the Creation for the Zone Cluster(sczone) <<<

The zone cluster is being created with the following configuration

/usr/cluster/bin/clzonecluster configure sczone
create
set brand=cluster
set zonepath=/zones/sczone
set ip-type=shared
set enable_priv_net=true
add capped-memory
set physical=2G
end
add node
set physical-host=phys-schost-1
set hostname=zc-host-1
add net
set address=172.1.1.1
set physical=net0
end
end
add net
set address=172.1.1.2
end

Zone cluster, sczone has been created and configured successfully.

Continue to install the zone cluster(yes/no) ?
The clsetup utility performs a standard installation of a zone cluster and you cannot specify any options.
The verify subcommand checks for the availability of the specified resources. If the clzonecluster verify command succeeds, there is no output.
phys-schost-1# clzonecluster verify zoneclustername
phys-schost-1# clzonecluster status zoneclustername
=== Zone Clusters ===

--- Zone Cluster Status ---

Name    Node Name    Zone HostName    Status    Zone Status
----    ---------    -------------    ------    -----------
zone    basenode1    zone-1           Offline   Configured
        basenode2    zone-2           Offline   Configured
From the global zone, launch the txzonemgr GUI.
phys-schost# txzonemgr
Select the global zone, then select the item, Configure per-zone name service.
phys-schost-1# clzonecluster install [-c config-profile.xml] zoneclustername
Waiting for zone install commands to complete on all the nodes of the zone cluster "zoneclustername"...
The -c config-profile.xml option specifies a configuration profile for all non-global zones of the zone cluster. Using this option changes only the hostname of the zone, which is unique for each zone in the zone cluster. All profiles must have a .xml extension.
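For example, the following command installs the zone cluster sczone by using a profile; the profile path shown is hypothetical:

phys-schost-1# clzonecluster install -c /net/server/export/profiles/zcprofile.xml sczone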
Installation of the zone cluster might take several minutes.

phys-schost-1# clzonecluster boot zoneclustername
Waiting for zone boot commands to complete on all the nodes of the zone cluster "zoneclustername"...
Perform the following steps on each zone-cluster node.
Note - In the following steps, the non-global zone zcnode and zone-cluster-name share the same name.
phys-schost# zlogin zcnode
zcnode# sysconfig unconfigure
zcnode# reboot
The zlogin session terminates during the reboot.
phys-schost# zlogin -C zcnode
For information about methods to exit from a non-global zone, see How to Exit a Non-Global Zone in System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones.
Perform this step on each node of the zone cluster.
phys-schost# cat /etc/cluster/nodeid N
Ensure that the SMF services have been imported and all services are up before you log in.
The cluster software automatically assigns these IP addresses when the cluster software configures a zone cluster.
In the ifconfig -a output, locate the clprivnet0 logical interface that belongs to the zone cluster. The value for inet is the IP address that was assigned to support the use of the cluster private interconnect by this zone cluster.
zc1# ifconfig -a
lo0:3: flags=20010008c9<UP,LOOPBACK,RUNNING,NOARP,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        zone zc1
        inet 127.0.0.1 netmask ff000000
bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 10.11.166.105 netmask ffffff00 broadcast 10.11.166.255
        groupname sc_ipmp0
        ether 0:3:ba:19:fa:b7
ce0: flags=9040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER> mtu 1500 index 4
        inet 10.11.166.109 netmask ffffff00 broadcast 10.11.166.255
        groupname sc_ipmp0
        ether 0:14:4f:24:74:d8
ce0:3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
        zone zc1
        inet 10.11.166.160 netmask ffffff00 broadcast 10.11.166.255
clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 7
        inet 172.16.0.18 netmask fffffff8 broadcast 172.16.0.23
        ether 0:0:0:0:0:2
clprivnet0:3: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 7
        zone zc1
        inet 172.16.0.22 netmask fffffffc broadcast 172.16.0.23
The hostname for the private interconnect, which is clusternodeN-priv, where N is the global-cluster node ID
172.16.0.22 clusternodeN-priv
Each net resource that was specified to the clzonecluster command when you created the zone cluster
Use the Security Templates wizard in Solaris Management Console as described in How to Construct a Remote Host Template in Trusted Extensions Administrator’s Procedures to perform the following tasks.
Create a new entry for IP addresses used by zone-cluster components and assign each entry a CIPSO template.
Add entries for each of the following IP addresses that exist in the zone-cluster node's /etc/inet/hosts file:
Each zone-cluster node private IP address
All cl_privnet IP addresses in the zone cluster
Each logical-hostname public IP address for the zone cluster
Each shared-address public IP address for the zone cluster
Entries would look similar to the following.
127.0.0.1:cipso
172.16.4.1:cipso
172.16.4.2:cipso
…
Add an entry to make the default template internal.
0.0.0.0:internal
For more information about CIPSO templates, see Configure the Domain of Interpretation in Trusted Extensions Configuration Guide.
Perform the following commands on each node of the zone cluster.
phys-schost# zlogin zcnode
zcnode# svcadm enable svc:/network/dns/client:default
zcnode# svcadm enable svc:/network/login:rlogin
zcnode# reboot
Example 6-2 Configuration File to Create a Zone Cluster
The following example shows the contents of a command file that can be used with the clzonecluster utility to create a zone cluster. The file contains the series of clzonecluster commands that you would input manually.
In the following configuration, the zone cluster sczone is created on the global-cluster node phys-schost-1. The zone cluster uses /zones/sczone as the zone path and public IP address 172.16.2.2. The first node of the zone cluster is assigned the hostname zc-host-1 and uses the network address 172.16.0.1 and the bge0 adapter. The second node of the zone cluster is created on the global-cluster node phys-schost-2. This second zone-cluster node is assigned the hostname zc-host-2 and uses the network address 172.16.0.2 and the bge1 adapter.
create
set zonepath=/zones/sczone
add net
set address=172.16.2.2
end
add node
set physical-host=phys-schost-1
set hostname=zc-host-1
add net
set address=172.16.0.1
set physical=bge0
end
end
add sysid
set root_password=encrypted_password
end
add node
set physical-host=phys-schost-2
set hostname=zc-host-2
add net
set address=172.16.0.2
set physical=bge1
end
end
commit
exit
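Assuming that the preceding commands are saved in a file such as /root/sczone-config (the path is illustrative), the zone cluster could then be created in a single step by passing the file to the configure subcommand:

phys-schost-1# clzonecluster configure -f /root/sczone-config sczone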
Next Steps
To add the use of a file system to the zone cluster, go to Adding File Systems to a Zone Cluster.
To add the use of global storage devices to the zone cluster, go to Adding Storage Devices to a Zone Cluster.
See Also
To patch a zone cluster, follow procedures in Chapter 11, Patching Oracle Solaris Cluster Software and Firmware, in Oracle Solaris Cluster System Administration Guide. These procedures include special instructions for zone clusters, where needed.
This section provides procedures to add file systems for use by the zone cluster.
After a file system is added to a zone cluster and brought online, the file system is authorized for use from within that zone cluster. To mount the file system for use, configure the file system by using cluster resources such as SUNW.HAStoragePlus or SUNW.ScalMountPoint.
The following procedures are in this section:
In addition, to configure a ZFS storage pool to be highly available in a zone cluster, see How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS Highly Available in Oracle Solaris Cluster Data Services Planning and Administration Guide.
Perform this procedure to configure a highly available local file system on the global cluster for use by the zone cluster. The file system is added to the zone cluster and is configured with an HAStoragePlus resource to make the local file system highly available.
Note - Perform all steps of the procedure from a node of the global cluster.
phys-schost# clsetup
The Main Menu is displayed.
Tip - To return to a previous screen, type the < key and press Return.
The Zone Cluster Tasks Menu is displayed.
The Select Zone Cluster menu is displayed.
The Storage Type Selection menu is displayed.
The File System Selection for the Zone Cluster menu is displayed.
The file systems in the list are those that are configured on the shared disks and can be accessed by the nodes where the zone cluster is configured. You can also type e to manually specify all properties for a file system.
The Mount Type Selection menu is displayed.
The File System Properties for the Zone Cluster menu is displayed.
When finished, type d and press Return.
The results of your configuration change are displayed.
phys-schost# clzonecluster show -v zoneclustername
Example 6-3 Adding a Highly Available Local File System to a Zone Cluster
This example adds the local file system /global/oracle/d1 for use by the sczone zone cluster.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/global/oracle/d1
clzc:sczone:fs> set special=/dev/md/oracle/dsk/d1
clzc:sczone:fs> set raw=/dev/md/oracle/rdsk/d1
clzc:sczone:fs> set type=ufs
clzc:sczone:fs> add options [logging]
clzc:sczone:fs> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:       fs
    dir:               /global/oracle/d1
    special:           /dev/md/oracle/dsk/d1
    raw:               /dev/md/oracle/rdsk/d1
    type:              ufs
    options:           [logging]
    cluster-control:   [true]
…
Next Steps
Configure the file system to be highly available by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file system on the zone-cluster node that currently hosts the applications that are configured to use the file system. See Enabling Highly Available Local File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.
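The following commands, run from the global zone, are a minimal sketch of that configuration for the file system in Example 6-3; the resource-group name oracle-rg and resource name hasp-rs are illustrative:

phys-schost# clresourcetype register -Z sczone SUNW.HAStoragePlus
phys-schost# clresourcegroup create -Z sczone oracle-rg
phys-schost# clresource create -Z sczone -g oracle-rg -t SUNW.HAStoragePlus \
-p FileSystemMountPoints=/global/oracle/d1 hasp-rs
phys-schost# clresourcegroup online -Z sczone -eM oracle-rg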
Perform this procedure to add a ZFS storage pool for use by a zone cluster. The pool can be local to a single zone-cluster node or configured with HAStoragePlus to be highly available.
The clsetup utility discovers and displays all configured ZFS pools on the shared disks that can be accessed by the nodes where the selected zone cluster is configured. After you use the clsetup utility to add a ZFS storage pool in cluster scope to an existing zone cluster, you can use the clzonecluster command to modify the configuration or to add a ZFS storage pool in node-scope.
Before You Begin
Ensure that the ZFS pool is created on shared disks that are connected to all nodes of the zone cluster. See Oracle Solaris ZFS Administration Guide for procedures to create a ZFS pool.
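For example, a mirrored pool could be created on two shared disks as follows; the pool name and disk names are illustrative:

phys-schost# zpool create zpool1 mirror c2t1d0 c3t1d0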
Note - Perform all steps of this procedure from a node of the global zone.
phys-schost# clsetup
The Main Menu is displayed.
Tip - To return to a previous screen, type the < key and press Return.
The Zone Cluster Tasks Menu is displayed.
The Select Zone Cluster menu is displayed.
The Storage Type Selection menu is displayed.
The ZFS Pool Selection for the Zone Cluster menu is displayed.
The ZFS pools in the list are those that are configured on the shared disks and can be accessed by the nodes where the zone cluster is configured. You can also type e to manually specify properties for a ZFS pool.
The ZFS Pool Dataset Property for the Zone Cluster menu is displayed. The selected ZFS pool is assigned to the name property.
The Review File Systems/Storage Devices for the Zone Cluster menu is displayed.
The results of your configuration change are displayed. For example:
>>> Result of Configuration Change to the Zone Cluster(sczone) <<<

Adding file systems or storage devices to sczone zone cluster...

The zone cluster is being created with the following configuration

/usr/cluster/bin/clzonecluster configure sczone
add dataset
set name=myzpool5
end

Configuration change to sczone zone cluster succeeded.
phys-schost# clzonecluster show -v zoneclustername
Example 6-4 Adding a ZFS Storage Pool to a Zone Cluster
The following example shows the ZFS storage pool zpool1 added to the zone cluster sczone.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> add dataset
clzc:sczone:dataset> set name=zpool1
clzc:sczone:dataset> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
  Resource Name:   dataset
    name:          zpool1
…
Next Steps
Configure the ZFS storage pool to be highly available by using an HAStoragePlus resource. The HAStoragePlus resource manages the mounting of the file systems that are in the pool on the zone-cluster node that currently hosts the applications that are configured to use the file systems. See Enabling Highly Available Local File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.
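A minimal sketch of that configuration for the pool in Example 6-4, assuming that the SUNW.HAStoragePlus resource type is already registered in the zone cluster; the group and resource names are illustrative:

phys-schost# clresourcegroup create -Z sczone zpool-rg
phys-schost# clresource create -Z sczone -g zpool-rg -t SUNW.HAStoragePlus \
-p Zpools=zpool1 hasp-zpool-rs
phys-schost# clresourcegroup online -Z sczone -eM zpool-rg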
The clsetup utility discovers and displays the available file systems that are configured on the cluster nodes where the selected zone cluster is configured. When you use the clsetup utility to add a file system, the file system is added in cluster scope.
You can add the following types of cluster file systems to a zone cluster:
UFS cluster file system - You specify the file system type in the /etc/vfstab file, using the global mount option. This file system can be located on the shared disk or on a Solaris Volume Manager device.
Sun QFS shared file system - You specify the file system type in the /etc/vfstab file, using the shared mount option.
Note - At this time, QFS shared file systems are only supported for use in clusters that are configured with Oracle Real Application Clusters (RAC). On clusters that are not configured with Oracle RAC, you can use a single-machine QFS file system that is configured as a highly available local file system.
ACFS - Discovered automatically, based on the ORACLE_HOME path you provide.
Before You Begin
Ensure that the cluster file system you want to add to the zone cluster is configured. See Planning Cluster File Systems and Chapter 5, Creating a Cluster File System.
Note - Perform all steps of this procedure from a voting node of the global cluster.
phys-schost# vi /etc/vfstab
/dev/md/datadg/dsk/d0 /dev/md/datadg/rdsk/d0 /global/fs ufs 2 no global,logging
Data-cz1 - /db_qfs/Data1 samfs - no shared,notrace
phys-schost# clsetup
The Main Menu is displayed.
Tip - To return to a previous screen, type the < key and press Return.
The Zone Cluster Tasks Menu is displayed.
The Select Zone Cluster menu is displayed.
The Storage Type Selection menu is displayed.
The File System Selection for the Zone Cluster menu is displayed.
The Mount Type Selection menu is displayed.
You can also type e to manually specify all properties for a file system.
Note - If you are using an ACFS file system, type a to select Discover ACFS and then specify the ORACLE_HOME directory.
Note - If you chose an ACFS file system in Step 8, the clsetup utility skips this step because ACFS supports only the direct mount type.
For more information about creating loopback file systems, see How to Create and Mount an LOFS File System in System Administration Guide: Devices and File Systems.
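For reference, a loopback mount makes an existing directory visible at a second mount point. Outside the cluster framework, such a mount could be created manually with a command like the following; both paths are illustrative:

phys-schost# mount -F lofs /global/fs /mnt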
The File System Properties for the Zone Cluster menu is displayed.
Type the number for the dir property and press Return. Then type the LOFS mount point directory name in the New Value field and press Return.
When finished, type d and press Return. The Review File Systems/Storage Devices for the Zone Cluster menu is displayed.
The results of your configuration change are displayed. For example:
>>> Result of Configuration Change to the Zone Cluster(sczone) <<<

Adding file systems or storage devices to sczone zone cluster...

The zone cluster is being created with the following configuration

/usr/cluster/bin/clzonecluster configure sczone
add fs
set dir=/dev/md/ddg/dsk/d9
set special=/dev/md/ddg/dsk/d10
set raw=/dev/md/ddg/rdsk/d10
set type=lofs
end

Configuration change to sczone zone cluster succeeded.
phys-schost# clzonecluster show -v zoneclustername
Next Steps
(Optional) Configure the cluster file system to be managed by an HAStoragePlus resource. The HAStoragePlus resource manages the file system by mounting it in the global cluster and then performing a loopback mount on the zone-cluster nodes that currently host the applications that are configured to use the file system. For more information, see Configuring an HAStoragePlus Resource for Cluster File Systems in Oracle Solaris Cluster Data Services Planning and Administration Guide.
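A sketch of that configuration, assuming the zone cluster sczone, the /global/fs mount point from the UFS vfstab entry earlier in this procedure, and that SUNW.HAStoragePlus is already registered in the zone cluster. Because a cluster file system is available on multiple nodes at once, the resource group is created as a multiple-master group; all names are illustrative:

phys-schost# clresourcegroup create -Z sczone -p maximum_primaries=2 \
-p desired_primaries=2 cfs-rg
phys-schost# clresource create -Z sczone -g cfs-rg -t SUNW.HAStoragePlus \
-p FileSystemMountPoints=/global/fs cfs-hasp-rs
phys-schost# clresourcegroup online -Z sczone -eM cfs-rg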
This section describes how to add file systems that are dedicated to a single zone-cluster node. To instead configure file systems for use by the entire zone cluster, go to Adding File Systems to a Zone Cluster.
This section contains the following procedures:
How to Add a Local File System to a Specific Zone-Cluster Node
How to Add a Local ZFS Storage Pool to a Specific Zone-Cluster Node
Perform this procedure to add a local file system to a single, specific zone-cluster node of a specific zone cluster. The file system is not managed by Oracle Solaris Cluster software but is instead passed to the underlying Oracle Solaris zone.
Note - To add a highly available local file system to a zone cluster, perform procedures in How to Add a Highly Available Local File System to a Zone Cluster.
Note - Perform all steps of the procedure from a node of the global cluster.
Use local disks of the global-cluster node that hosts the intended zone-cluster node.
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> select node physical-host=baseclusternode
clzc:zoneclustername:node> add fs
clzc:zoneclustername:node:fs> set dir=mountpoint
clzc:zoneclustername:node:fs> set special=disk-device-name
clzc:zoneclustername:node:fs> set raw=raw-disk-device-name
clzc:zoneclustername:node:fs> set type=FS-type
clzc:zoneclustername:node:fs> end
clzc:zoneclustername:node> end
clzc:zoneclustername> verify
clzc:zoneclustername> commit
clzc:zoneclustername> exit
dir=mountpoint
    Specifies the file-system mount point
special=disk-device-name
    Specifies the name of the disk device
raw=raw-disk-device-name
    Specifies the name of the raw-disk device
type=FS-type
    Specifies the type of file system
Note - Enable logging for UFS file systems.
phys-schost# clzonecluster show -v zoneclustername
Example 6-5 Adding a Local File System to a Zone-Cluster Node
This example adds a local UFS file system /local/data for use by a node of the sczone zone cluster. This zone-cluster node is hosted on global-cluster node phys-schost-1.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> select node physical-host=phys-schost-1
clzc:sczone:node> add fs
clzc:sczone:node:fs> set dir=/local/data
clzc:sczone:node:fs> set special=/dev/md/localdg/dsk/d1
clzc:sczone:node:fs> set raw=/dev/md/localdg/rdsk/d1
clzc:sczone:node:fs> set type=ufs
clzc:sczone:node:fs> add options [logging]
clzc:sczone:node:fs> end
clzc:sczone:node> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
--- Solaris Resources for phys-schost-1 ---
…
  Resource Name:       fs
    dir:               /local/data
    special:           /dev/md/localdg/dsk/d1
    raw:               /dev/md/localdg/rdsk/d1
    type:              ufs
    options:           [logging]
    cluster-control:   false
...
Perform this procedure to add a local ZFS storage pool to a specific zone-cluster node. The local ZFS pool is not managed by Oracle Solaris Cluster software but is instead passed to the underlying Oracle Solaris zone.
Note - To add a highly available local ZFS pool to a zone cluster, see How to Add a Highly Available Local File System to a Zone Cluster.
Note - Perform all steps of the procedure from a node of the global cluster.
Use local disks of the global-cluster node that hosts the intended zone-cluster node.
phys-schost# clzonecluster configure zoneclustername
clzc:zoneclustername> select node physical-host=baseclusternode
clzc:zoneclustername:node> add dataset
clzc:zoneclustername:node:dataset> set name=localZFSpoolname
clzc:zoneclustername:node:dataset> end
clzc:zoneclustername:node> end
clzc:zoneclustername> verify
clzc:zoneclustername> commit
clzc:zoneclustername> exit
name=localZFSpoolname
    Specifies the name of the local ZFS pool
phys-schost# clzonecluster show -v zoneclustername
Example 6-6 Adding a Local ZFS Pool to a Zone-Cluster Node
This example adds the local ZFS pool local_pool for use by a node of the sczone zone cluster. This zone-cluster node is hosted on global-cluster node phys-schost-1.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> select node physical-host=phys-schost-1
clzc:sczone:node> add dataset
clzc:sczone:node:dataset> set name=local_pool
clzc:sczone:node:dataset> end
clzc:sczone:node> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
--- Solaris Resources for phys-schost-1 ---
…
  Resource Name:   dataset
    name:          local_pool
This section describes how to add the direct use of global storage devices by a zone cluster or add storage devices that are dedicated to a single zone-cluster node. Global devices are devices that can be accessed by more than one node in the cluster, either one node at a time or multiple nodes concurrently.
After a device is added to a zone cluster, the device is visible only from within that zone cluster.
This section contains the following procedures:
Perform this procedure to add one of the following types of storage devices in cluster scope:
Raw-disk devices
Solaris Volume Manager disk sets (not including multi-owner)
Note - To add a raw-disk device to a specific zone-cluster node, go instead to How to Add a Raw-Disk Device to a Specific Zone-Cluster Node.
The clsetup utility discovers and displays the available storage devices that are configured on the cluster nodes where the selected zone cluster is configured. After you use the clsetup utility to add a storage device to an existing zone cluster, use the clzonecluster command to modify the configuration. For instructions on using the clzonecluster command to remove a storage device from a zone cluster, see How to Remove a Storage Device From a Zone Cluster in Oracle Solaris Cluster System Administration Guide.
Note - Perform all steps of the procedure from a node of the global cluster.
phys-schost# cldevicegroup status
phys-schost# cldevicegroup online device
phys-schost# clsetup
The Main Menu is displayed.
Tip - To return to a previous screen, type the < key and press Return.
The Zone Cluster Tasks Menu is displayed.
The Select Zone Cluster menu is displayed.
The Storage Type Selection menu is displayed.
A list of the available devices is displayed.
You can also type e to manually specify properties for a storage device.
The Storage Device Property for the Zone Cluster menu is displayed.
Note - An asterisk (*) is used as a wildcard character in the path name.
When finished, type d and press Return. The Review File Systems/Storage Devices for the Zone Cluster menu is displayed.
The results of your configuration change are displayed. For example:
>>> Result of Configuration Change to the Zone Cluster(sczone) <<<

Adding file systems or storage devices to sczone zone cluster...

The zone cluster is being created with the following configuration

/usr/cluster/bin/clzonecluster configure sczone
add device
set match=/dev/md/ddg/*dsk/*
end
add device
set match=/dev/md/shared/1/*dsk/*
end

Configuration change to sczone zone cluster succeeded.
The change will become effective after the zone cluster reboots.
phys-schost# clzonecluster show -v zoneclustername
Perform this procedure to add a raw-disk device to a specific zone-cluster node. This device is not under Oracle Solaris Cluster control.
Note - To add a raw-disk device for use by the full zone cluster, go instead to How to Add a Global Storage Device to a Zone Cluster.
Note - Perform all steps of the procedure from a node of the global cluster.
Note - An asterisk (*) is used as a wildcard character in the path name.
phys-schost# clzonecluster configure zone-cluster-name
clzc:zone-cluster-name> select node physical-host=baseclusternode
clzc:zone-cluster-name:node> add device
clzc:zone-cluster-name:node:device> set match=/dev/*dsk/cNtXdYs*
clzc:zone-cluster-name:node:device> end
clzc:zone-cluster-name:node> end
clzc:zone-cluster-name> verify
clzc:zone-cluster-name> commit
clzc:zone-cluster-name> exit
match=/dev/*dsk/cNtXdYs*
    Specifies the full device path of the raw-disk device
phys-schost# clzonecluster show -v zoneclustername
Example 6-7 Adding a Raw-Disk Device to a Specific Zone-Cluster Node
The following example adds the raw-disk device c1t1d0s0 for use by a node of the sczone zone cluster. This zone-cluster node is hosted on global-cluster node phys-schost-1.
phys-schost-1# clzonecluster configure sczone
clzc:sczone> select node physical-host=phys-schost-1
clzc:sczone:node> add device
clzc:sczone:node:device> set match=/dev/*dsk/c1t1d0s0
clzc:sczone:node:device> end
clzc:sczone:node> end
clzc:sczone> verify
clzc:sczone> commit
clzc:sczone> exit

phys-schost-1# clzonecluster show -v sczone
…
--- Solaris Resources for phys-schost-1 ---
…
  Resource Name:   device
    name:          /dev/*dsk/c1t1d0s0