This section provides information about configuring the data services that are supported with Open HA Cluster 2009.06 software.
The following table lists where to find information about installing and configuring each supported data service. Use those procedures to configure data services for the Open HA Cluster 2009.06 release, except for the following changes:
Install application software as described by the application's installation instructions for OpenSolaris environments.
Install the data-service agent by following instructions in How to Prepare to Download Open HA Cluster Software and How to Install Open HA Cluster 2009.06 Software.
| Data Service | Documentation |
|---|---|
| Data Service for Apache | |
| Data Service for Apache Tomcat | Sun Cluster Data Service for Apache Tomcat Guide for Solaris OS |
| Data Service for DHCP | |
| Data Service for DNS | |
| Data Service for Glassfish | Sun Cluster Data Service for Sun Java System Application Server Guide for Solaris OS |
| Data Service for Kerberos | |
| Data Service for MySQL | |
| Data Service for NFS | |
| Data Service for Samba | |
| Data Service for Solaris Containers | How to Configure the HA-Containers Zone Boot Component for ipkg Brand Zones; Sun Cluster Data Service for Solaris Containers Guide for Solaris OS |
Perform this procedure to configure the zone boot component (sczbt) of the Solaris Containers data service to use ipkg brand non-global zones. Use this procedure instead of the instructions for sczbt that are in Sun Cluster Data Service for Solaris Containers Guide for Solaris OS. All other procedures in the Solaris Containers data-service manual are valid for an Open HA Cluster 2009.06 configuration.
Become superuser on one node of the cluster.
Alternatively, if your user account is assigned the Primary Administrator profile, execute commands as non-root through a profile shell, or prefix the command with the pfexec command.
Create a resource group.
phys-schost-1# /usr/cluster/bin/clresourcegroup create resourcegroup
Create a mirrored ZFS storage pool to be used for the HA zone root path.
phys-schost-1# zpool create -m mountpoint pool mirror /dev/rdsk/cNtXdY \
/dev/rdsk/cNtXdZ
phys-schost# zpool export pool
Register the HAStoragePlus resource type.
phys-schost-1# /usr/cluster/bin/clresourcetype register SUNW.HAStoragePlus
Create an HAStoragePlus resource.
Specify the ZFS storage pool and the resource group that you created.
phys-schost-1# /usr/cluster/bin/clresource create -t SUNW.HAStoragePlus \
-g resourcegroup -p Zpools=pool hasp-resource
Bring the resource group online.
phys-schost-1# clresourcegroup online -eM resourcegroup
Create a ZFS file-system dataset on the ZFS storage pool that you created.
You will use this file system as the zone root path for the ipkg brand zone that you create later in this procedure.
phys-schost-1# zfs create pool/filesystem
Ensure that the universally unique ID (UUID) of each node's boot-environment (BE) root dataset is the same value.
Determine the UUID of the node where you initially created the zone.
Output is similar to the following.
phys-schost-1# beadm list -H
…
b101b-SC;8fe53702-16c3-eb21-ed85-d19af92c6bbd;NR;/;756…
In this example output, the UUID is 8fe53702-16c3-eb21-ed85-d19af92c6bbd and the BE is b101b-SC.
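The `beadm list -H` output is semicolon-delimited, with the UUID in the second field. As an illustrative sketch (not part of the documented procedure), the UUID could be extracted with `cut`; the sample line below is the one shown above:

```shell
# Extract the UUID (second semicolon-delimited field) from a sample line
# of `beadm list -H` output. The sample line is taken from the text above.
be_line='b101b-SC;8fe53702-16c3-eb21-ed85-d19af92c6bbd;NR;/;756'
uuid=$(printf '%s\n' "$be_line" | cut -d';' -f2)
echo "$uuid"
```

On a live node you would feed the actual `beadm list -H` output through the same `cut` invocation instead of the sample string.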
Set the same UUID on the second node.
phys-schost-2# zfs set org.opensolaris.libbe:uuid=uuid rpool/ROOT/BE
On both nodes, configure the ipkg brand non-global zone.
Set the zone root path to the file system that you created on the ZFS storage pool.
phys-schost# zonecfg -z zonename \
'create ; set zonepath=/pool/filesystem/zonename ; set autoboot=false'
phys-schost# zoneadm list -cv
  ID NAME      STATUS      PATH                        BRAND   IP
   0 global    running     /                           native  shared
   - zonename  configured  /pool/filesystem/zonename   ipkg    shared
From the node that masters the HAStoragePlus resource, install the ipkg brand non-global zone.
Determine which node masters the HAStoragePlus resource.
Output is similar to the following:
phys-schost# /usr/cluster/bin/clresource status
=== Cluster Resources ===
Resource Name    Node Name       Status    Message
-------------    ---------       ------    -------
hasp-resource    phys-schost-1   Online    Online
                 phys-schost-2   Offline   Offline
Perform the remaining tasks in this step from the node that masters the HAStoragePlus resource.
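If you want to script this check, the mastering node can be read from the status output. The helper below is hypothetical and not part of the documented procedure; on a real cluster the text would come from /usr/cluster/bin/clresource status, and here a sample in the same layout stands in for it:

```shell
# Hypothetical helper: find the node that currently masters the resource,
# given text in the `clresource status` layout shown above. The node name
# is the second field on the line whose Status column reads "Online".
status_output='hasp-resource    phys-schost-1   Online    Online
                 phys-schost-2   Offline   Offline'
master=$(printf '%s\n' "$status_output" | awk '$3 == "Online" {print $2; exit}')
echo "$master"
```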
Install the zone on the node that masters the HAStoragePlus resource for the ZFS storage pool.
phys-schost-1# zoneadm -z zonename install
Verify that the zone is installed.
phys-schost-1# zoneadm list -cv
  ID NAME      STATUS      PATH                        BRAND   IP
   0 global    running     /                           native  shared
   - zonename  installed   /pool/filesystem/zonename   ipkg    shared
Boot the zone that you created and verify that the zone is running.
phys-schost-1# zoneadm -z zonename boot
phys-schost-1# zoneadm list -cv
  ID NAME      STATUS      PATH                        BRAND   IP
   0 global    running     /                           native  shared
   - zonename  running     /pool/filesystem/zonename   ipkg    shared
Open a new terminal window and log in to the zone.
Halt the zone.
The zone's status should return to installed.
phys-schost-1# zoneadm -z zonename halt
Switch the resource group to the other node and forcibly attach the zone.
Switch over the resource group.
Output is similar to the following, where phys-schost-1 is the node that currently masters the resource group and phys-schost-2 is the node to which you switch the resource group.
phys-schost-1# /usr/cluster/bin/clresourcegroup switch -n phys-schost-2 resourcegroup
Perform the remaining tasks in this step from the node to which you switch the resource group.
Forcibly attach the zone to the node to which you switched the resource group.
phys-schost-2# zoneadm -z zonename attach -F
Verify that the zone is installed on the node.
Output is similar to the following:
phys-schost-2# zoneadm list -cv
  ID NAME      STATUS      PATH                        BRAND   IP
   0 global    running     /                           native  shared
   - zonename  installed   /pool/filesystem/zonename   ipkg    shared
Boot the zone.
phys-schost-2# zoneadm -z zonename boot
Open a new terminal window and log in to the zone.
Perform this step to verify that the zone is functional.
phys-schost-2# zlogin -C zonename
Halt the zone.
phys-schost-2# zoneadm -z zonename halt
From one node, configure the zone-boot (sczbt) resource.
Register the SUNW.gds resource type.
phys-schost-1# /usr/cluster/bin/clresourcetype register SUNW.gds
Create a directory on the ZFS file system that you created.
You will specify this directory to store the parameter values that you set for the zone-boot resource.
phys-schost-1# mkdir /pool/filesystem/parameterdir
Install and configure the HA-Containers agent.
phys-schost# pkg install SUNWsczone
phys-schost# cd /opt/SUNWsczone/sczbt/util
phys-schost# cp -p sczbt_config sczbt_config.zoneboot-resource
phys-schost# vi sczbt_config.zoneboot-resource

Add or modify the following entries in the file.

RS="zoneboot-resource"
RG="resourcegroup"
PARAMETERDIR="/pool/filesystem/parameterdir"
SC_NETWORK="false"
SC_LH=""
FAILOVER="true"
HAS_RS="hasp-resource"
Zonename="zonename"
Zonebrand="ipkg"
Zonebootopt=""
Milestone="multi-user-server"
LXrunlevel="3"
SLrunlevel="3"
Mounts=""

Save and exit the file.
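Because sczbt_register reads these key="value" entries, it can help to sanity-check the file before registering the resource. The sketch below is illustrative and not part of the documented procedure; the sample file path and the choice of keys to check are assumptions:

```shell
# Illustrative sanity check: verify that key entries are present and
# non-empty in a key="value" config file of the form shown above.
cfg=/tmp/sczbt_config.sample   # hypothetical sample file
cat > "$cfg" <<'EOF'
RS="zoneboot-resource"
RG="resourcegroup"
Zonename="zonename"
Zonebrand="ipkg"
EOF
missing=0
for key in RS RG Zonename Zonebrand; do
    # Require at least one character between the quotes.
    grep -q "^${key}=\"..*\"" "$cfg" || { echo "missing or empty: $key"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "config looks complete"
```

To check your real file, point `cfg` at sczbt_config.zoneboot-resource instead of the generated sample.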
Configure the zone-boot resource.
The resource is configured with the parameters that you set in the zone-boot configuration file.
phys-schost-1# ./sczbt_register -f ./sczbt_config.zoneboot-resource
Enable the zone-boot resource.
phys-schost-1# /usr/cluster/bin/clresource enable zoneboot-resource
Verify that the resource group can switch to another node and the ZFS storage pool successfully starts there after the switchover.
Switch the resource group to another node.
phys-schost-2# /usr/cluster/bin/clresourcegroup switch -n phys-schost-1 resourcegroup
Verify that the resource group is now online on the new node.
Output is similar to the following:
phys-schost-1# /usr/cluster/bin/clresourcegroup status
=== Cluster Resource Groups ===
Group Name      Node Name       Suspended   Status
----------      ---------       ---------   ------
resourcegroup   phys-schost-1   No          Online
                phys-schost-2   No          Offline
Verify that the zone is running on the new node.
phys-schost-1# zoneadm list -cv
  ID NAME      STATUS      PATH                        BRAND   IP
   0 global    running     /                           native  shared
   1 zonename  running     /pool/filesystem/zonename   ipkg    shared
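This check, too, can be scripted by reading the STATUS column from the zone listing. The helper below is hypothetical; a sample in the `zoneadm list -cv` layout shown above stands in for live command output:

```shell
# Hypothetical helper: report the STATUS column for a named zone, given
# text in the `zoneadm list -cv` layout shown above (NAME is the second
# field, STATUS the third).
zoneadm_output='  ID NAME      STATUS   PATH                       BRAND  IP
   0 global    running  /                          native shared
   1 zonename  running  /pool/filesystem/zonename  ipkg   shared'
status=$(printf '%s\n' "$zoneadm_output" | awk '$2 == "zonename" {print $3}')
echo "$status"
```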
This example creates the HAStoragePlus resource hasp-rs, which manages a mirrored ZFS storage pool hapool in the resource group zone-rg. The hapool/ipkg file system on the storage pool holds the zone root path of the ipkg brand non-global zone ipkgzone1, which is configured on both phys-schost-1 and phys-schost-2. The zone-boot resource ipkgzone1-rs, which is based on the SUNW.gds resource type, manages ipkgzone1.
Create a resource group.

phys-schost-1# /usr/cluster/bin/clresourcegroup create zone-rg

Create a mirrored ZFS storage pool to be used for the HA zone root path.

phys-schost-1# zpool create -m /ha-zones hapool mirror /dev/rdsk/c4t6d0 \
/dev/rdsk/c5t6d0
phys-schost# zpool export hapool

Create an HAStoragePlus resource that uses the resource group and mirrored ZFS storage pool that you created.

phys-schost-1# /usr/cluster/bin/clresourcetype register SUNW.HAStoragePlus
phys-schost-1# /usr/cluster/bin/clresource create -t SUNW.HAStoragePlus \
-g zone-rg -p Zpools=hapool hasp-rs

Bring the resource group online.

phys-schost-1# clresourcegroup online -eM zone-rg

Create a ZFS file-system dataset on the ZFS storage pool that you created.

phys-schost-1# zfs create hapool/ipkg

Ensure that the universally unique ID (UUID) of each node's boot-environment (BE) root dataset is the same value on both nodes.

phys-schost-1# beadm list -H
…
zfsbe;8fe53702-16c3-eb21-ed85-d19af92c6bbd;NR;/;7565844992;static;1229439064
…
phys-schost-2# zfs set org.opensolaris.libbe:uuid=8fe53702-16c3-eb21-ed85-d19af92c6bbd rpool/ROOT/zfsbe

Configure the ipkg brand non-global zone.

phys-schost-1# zonecfg -z ipkgzone1 'create ; \
set zonepath=/hapool/ipkg/ipkgzone1 ; set autoboot=false'
phys-schost-1# zoneadm list -cv
  ID NAME       STATUS      PATH                     BRAND   IP
   0 global     running     /                        native  shared
   - ipkgzone1  configured  /hapool/ipkg/ipkgzone1   ipkg    shared

Repeat on phys-schost-2.

Identify the node that masters the HAStoragePlus resource, and from that node install ipkgzone1.

phys-schost-1# /usr/cluster/bin/clresource status
=== Cluster Resources ===
Resource Name   Node Name       Status    Message
-------------   ---------       ------    -------
hasp-rs         phys-schost-1   Online    Online
                phys-schost-2   Offline   Offline
phys-schost-1# zoneadm -z ipkgzone1 install
phys-schost-1# zoneadm list -cv
  ID NAME       STATUS      PATH                     BRAND   IP
   0 global     running     /                        native  shared
   - ipkgzone1  installed   /hapool/ipkg/ipkgzone1   ipkg    shared
phys-schost-1# zoneadm -z ipkgzone1 boot
phys-schost-1# zoneadm list -cv
  ID NAME       STATUS      PATH                     BRAND   IP
   0 global     running     /                        native  shared
   - ipkgzone1  running     /hapool/ipkg/ipkgzone1   ipkg    shared

Open a new terminal window and log in to ipkgzone1.

phys-schost-1# zoneadm -z ipkgzone1 halt

Switch zone-rg to phys-schost-2 and forcibly attach the zone.

phys-schost-1# /usr/cluster/bin/clresourcegroup switch -n phys-schost-2 zone-rg
phys-schost-2# zoneadm -z ipkgzone1 attach -F
phys-schost-2# zoneadm list -cv
  ID NAME       STATUS      PATH                     BRAND   IP
   0 global     running     /                        native  shared
   - ipkgzone1  installed   /hapool/ipkg/ipkgzone1   ipkg    shared
phys-schost-2# zoneadm -z ipkgzone1 boot

Open a new terminal window and log in to ipkgzone1.

phys-schost-2# zlogin -C ipkgzone1
phys-schost-2# zoneadm -z ipkgzone1 halt

From one node, configure the zone-boot (sczbt) resource.

phys-schost-1# /usr/cluster/bin/clresourcetype register SUNW.gds
phys-schost-1# mkdir /hapool/ipkg/params

Install and configure the HA-Containers agent.

phys-schost# pkg install SUNWsczone
phys-schost# cd /opt/SUNWsczone/sczbt/util
phys-schost# cp -p sczbt_config sczbt_config.ipkgzone1-rs
phys-schost# vi sczbt_config.ipkgzone1-rs

Add or modify the following entries in the sczbt_config.ipkgzone1-rs file.

RS="ipkgzone1-rs"
RG="zone-rg"
PARAMETERDIR="/hapool/ipkg/params"
SC_NETWORK="false"
SC_LH=""
FAILOVER="true"
HAS_RS="hasp-rs"
Zonename="ipkgzone1"
Zonebrand="ipkg"
Zonebootopt=""
Milestone="multi-user-server"
LXrunlevel="3"
SLrunlevel="3"
Mounts=""

Save and exit the file.

Configure and enable the ipkgzone1-rs resource.

phys-schost-1# ./sczbt_register -f ./sczbt_config.ipkgzone1-rs
phys-schost-1# /usr/cluster/bin/clresource enable ipkgzone1-rs

Verify that zone-rg can switch to another node and that ipkgzone1 successfully starts there after the switchover.

phys-schost-2# /usr/cluster/bin/clresourcegroup switch -n phys-schost-1 zone-rg
phys-schost-1# /usr/cluster/bin/clresourcegroup status
=== Cluster Resource Groups ===
Group Name   Node Name       Suspended   Status
----------   ---------       ---------   ------
zone-rg      phys-schost-1   No          Online
             phys-schost-2   No          Offline
phys-schost-1# zoneadm list -cv
  ID NAME       STATUS      PATH                     BRAND   IP
   0 global     running     /                        native  shared
   1 ipkgzone1  running     /hapool/ipkg/ipkgzone1   ipkg    shared