This chapter provides procedures for establishing a cluster.
The following procedures are in this chapter:
How to Configure Open HA Cluster Software on All Nodes (scinstall)
How to Verify the Quorum Configuration and Installation Mode
How to Configure iSCSI Storage Using COMSTAR and Single Paths
How to Configure iSCSI Storage Using COMSTAR and Multiple Paths
How to Configure IP Security Architecture (IPsec) on the Cluster Private Interconnect
How to Configure the HA-Containers Zone Boot Component for ipkg Brand Zones
This section provides information and procedures to establish a new cluster.
Perform this procedure from one node of the cluster to configure Open HA Cluster software on both nodes of the cluster.
This procedure uses the interactive form of the scinstall command. To use the noninteractive forms of the scinstall command, such as when developing installation scripts, see the scinstall(1M) man page.
Perform the following tasks:
Ensure that Open HA Cluster software packages are installed on each node. See How to Install Open HA Cluster 2009.06 Software.
Determine which mode of the scinstall utility you will use, Typical or Custom.
Use Custom mode to have the scinstall utility create a new virtual network interface (VNIC) for the cluster private interconnect.
You can use either Typical or Custom mode if you have preconfigured VNICs; a brief example of creating a VNIC follows these planning notes.
For the Typical installation of Open HA Cluster software, scinstall automatically specifies the following configuration defaults.
| Component | Default Value |
|---|---|
| Private-network address | 172.16.0.0 |
| Private-network netmask | 255.255.240.0 |
| Cluster-transport adapters | Exactly two adapters |
| Cluster-transport switches | switch1 and switch2 |
| Global fencing | Enabled |
| Global-devices file-system name | Looks for a /globaldevices partition, then prompts you to configure a lofi device |
| Installation security (DES) | Limited |
Complete one of the following cluster configuration worksheets, depending on whether you run the scinstall utility in Typical mode or Custom mode.
Typical Mode Worksheet – If you will use Typical mode and accept all defaults, complete the following worksheet.
Custom Mode Worksheet – If you will use Custom mode and customize the configuration data, complete the following worksheet.
For the global-devices file system, use only a lofi device. Do not attempt to configure a dedicated /globaldevices partition. Respond “No” to all prompts that ask whether to use or create a file system. After you decline to configure a file system, the scinstall utility prompts you to create a lofi device.
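If you plan to preconfigure VNICs for the private interconnect rather than have Custom mode create them, the following is a minimal sketch of creating one VNIC on a node. The physical link name e1000g2 and the VNIC name vnic1 are placeholder values for illustration only; substitute the link and VNIC names that apply to your configuration.

phys-schost# dladm create-vnic -l e1000g2 vnic1
phys-schost# dladm show-vnic

Repeat for each VNIC that you intend to supply to the scinstall utility for the private interconnect.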
Follow these guidelines to use the interactive scinstall utility in this procedure:
Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.
Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.
Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.
On each node to configure in a cluster, become superuser.
Alternatively, if your user account is assigned the Primary Administrator profile, execute commands as non-root through a profile shell, or prefix the command with the pfexec command.
On each node, disable Network Auto-Magic (NWAM).
NWAM activates a single network interface and disables all others. For this reason, NWAM cannot coexist with Open HA Cluster 2009.06 software and you must disable it before you configure or run your cluster.
On each cluster node, determine whether NWAM is enabled or disabled.
phys-schost# svcs -a | grep /network/physical |
If NWAM is enabled, output is similar to the following:
online         Mar_13   svc:/network/physical:nwam
disabled       Mar_13   svc:/network/physical:default
If NWAM is disabled, output is similar to the following:
disabled       Mar_13   svc:/network/physical:nwam
online         Mar_13   svc:/network/physical:default
If NWAM is enabled on a node, disable it.
phys-schost# svcadm disable svc:/network/physical:nwam
phys-schost# svcadm enable svc:/network/physical:default
On each node, configure each public-network adapter.
Determine which adapters are on the system.
phys-schost# dladm show-link |
Plumb an adapter.
phys-schost# ifconfig adapter plumb up |
Assign an IP address and netmask to the adapter.
phys-schost# ifconfig adapter IPaddress netmask + netmask |
Verify that the adapter is up.
Ensure that the command output contains the UP flag.
phys-schost# ifconfig -a |
Create a configuration file for the adapter.
This file ensures that the configuration of the adapter persists across reboots.
phys-schost# vi /etc/hostname.adapter
IPaddress
Repeat Step b through Step e for each public-network adapter on both nodes.
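For illustration, the following is a worked pass through Step b through Step e. The adapter name e1000g0, the address 192.168.1.10, and the netmask 255.255.255.0 are hypothetical values; replace them with the values for your own public network. Writing the address to the /etc/hostname.adapter file with echo is equivalent to creating the file in an editor.

phys-schost# ifconfig e1000g0 plumb up
phys-schost# ifconfig e1000g0 192.168.1.10 netmask 255.255.255.0
phys-schost# ifconfig -a
phys-schost# echo 192.168.1.10 > /etc/hostname.e1000g0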
On both nodes, add an entry to the /etc/inet/hosts file for each public-network adapter that you configured on each node.
phys-schost# vi /etc/inet/hosts
IPaddress hostname
If you use a naming service, add the hostname and IP address of each public-network adapter that you configured.
Reboot each node.
phys-schost# /usr/sbin/shutdown -y -g0 -i6 |
Verify that all adapters are configured and up.
phys-schost# ifconfig -a |
On each node, enable the minimal RPC services that are necessary to enable the interactive scinstall utility.
When OpenSolaris software is installed, a restricted network profile is automatically configured. This profile is too restrictive for the cluster private network to function. To enable private-network functionality, run the following commands:
phys-schost# svccfg
svc:> select network/rpc/bind
svc:/network/rpc/bind> setprop config/local_only=false
svc:/network/rpc/bind> quit
phys-schost# svcadm refresh network/rpc/bind:default
phys-schost# svcprop network/rpc/bind:default | grep local_only
The output of the last command should show that the local_only property is now set to false.
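For reference, the line of interest in that output should resemble the following; the exact spacing can differ slightly between builds.

config/local_only boolean false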
For more information about re-enabling network services, see Planning Network Security in Solaris 10 5/08 Installation Guide: Planning for Installation and Upgrade.
From one cluster node, start the scinstall utility.
phys-schost# /usr/cluster/bin/scinstall |
Type the option number for Create a New Cluster or Add a Cluster Node and press the Return key.
*** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Create a new cluster or add a cluster node
      * 2) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  1
The New Cluster and Cluster Node Menu is displayed.
Type the option number for Create a New Cluster and press the Return key.
The Typical or Custom Mode menu is displayed.
Type the option number for either Typical or Custom and press the Return key.
The Create a New Cluster screen is displayed. Read the requirements, then press Control-D to continue.
Follow the menu prompts to supply your answers from the configuration planning worksheet.
The scinstall utility installs and configures all cluster nodes and reboots the cluster. The cluster is established when all nodes have successfully booted into the cluster. Open HA Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.
Verify on each node that multiuser services for the Service Management Facility (SMF) are online.
If services are not yet online for a node, wait until the state becomes online before you proceed to the next step.
phys-schost# svcs multi-user-server
STATE          STIME    FMRI
online         17:52:55 svc:/milestone/multi-user-server:default
From one node, verify that all nodes have joined the cluster.
phys-schost# /usr/cluster/bin/clnode status |
Output resembles the following.
=== Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
phys-schost-1                                   Online
phys-schost-2                                   Online
For more information, see the clnode(1CL) man page.
(Optional) Enable the automatic node reboot feature.
This feature automatically reboots a node if all monitored disk paths fail, provided that at least one of the disks is accessible from a different node in the cluster.
Enable automatic reboot.
phys-schost# /usr/cluster/bin/clnode set -p reboot_on_path_failure=enabled |
-p
    Specifies the property to set.
reboot_on_path_failure=enabled
    Enables automatic node reboot if failure of all monitored disk paths occurs.
Verify that automatic reboot on disk-path failure is enabled.
phys-schost# /usr/cluster/bin/clnode show

=== Cluster Nodes ===

Node Name:                                      node
…
  reboot_on_path_failure:                       enabled
…
If you intend to use the HA for NFS data service on a highly available local file system, ensure that the loopback file system (LOFS) is disabled.
To disable LOFS, add the following entry to the /etc/system file on each node of the cluster.
exclude:lofs |
The change to the /etc/system file becomes effective after the next system reboot.
You cannot have LOFS enabled if you use the HA for NFS data service on a highly available local file system and have automountd running. LOFS can cause switchover problems for the HA for NFS data service. If you choose to add the HA for NFS data service on a highly available local file system, you must make one of the following configuration changes.
Disable LOFS.
Disable the automountd daemon (a minimal sketch follows this list).
Exclude from the automounter map all files that are part of the highly available local file system that is exported by the HA for NFS data service. This choice enables you to keep both LOFS and the automountd daemon enabled.
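For the second option, the following is a minimal sketch of disabling the automountd daemon by disabling its SMF service. It assumes the standard autofs service name and that no other application on the node depends on the automounter.

phys-schost# svcadm disable svc:/system/filesystem/autofs:default
phys-schost# svcs svc:/system/filesystem/autofs:default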
See The Loopback File System in System Administration Guide: Devices and File Systems for more information about loopback file systems.
The following example shows the scinstall progress messages that are logged as scinstall completes configuration tasks on the two-node cluster, schost. The cluster is installed from phys-schost-1 by using the scinstall utility in Typical Mode. The other cluster node is phys-schost-2. The adapter name is e1000g0. No /globaldevices partition exists, so the global-devices namespace is created on a lofi device. Automatic quorum-device selection is not used.
*** Create a New Cluster *** Tue Apr 14 10:36:19 PDT 2009 Attempting to contact "phys-schost-1" ... Searching for a remote configuration method ... scrcmd -N phys-schost-1 test isfullyinstalled The Sun Cluster framework software is installed. scrcmd to "phys-schost-1" - return status 1. rsh phys-schost-1 -n "/bin/sh -c '/bin/true; /bin/echo SC_COMMAND_STATUS=\$?'" phys-schost-1: Connection refused rsh to "phys-schost-1" failed. ssh root@phys-schost-1 -o "BatchMode yes" -o "StrictHostKeyChecking yes" -n "/bin/sh -c '/bin/true; /bin/echo SC_COMMAND_STATUS=\$?'" No RSA host key is known for phys-schost-1 and you have requested strict checking. Host key verification failed. ssh to "phys-schost-1" failed. The Sun Cluster framework is able to complete the configuration process without remote shell access. Checking the status of service network/physical:nwam ... /usr/cluster/lib/scadmin/lib/cmd_test isnwamenabled scrcmd -N phys-schost-1 test isnwamenabled Plumbing network address 172.16.0.0 on adapter e1000g0 >> NOT DUPLICATE ... done Plumbing network address 172.16.0.0 on adapter e1000g0 >> NOT DUPLICATE ... done Testing for "/globaldevices" on "phys-schost-2" ... /globaldevices is not a directory or file system mount point. Cannot use "/globaldevices" on "phys-schost-2". Testing for "/globaldevices" on "phys-schost-1" ... scrcmd -N phys-schost-1 chk_globaldev fs /globaldevices /globaldevices is not a directory or file system mount point. /globaldevices is not a directory or file system mount point. Cannot use "/globaldevices" on "phys-schost-1". scrcmd -N phys-schost-1 chk_globaldev lofi /.globaldevices 100m ---------------------------------- - Cluster Creation - ---------------------------------- Started cluster check on "phys-schost-2". Started cluster check on "phys-schost-1". cluster check completed with no errors or warnings for "phys-schost-2". cluster check completed with no errors or warnings for "phys-schost-1". Cluster check report is displayed … scrcmd -N phys-schost-1 test isinstalling "" is not running. scrcmd -N phys-schost-1 test isconfigured Sun Cluster is not configured. Configuring "phys-schost-1" ... scrcmd -N phys-schost-1 install -logfile /var/cluster/logs/install/scinstall.log.2895 -k -C schost -F -G lofi -T node=phys-schost-2,node=phys-schost-1,authtype=sys -w netaddr=172.16.0.0,netmask=255.255.240.0,maxnodes=64,maxprivatenets=10, numvirtualclusters=12 -A trtype=dlpi,name=e1000g0 -B type=direct ips_package_processing: ips_postinstall... ips_package_processing: ips_postinstall done Initializing cluster name to "schost" ... done Initializing authentication options ... done Initializing configuration for adapter "e1000g0" ... done Initializing private network address options ... done Plumbing network address 172.16.0.0 on adapter e1000g0 >> NOT DUPLICATE ... done Setting the node ID for "phys-schost-1" ... done (id=1) Verifying that NTP is configured ... done Initializing NTP configuration ... done Updating nsswitch.conf ... done Adding cluster node entries to /etc/inet/hosts ... done Configuring IP multipathing groups ...done Verifying that power management is NOT configured ... done Unconfiguring power management ... done /etc/power.conf has been renamed to /etc/power.conf.041409104821 Power management is incompatible with the HA goals of the cluster. Please do not attempt to re-configure power management. Ensure network routing is disabled ... done Network routing has been disabled on this node by creating /etc/notrouter. 
Having a cluster node act as a router is not supported by Sun Cluster. Please do not re-enable network routing. Please reboot this machine. Log file - /var/cluster/logs/install/scinstall.log.2895 scrcmd -N phys-schost-1 test hasbooted This node has not yet been booted as a cluster node. Rebooting "phys-schost-1" ... |
Unsuccessful configuration – If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to rerun this procedure. If that does not correct the problem, perform the procedure How to Uninstall Open HA Cluster Software on each misconfigured node to remove it from the cluster configuration. Then rerun this procedure.
If you did not yet configure a quorum device in your cluster, go to How to Configure Quorum Devices.
Otherwise, go to How to Verify the Quorum Configuration and Installation Mode.
If you chose automatic quorum configuration when you established the cluster, do not perform this procedure. Instead, proceed to How to Verify the Quorum Configuration and Installation Mode.
Perform this procedure one time only, after the new cluster is fully formed. Use this procedure to assign quorum votes and then to remove the cluster from installation mode.
If you intend to configure a quorum server as a quorum device, do the following:
Install the Quorum Server software on the quorum server host machine and start the quorum server. For information about installing and starting the quorum server, see How to Install and Configure Quorum Server Software.
Ensure that network switches that are directly connected to cluster nodes meet one of the following criteria:
The switch supports Rapid Spanning Tree Protocol (RSTP).
Fast port mode is enabled on the switch.
One of these features is required to ensure immediate communication between cluster nodes and the quorum server. If this communication is significantly delayed by the switch, the cluster interprets the delay as loss of the quorum device.
Have available the following information:
A name to assign to the configured quorum device
The IP address of the quorum server host machine
The port number of the quorum server
If you intend to use a quorum server and the public network uses variable-length subnetting, also called Classless Inter-Domain Routing (CIDR), modify the netmask file entries for the public network on each node of the cluster.
If you use classful subnets, as defined in RFC 791, you do not need to perform this step.
Add to the /etc/inet/netmasks file an entry for each public subnet that the cluster uses.
The following is an example entry that contains a public-network IP address and netmask:
10.11.30.0 255.255.255.0 |
Append netmask + broadcast + to the hostname entry in each /etc/hostname.adapter file.
nodename netmask + broadcast + |
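As an illustration only, a completed /etc/hostname.adapter file might then contain the following single line. The node name phys-schost-1 is a placeholder for your own host name.

phys-schost-1 netmask + broadcast +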
On one node, become superuser.
Alternatively, if your user account is assigned the Primary Administrator profile, execute commands as non-root through a profile shell, or prefix the command with the pfexec command.
To use a shared disk as a quorum device, verify device connectivity to the cluster nodes and choose the device to configure.
From one node of the cluster, display a list of all the devices that the system checks.
You do not need to be logged in as superuser to run this command.
phys-schost-1# /usr/cluster/bin/cldevice list -v |
Output resembles the following:
DID Device          Full Device Path
----------          ----------------
d1                  phys-schost-1:/dev/rdsk/c0t0d0
d2                  phys-schost-1:/dev/rdsk/c0t6d0
d3                  phys-schost-2:/dev/rdsk/c1t1d0
d3                  phys-schost-1:/dev/rdsk/c1t1d0
…
Ensure that the output shows all connections between cluster nodes and storage devices.
Determine the global device-ID name of each shared disk that you are configuring as a quorum device.
Any shared disk that you choose must be qualified for use as a quorum device.
Use the cldevice output from Step a to identify the device-ID name of each shared disk that you are configuring as a quorum device. For example, the output in Step a shows that global device d3 is shared by phys-schost-1 and phys-schost-2.
To use a shared disk that does not support the SCSI protocol, ensure that fencing is disabled for that shared disk.
Display the fencing setting for the individual disk.
phys-schost# /usr/cluster/bin/cldevice show device

=== DID Device Instances ===

DID Device Name:                                /dev/did/rdsk/dN
…
  default_fencing:                              nofencing
…
If fencing for the disk is set to nofencing or nofencing-noscrub, fencing is disabled for that disk. Go to Step 5.
If fencing for the disk is set to pathcount or scsi, disable fencing for the disk. Skip to Step c.
If fencing for the disk is set to global, determine whether fencing is also disabled globally. Proceed to Step b.
Alternatively, you can simply disable fencing for the individual disk, which overrides for that disk whatever value the global_fencing property is set to. Skip to Step c to disable fencing for the individual disk.
Determine whether fencing is disabled globally.
phys-schost# /usr/cluster/bin/cluster show -t global

=== Cluster ===

Cluster name:                                   cluster
…
  global_fencing:                               nofencing
…
If global fencing is set to nofencing or nofencing-noscrub, fencing is disabled for the shared disk whose default_fencing property is set to global. Go to Step 5.
If global fencing is set to pathcount or prefer3, disable fencing for the shared disk. Proceed to Step c.
If an individual disk has its default_fencing property set to global, the fencing for that individual disk is disabled only while the cluster-wide global_fencing property is set to nofencing or nofencing-noscrub. If the global_fencing property is changed to a value that enables fencing, then fencing becomes enabled for all disks whose default_fencing property is set to global.
Disable fencing for the shared disk.
phys-schost# /usr/cluster/bin/cldevice set \ -p default_fencing=nofencing-noscrub device |
Verify that fencing for the shared disk is now disabled.
phys-schost# /usr/cluster/bin/cldevice show device |
From one node, start the clsetup utility.
phys-schost# /usr/cluster/bin/clsetup
The Initial Cluster Setup screen is displayed.
If the Main Menu is displayed instead, initial cluster setup was already successfully performed. Skip to Step 10.
At the prompt Do you want to add any quorum disks?, type Yes.
Specify what type of device you want to configure as a quorum device.
NAS devices are not a supported option for quorum devices in an Open HA Cluster 2009.06 configuration. References to NAS devices in the following table are for information only.
| Quorum Device Type | Description |
|---|---|
| shared_disk | Sun NAS device or shared disk |
| quorum_server | Quorum server |
| netapp_nas | Network Appliance NAS device |
Specify the name of the device to configure as a quorum device.
For a quorum server, also specify the following information:
The IP address of the quorum server host
The port number that is used by the quorum server to communicate with the cluster nodes
At the prompt Is it okay to reset "installmode"?, type Yes.
After the clsetup utility sets the quorum configurations and vote counts for the cluster, the message Cluster initialization is complete is displayed. The utility returns you to the Main Menu.
Quit the clsetup utility.
Verify the quorum configuration and that installation mode is disabled. Go to How to Verify the Quorum Configuration and Installation Mode.
Interrupted clsetup processing - If the quorum setup process is interrupted or fails to complete successfully, rerun clsetup.
Perform this procedure to verify that quorum configuration was completed successfully and that cluster installation mode is disabled.
You do not need to be superuser to run these commands.
From any node, verify the device and node quorum configurations.
phys-schost% /usr/cluster/bin/clquorum list |
The output lists each quorum device, if any is used, its type, and each node.
From any node, verify that cluster installation mode is disabled.
phys-schost% /usr/cluster/bin/cluster show -t global | grep installmode installmode: disabled |
Cluster installation and creation are complete.
If you want to configure a failover ZFS file system that uses COMSTAR iSCSI storage, go to one of the following procedures:
How to Configure iSCSI Storage Using COMSTAR and Single Paths
How to Configure iSCSI Storage Using COMSTAR and Multiple Paths
Otherwise, if you want to use IP Security Architecture (IPsec) to provide secure TCP/IP communication on the cluster interconnect, go to How to Configure IP Security Architecture (IPsec) on the Cluster Private Interconnect.
Otherwise, configure the data services that you want to run on your cluster. Go to Configuring Data Services.
Perform this procedure to configure OpenSolaris Common Multiprotocol SCSI TARget (COMSTAR) on locally attached storage, to share access among multiple cluster nodes. This procedure uses single paths between iSCSI initiators and iSCSI targets and also configures a mirrored ZFS storage pool to provide high availability.
If you use multiple paths between iSCSI initiators and iSCSI targets, instead go to How to Configure iSCSI Storage Using COMSTAR and Multiple Paths.
Ensure that the storage configuration meets Open HA Cluster 2009.06 requirements. See iSCSI Storage.
On each node, perform the required procedures from Configuring an iSCSI Storage Array With COMSTAR (Task Map) that are listed in the following table, observing the Special Instructions.
| Task | Documentation | Special Instructions |
|---|---|---|
| 1. Perform basic setup. | | To create the SCSI logical unit, perform the procedure How to Create a Disk Partition SCSI Logical Unit. If you specify a whole disk instead of a slice to the sbdadm create-lu command, run the cldevice clear command afterwards to clear the DID namespace. |
| 2. Configure iSCSI target ports. | | Create a target for each private-network adapter on each node. |
| 3. Configure the iSCSI target. | | Use either static discovery or SendTargets. Do not use dynamic discovery. |
| 4. Make a logical unit available. | | |
| 5. Configure an initiator system to access target storage. | | |
Disable fencing for each newly created device.
phys-schost# /usr/cluster/bin/cldevice set -p default_fencing=nofencing-noscrub device |
Alternatively, disable fencing globally for all devices in the cluster. Do this only if no shared device in the cluster is used as a quorum device.
phys-schost# /usr/cluster/bin/cluster set -p global_fencing=nofencing-noscrub |
List the DID mappings for the devices in the cluster.
Output is similar to the following, which shows a path from each node to each device:
phys-schost# /usr/cluster/bin/cldevice list -v
DID Device          Full Device Path
----------          ----------------
…
d3                  phys-schost-1:/dev/rdsk/c14t1d0s4
d3                  phys-schost-2:/dev/rdsk/c14t1d0s4
d4                  phys-schost-1:/dev/rdsk/c15t8d0s4
d4                  phys-schost-2:/dev/rdsk/c15t8d0s4
…
From one node, create a mirrored ZFS storage pool from the DID devices that you created on each node.
For the device path name, combine /dev/did/dsk/, the DID device name, and slice s2.
phys-schost# zpool create pool mirror /dev/did/dsk/dNs2 /dev/did/dsk/dYs2 |
Configure the mirrored ZFS storage pool as an HAStoragePlus resource.
phys-schost# /usr/cluster/bin/clresourcegroup create resourcegroup
phys-schost# /usr/cluster/bin/clresourcetype register HAStoragePlus
phys-schost# /usr/cluster/bin/clresource create -g resourcegroup -t HAStoragePlus \
-p Zpools=pool resource
phys-schost# /usr/cluster/bin/clresourcegroup manage resourcegroup
phys-schost# /usr/cluster/bin/clresourcegroup online resourcegroup
This example shows the steps involved to configure COMSTAR based iSCSI storage and a mirrored ZFS storage pool, zpool-1. The locally attached disk for the node phys-schost-1 is /dev/rdsk/c1t0d0s4 and for phys-schost-2 is /dev/rdsk/c1t8d0s4. The IP address of the clprivnet0 interface is 172.16.4.1.
Static discovery of the iSCSI target is configured. Procedures performed on phys-schost-1 to configure an iSCSI initiator and target are also performed on phys-schost-2. After the devfsadm command attaches the disks as iSCSI targets, /dev/rdsk/c1t0d0s4 becomes /dev/rdsk/c14t0d0s4 on the initiator side and /dev/rdsk/c1t8d0s4 becomes /dev/rdsk/c15t8d0s4.
The cluster does not use any shared disks, so fencing is turned off globally for all disks in the cluster. The resource group rg-1 is configured with the HAStoragePlus resource hasp-rs for the mirrored ZFS storage pool zpool-1.
Enable and verify the STMF service phys-schost-1# svcadm enable stmf phys-schost-1# svcs stmf online 15:59:53 svc:/system/stmf:default Repeat on phys-schost-2 Create and verify disk-partition SCSI logical units on each node phys-schost-1# sbdadm create-lu /dev/rdsk/c1t0d0s4 Created the following LU: GUID DATA SIZE SOURCE -------------------------------- ------------------- ------------------ 600144f05b4c460000004a1d9dd00001 73407800320 /dev/rdsk/c1t0d0s4 root@phys-schost-1:# ------------------------- phys-schost-2# sbdadm create-lu /dev/rdsk/c1t8d0s4 Created the following LU: GUID DATA SIZE SOURCE -------------------------------- ------------------- ------------------ 600144f07d15cd0000004a202e340001 73407800320 /dev/rdsk/c1t8d0s4 root@phys-schost-2:# ------------------------- Enable the iSCSI target SMF service phys-schost-1# svcadm enable -r svc:/network/iscsi/target:default phys-schost-1# svcs -a | grep iscsi online 14:21:25 svc:/network/iscsi/target:default Repeat on phys-schost-2 Configure each iSCSI target for static discovery phys-schost-1# itadm create-target Target: iqn.1986-03.com.sun:02:97c1caa8-5732-ec53-b7a2-a722a946fead successfully created phys-schost-1# itadm list-target TARGET NAME STATE SESSIONS iqn.1986-03.com.sun:02:97c1caa8-5732-ec53-b7a2-a722a946fead online 0 Repeat on phys-schost-2 for the other iSCSI target Make the logical units available phys-schost-1# sbdadm list-lu phys-schost-1# stmfadm add-view 600144f05b4c460000004a1d9dd00001 Repeat on phys-schost-2 for the other logical unit's GUID Configure iSCSI initiators to access target storage phys-schost-1# iscsiadm modify discovery --static enable phys-schost-1# iscsiadm list discovery Discovery: Static: enabled Send Targets: disabled iSNS: disabled phys-schost-1# ifconfig clprivnet0 clprivnet0: … inet 172.16.4.1 netmask fffffe00 broadcast 172.16.5.255 … phys-schost-1# iscsiadm add static-config \ iqn.1986-03.com.sun:02:97c1caa8-5732-ec53-b7a2-a722a946fead,172.16.4.1 phys-schost-1# iscsiadm list static-config Static Configuration Target: iqn.1986-03.com.sun:02:97c1caa8-5732-ec53-b7a2-a722a946fead,172.16.4.1:3260 phys-schost-1# devfsadm -i iscsi phys-schost-1# format -e phys-schost-1# iscsiadm list target Target: iqn.1986-03.com.sun:02:97c1caa8-5732-ec53-b7a2-a722a946fead Alias: - TPGT: 1 ISID: 4000002a0000 Connections: 1 Repeat on phys-schost-2 for this target Repeat on both nodes for the other target Update and populate the global-devices namespace on each node phys-schost-1# scdidadm -r phys-schost-1# cldevice populate Repeat on phys-schost-2 Disable fencing for all disks in the cluster phys-schost-1# /usr/cluster/bin/cluster set -p global_fencing=nofencing-noscrub Create a mirrored ZFS storage pool phys-schost-1/usr/cluster/bin/cldevice list -v DID Device Full Device Path ---------- ---------------- … d3 phys-schost-1:/dev/rdsk/c14t0d0s4 d3 phys-schost-2:/dev/rdsk/c14t0d0s4 d4 phys-schost-1:/dev/rdsk/c15t8d0s4 d4 phys-schost-2:/dev/rdsk/c15t8d0s4 … phys-schost-1# zpool create zpool-1 mirror /dev/did/dsk/d3s2 /dev/did/dsk/d4s2 Configure the mirrored ZFS storage pool as an HAStoragePlus resource phys-schost# /usr/cluster/bin/clresourcegroup rg-1 phys-schost# /usr/cluster/bin/clresourcetype register HAStoragePlus phys-schost# /usr/cluster/bin/clresource create -g rg-1 -t HAStoragePlus \ -p Zpools=zpool-1 hasp-rs phys-schost# /usr/cluster/bin/clresourcegroup manage rg-1 phys-schost# /usr/cluster/bin/clresourcegroup online rg-1 |
If you want to use IP Security Architecture (IPsec) to provide secure TCP/IP communication on the cluster interconnect, go to How to Configure IP Security Architecture (IPsec) on the Cluster Private Interconnect.
Otherwise, configure the data services that you want to run on your cluster. Go to Configuring Data Services.
Perform this procedure to configure OpenSolaris Common Multiprotocol SCSI TARget (COMSTAR) on locally attached storage, to share access among multiple cluster nodes. This procedure uses multiple paths between iSCSI initiators and iSCSI targets and also configures a mirrored ZFS storage pool to provide high availability. This procedure optionally includes configuring the I/O multipathing feature (MPxIO).
If you use single paths between iSCSI initiators and iSCSI targets, go instead to How to Configure iSCSI Storage Using COMSTAR and Single Paths.
Ensure that the storage configuration meets Open HA Cluster 2009.06 requirements. See iSCSI Storage.
(Optional) If you intend to use I/O multipathing (MPxIO), on each node ensure that the I/O multipathing feature is enabled for iSCSI.
The feature is enabled when the mpxio-disable property is set to no.
phys-schost# cat /kernel/drv/iscsi.conf
…
mpxio-disable="no";
…
For more information about I/O multipathing, see Solaris Fibre Channel Storage Configuration and Multipathing Support Guide.
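If the property is instead set to "yes", the feature is disabled for iSCSI. A minimal sketch of enabling it is to edit the driver configuration file and reboot the node so that the change takes effect, then confirm that the property reads "no":

phys-schost# vi /kernel/drv/iscsi.conf
mpxio-disable="no";
phys-schost# /usr/sbin/shutdown -y -g0 -i6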
Determine the IP address of each adapter that is used for the private interconnect.
You will specify these addresses later when you create iSCSI target ports. Output is similar to the following:
phys-schost# /usr/cluster/bin/clinterconnect status

=== Cluster Transport Paths ===

Endpoint1                   Endpoint2                   Status
---------                   ---------                   ------
phys-schost-1:adapter1      phys-schost-2:adapter1      Path online
phys-schost-1:adapter2      phys-schost-2:adapter2      Path online

phys-schost# ifconfig adapter1
nge1: flags=201008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4,CoS> mtu 1500 index 3
        inet 172.16.1.1 netmask ffffff80 broadcast 172.16.1.127
        ether 0:14:4f:8d:9b:3

phys-schost# ifconfig adapter2
e1000g1: flags=201008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4,CoS> mtu 1500 index 4
        inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255
        ether 0:15:17:35:9b:a1
On each node, perform the procedures that are listed in Configuring an iSCSI Storage Array With COMSTAR (Task Map).
Observe the following additional instructions when you configure a COMSTAR iSCSI target in an Open HA Cluster 2009.06 configuration:
| Task | Documentation | Special Instructions |
|---|---|---|
| 1. Perform basic setup. | | To create the SCSI logical unit, perform the procedure How to Create a Disk Partition SCSI Logical Unit. If you specify a whole disk instead of a slice to the sbdadm create-lu command, run the cldevice clear command afterwards to clear the DID namespace. |
| 2. Configure iSCSI target ports. | | Create a target for each private-network adapter on each node. |
| 3. Configure the iSCSI target. | | Use either static discovery or SendTargets. Do not use dynamic discovery. |
| 4. Make a logical unit available. | | |
| 5. Configure an initiator system to access target storage. | | |
Disable fencing for each of the newly created devices.
phys-schost# /usr/cluster/bin/cldevice set -p default_fencing=nofencing-noscrub device |
From one node, create a mirrored ZFS storage pool from the DID devices that you created on each node.
phys-schost# zpool create pool mirror /dev/did/dsk/dNsX /dev/did/dsk/dYsX |
From one node, configure the mirrored ZFS storage pool as an HAStoragePlus resource.
phys-schost# /usr/cluster/bin/clresourcegroup create resourcegroup
phys-schost# /usr/cluster/bin/clresourcetype register HAStoragePlus
phys-schost# /usr/cluster/bin/clresource create -g resourcegroup -t HAStoragePlus \
-p Zpools=pool resource
phys-schost# /usr/cluster/bin/clresourcegroup manage resourcegroup
phys-schost# /usr/cluster/bin/clresourcegroup online resourcegroup
If you want to use IP Security Architecture (IPsec) to provide secure TCP/IP communication on the cluster interconnect, go to How to Configure IP Security Architecture (IPsec) on the Cluster Private Interconnect.
Otherwise, configure the data services that you want to run on your cluster. Go to Configuring Data Services.
You can configure IP Security Architecture (IPsec) for the private-interconnect interface to provide secure TCP/IP communication on the cluster interconnect.
For information about IPsec, see Part IV, IP Security, in System Administration Guide: IP Services and the ipsecconf(1M) man page. For information about the clprivnet interface, see the clprivnet(7) man page.
Perform this procedure on each cluster node that you want to configure to use IPsec.
Become superuser.
Alternatively, if your user account is assigned the Primary Administrator profile, execute commands as non-root through a profile shell, or prefix the command with the pfexec command.
On each node, determine the IP address of the clprivnet interface.
phys-schost# ifconfig clprivnet0 |
If you use virtual NICs (VNICs) to route private interconnect communication over the public network, also determine the IP address of the physical interfaces that the VNICs use.
Display the status of all transport paths in the cluster and the physical interfaces that are used.
Output is similar to the following:
phys-schost# /usr/cluster/bin/clinterconnect status

-- Cluster Transport Paths --

                    Endpoint                 Endpoint                 Status
                    --------                 --------                 ------
  Transport path:   phys-schost-1:adapter1   phys-schost-2:adapter1   Path online
  Transport path:   phys-schost-1:adapter2   phys-schost-2:adapter2   Path online
Identify the IP address of each interface that is used on each node.
phys-schost-1# ifconfig adapter phys-schost-2# ifconfig adapter |
On each node, configure the /etc/inet/ipsecinit.conf policy file and add Security Associations (SAs) between each pair of private-interconnect IP addresses that you want to use IPsec.
Follow the instructions in How to Secure Traffic Between Two Systems With IPsec in System Administration Guide: IP Services. In addition, observe the following guidelines:
Ensure that the values of the configuration parameters for these addresses are consistent on all the partner nodes.
Configure each policy as a separate line in the configuration file.
To implement IPsec without rebooting, follow the instructions in the procedure's example, Securing Traffic With IPsec Without Rebooting.
For more information about the sa unique policy, see the ipsecconf(1M) man page.
In each file, add one entry for each clprivnet IP address in the cluster to use IPsec.
Include the clprivnet private-interconnect IP address of the local node.
If you use VNICs, also add one entry for the IP address of each physical interface that is used by the VNICs.
(Optional) To enable striping of data over all links, include the sa unique policy in the entry.
This feature helps the driver to optimally utilize the bandwidth of the cluster private network, which provides a high granularity of distribution and better throughput. The private-interconnect interface uses the Security Parameter Index (SPI) of the packet to stripe the traffic.
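As an illustration only, an /etc/inet/ipsecinit.conf entry for one pair of clprivnet addresses might resemble the following. The addresses 172.16.4.1 and 172.16.4.2 are hypothetical, and the algorithm keywords shown are examples; use the addresses and algorithms that are appropriate to your configuration, and keep the entry on a single line in the file.

{laddr 172.16.4.1 raddr 172.16.4.2} ipsec {auth_algs any encr_algs any sa unique}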
On each node, edit the /etc/inet/ike/config file to set the p2_idletime_secs parameter.
Add this entry to the policy rules that are configured for cluster transports. This setting provides the time for security associations to be regenerated when a cluster node reboots, and limits how quickly a rebooted node can rejoin the cluster. A value of 30 seconds should be adequate.
phys-schost# vi /etc/inet/ike/config
…
{
label "clust-priv-interconnect1-clust-priv-interconnect2"
…
p2_idletime_secs 30
}
…
Configure the data services that you want to run on your cluster. Go to Configuring Data Services.
This section provides information to configure data services that are supported with Open HA Cluster 2009.06 software.
The following table lists the location of information to install and configure each supported data service. Use these procedures to configure data services for the Open HA Cluster 2009.06 release, except for the following changes:
Install application software as described by the application's installation instructions for OpenSolaris environments.
Install the data-service agent by following instructions in How to Prepare to Download Open HA Cluster Software and How to Install Open HA Cluster 2009.06 Software.
| Data Service | Documentation |
|---|---|
| Data Service for Apache | |
| Data Service for Apache Tomcat | Sun Cluster Data Service for Apache Tomcat Guide for Solaris OS |
| Data Service for DHCP | |
| Data Service for DNS | |
| Data Service for Glassfish | Sun Cluster Data Service for Sun Java System Application Server Guide for Solaris OS |
| Data Service for Kerberos | |
| Data Service for MySQL | |
| Data Service for NFS | |
| Data Service for Samba | |
| Data Service for Solaris Containers | How to Configure the HA-Containers Zone Boot Component for ipkg Brand Zones; Sun Cluster Data Service for Solaris Containers Guide for Solaris OS |
Perform this procedure to configure the zone boot component (sczbt) of the Solaris Containers data service to use ipkg brand non-global zones. Use this procedure instead of the instructions for sczbt that are in Sun Cluster Data Service for Solaris Containers Guide for Solaris OS. All other procedures in the Solaris Containers data-service manual are valid for an Open HA Cluster 2009.06 configuration.
Become superuser on one node of the cluster.
Alternatively, if your user account is assigned the Primary Administrator profile, execute commands as non-root through a profile shell, or prefix the command with the pfexec command.
Create a resource group.
phys-schost-1# /usr/cluster/bin/clresourcegroup create resourcegroup |
Create a mirrored ZFS storage pool to be used for the HA zone root path.
phys-schost-1# zpool create -m mountpoint pool mirror /dev/rdsk/cNtXdY \
/dev/rdsk/cNtXdZ
phys-schost# zpool export pool
Register the HAStoragePlus resource type.
phys-schost-1# /usr/cluster/bin/clresourcetype register SUNW.HAStoragePlus |
Create an HAStoragePlus resource.
Specify the ZFS storage pool and the resource group that you created.
phys-schost-1# /usr/cluster/bin/clresource create -t SUNW.HAStoragePlus \ -g resourcegroup -p Zpools=pool hasp-resource |
Bring the resource group online.
phys-schost-1# clresourcegroup online -eM resourcegroup |
Create a ZFS file-system dataset on the ZFS storage pool that you created.
You will use this file system as the zone root path for the ipkg brand zone that you create later in this procedure.
phys-schost-1# zfs create pool/filesystem |
Ensure that the universally unique ID (UUID) of each node's boot-environment (BE) root dataset is the same value.
Determine the UUID of the node where you initially created the zone.
Output is similar to the following.
phys-schost-1# beadm list -H … b101b-SC;8fe53702-16c3-eb21-ed85-d19af92c6bbd;NR;/;756… |
In this example output, the UUID is 8fe53702-16c3-eb21-ed85-d19af92c6bbd and the BE is b101b-SC.
Set the same UUID on the second node.
phys-schost-2# zfs set org.opensolaris.libbe:uuid=uuid rpool/ROOT/BE |
On both nodes, configure the ipkg brand non-global zone.
Set the zone root path to the file system that you created on the ZFS storage pool.
phys-schost# zonecfg -z zonename \
'create ; set zonepath=/pool/filesystem/zonename ; set autoboot=false'
phys-schost# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   - zonename         configured /pool/filesystem/zonename      ipkg     shared
From the node that masters the HAStoragePlus resource, install the ipkg brand non-global zone.
Determine which node masters the HAStoragePlus resource.
Output is similar to the following:
phys-schost# /usr/cluster/bin/clresource status

=== Cluster Resources ===

Resource Name          Node Name          Status        Message
-------------          ---------          ------        -------
hasp-resource          phys-schost-1      Online        Online
                       phys-schost-2      Offline       Offline
Perform the remaining tasks in this step from the node that masters the HAStoragePlus resource.
Install the zone on the node that masters the HAStoragePlus resource for the ZFS storage pool.
phys-schost-1# zoneadm -z zonename install |
Verify that the zone is installed.
phys-schost-1# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   - zonename         installed  /pool/filesystem/zonename      ipkg     shared
Boot the zone that you created and verify that the zone is running.
phys-schost-1# zoneadm -z zonename boot
phys-schost-1# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   - zonename         running    /pool/filesystem/zonename      ipkg     shared
Open a new terminal window and log in to the zone.
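For example, you can log in to the zone console; this matches the console login command that is used later on the second node, where zonename is the zone that you created.

phys-schost-1# zlogin -C zonename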
Halt the zone.
The zone's status should return to installed.
phys-schost-1# zoneadm -z zonename halt |
Switch the resource group to the other node and forcibly attach the zone.
Switch over the resource group.
Output is similar to the following, where phys-schost-1 is the node that currently masters the resource group and phys-schost-2 is the node to which you switch the resource group.
phys-schost-1# /usr/cluster/bin/clresourcegroup switch -n phys-schost-2 resourcegroup |
Perform the remaining tasks in this step from the node to which you switch the resource group.
Forcibly attach the zone to the node to which you switched the resource group.
phys-schost-2# zoneadm -z zonename attach -F |
Verify that the zone is installed on the node.
Output is similar to the following:
phys-schost-2# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   - zonename         installed  /pool/filesystem/zonename      ipkg     shared
Boot the zone.
phys-schost-2# zoneadm -z zonename boot |
Open a new terminal window and log in to the zone.
Perform this step to verify that the zone is functional.
phys-schost-2# zlogin -C zonename |
Halt the zone.
phys-schost-2# zoneadm -z zonename halt |
From one node, configure the zone-boot (sczbt) resource.
Register the SUNW.gds resource type.
phys-schost-1# /usr/cluster/bin/clresourcetype register SUNW.gds |
Create a directory on the ZFS file system that you created.
You will specify this directory to store the parameter values that you set for the zone-boot resource.
phys-schost-1# mkdir /pool/filesystem/parameterdir |
Install and configure the HA-Containers agent.
phys-schost# pkg install SUNWsczone
phys-schost# cd /opt/SUNWsczone/sczbt/util
phys-schost# cp -p sczbt_config sczbt_config.zoneboot-resource
phys-schost# vi sczbt_config.zoneboot-resource

Add or modify the following entries in the file.
RS="zoneboot-resource"
RG="resourcegroup"
PARAMETERDIR="/pool/filesystem/parameterdir"
SC_NETWORK="false"
SC_LH=""
FAILOVER="true"
HAS_RS="hasp-resource"
Zonename="zonename"
Zonebrand="ipkg"
Zonebootopt=""
Milestone="multi-user-server"
LXrunlevel="3"
SLrunlevel="3"
Mounts=""

Save and exit the file.
Configure the zone-boot resource.
The resource is configured with the parameters that you set in the zone-boot configuration file.
phys-schost-1# ./sczbt_register -f ./sczbt_config.zoneboot-resource |
Enable the zone-boot resource.
phys-schost-1# /usr/cluster/bin/clresource enable zoneboot-resource |
Verify that the resource group can switch to another node and the ZFS storage pool successfully starts there after the switchover.
Switch the resource group to another node.
phys-schost-2# /usr/cluster/bin/clresourcegroup switch -n phys-schost-1 resourcegroup |
Verify that the resource group is now online on the new node.
Output is similar to the following:
phys-schost-1# /usr/cluster/bin/clresourcegroup status

=== Cluster Resource Groups ===

Group Name        Node Name        Suspended      Status
----------        ---------        ---------      ------
resourcegroup     phys-schost-1    No             Online
                  phys-schost-2    No             Offline
Verify that the zone is running on the new node.
phys-schost-1# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   1 zonename         running    /pool/filesystem/zonename      ipkg     shared
This example creates the HAStoragePlus resource hasp-rs, which uses a mirrored ZFS storage pool hapool in the resource group zone-rg. The storage pool is mounted on the /hapool/ipkg file system. The hasp-rs resource runs on the ipkg brand non-global zone ipkgzone1, which is configured on both phys-schost-1 and phys-schost-2. The zone-boot resource ipkgzone1-rs is based on the SUNW.gds resource type.
Create a resource group. phys-schost-1# /usr/cluster/bin/clresourcegroup create zone-rg Create a mirrored ZFS storage pool to be used for the HA zone root path. phys-schost-1# zpool create -m /ha-zones hapool mirror /dev/rdsk/c4t6d0 \ /dev/rdsk/c5t6d0 phys-schost# zpool export hapool Create an HAStoragePlus resource that uses the resource group and mirrored ZFS storage pool that you created. phys-schost-1# /usr/cluster/bin/clresourcetype register SUNW.HAStoragePlus phys-schost-1# /usr/cluster/bin/clresource create -t SUNW.HAStoragePlus \ -g zone-rg -p Zpools=hapool hasp-rs Bring the resource group online. phys-schost-1# clresourcegroup online -eM zone-rg Create a ZFS file-system dataset on the ZFS storage pool that you created. phys-schost-1# zfs create hapool/ipkg Ensure that the universally unique ID (UUID) of each node's boot-environment (BE) root dataset is the same value on both nodes. phys-schost-1# beadm list -H … zfsbe;8fe53702-16c3-eb21-ed85-d19af92c6bbd;NR;/;7565844992;static;1229439064 … phys-schost-2# zfs set org.opensolaris.libbe:uuid=8fe53702-16c3-eb21-ed85-d19af92c6bbd rpool/ROOT/zfsbe Configure the ipkg brand non-global zone. phys-schost-1# zonecfg -z ipkgzone1 'create ; \ set zonepath=/hapool/ipkg/ipkgzone1 ; set autoboot=false' phys-schost-1# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / native shared - ipkgzone1 configured /hapool/ipkg/ipkgzone1 ipkg shared Repeat on phys-schost-2. Identify the node that masters the HAStoragePlus resource, and from that node install ipkgzone1. phys-schost-1# /usr/cluster/bin/clresource status === Cluster Resources === Resource Name Node Name Status Message -------------- ---------- ------- ------- hasp-rs phys-schost-1 Online Online phys-schost-2 Offline Offline phys-schost-1# zoneadm -z ipkgzone1 install phys-schost-1# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / native shared - ipkgzone1 installed /hapool/ipkg/ipkgzone1 ipkg shared phys-schost-1# zoneadm -z ipkgzone1 boot phys-schost-1# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / native shared - ipkgzone1 running /hapool/ipkg/ipkgzone1 ipkg shared Open a new terminal window and log in to ipkgzone1. phys-schost-1# zoneadm -z ipkgzone1 halt Switch zone-rg to phys-schost-2 and forcibly attach the zone. phys-schost-1# /usr/cluster/bin/clresourcegroup switch -n phys-schost-2 zone-rg phys-schost-2# zoneadm -z ipkgzone1 attach -F phys-schost-2# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / native shared - ipkgzone1 installed /hapool/ipkg/ipkgzone1 ipkg shared phys-schost-2# zoneadm -z ipkgzone1 boot Open a new terminal window and log in to ipkgzone1. phys-schost-2# zlogin -C ipkgzone1 phys-schost-2# zoneadm -z ipkgzone1 halt From one node, configure the zone-boot (sczbt) resource. phys-schost-1# /usr/cluster/bin/clresourcetype register SUNW.gds phys-schost-1# mkdir /hapool/ipkg/params Install and configure the HA-Containers agent. phys-schost# pkg install SUNWsczone phys-schost# cd /opt/SUNWsczone/sczbt/util phys-schost# cp -p sczbt_config sczbt_config.ipkgzone1-rs phys-schost# vi sczbt_config.ipkgzone1-rs Add or modify the following entries in the sczbt_config.ipkgzone1-rs file. RS="ipkgzone1-rs" RG="zone-rg" PARAMETERDIR="/hapool/ipkg/params" SC_NETWORK="false" SC_LH="" FAILOVER="true" HAS_RS="hasp-rs" Zonename="ipkgzone1" Zonebrand="ipkg" Zonebootopt="" Milestone="multi-user-server" LXrunlevel="3" SLrunlevel="3" Mounts="" Save and exit the file. Configure the ipkgzone1-rs resource. 
phys-schost-1# ./sczbt_register -f ./sczbt_config.ipkgzone1-rs phys-schost-1# /usr/cluster/bin/clresource enable ipkgzone1-rs Verify that zone-rg can switch to another node and that ipkgzone1 successfully starts there after the switchover. phys-schost-2# /usr/cluster/bin/clresourcegroup switch -n phys-schost-1 zone-rg phys-schost-1# /usr/cluster/bin/clresourcegroup status === Cluster Resource Groups === Group Name Node Name Suspended Status ---------- --------- --------- ------ zone-rg phys-schost-1 No Online phys-schost-2 No Offline phys-schost-1# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / native shared 1 ipkgzone1 running /hapool/ipkg/ipkgzone1 ipkg shared |