This section provides information and procedures to establish a new cluster.
Perform this procedure from one node of the cluster to configure Open HA Cluster software on both nodes of the cluster.
This procedure uses the interactive form of the scinstall command. To use the noninteractive forms of the scinstall command, such as when developing installation scripts, see the scinstall(1M) man page.
Perform the following tasks:
Ensure that Open HA Cluster software packages are installed on each node. See How to Install Open HA Cluster 2009.06 Software.
Determine which mode of the scinstall utility you will use, Typical or Custom.
Use Custom mode to have the scinstall utility create a new virtual network interface (VNIC) for the cluster private interconnect.
You can use either Typical or Custom mode if you have preconfigured VNICs.
For the Typical installation of Open HA Cluster software, scinstall automatically specifies the following configuration defaults.
Component | Default Value
---|---
Private-network address | 172.16.0.0
Private-network netmask | 255.255.240.0
Cluster-transport adapters | Exactly two adapters
Cluster-transport switches | switch1 and switch2
Global fencing | Enabled
Global-devices file-system name | Looks for a /globaldevices partition, then prompts you to configure a lofi device
Installation security (DES) | Limited
Complete one of the following cluster configuration worksheets, depending on whether you run the scinstall utility in Typical mode or Custom mode.
Typical Mode Worksheet – If you will use Typical mode and accept all defaults, complete the following worksheet.
Custom Mode Worksheet – If you will use Custom mode and customize the configuration data, complete the following worksheet.
For the global-devices file system, use only a lofi device. Do not attempt to configure a dedicated /globaldevices partition. Respond “No” to all prompts that ask whether to use or create a file system. After you decline to configure a file system, the scinstall utility prompts you to create a lofi device.
Follow these guidelines to use the interactive scinstall utility in this procedure:
Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.
Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.
Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.
On each node to configure in a cluster, become superuser.
Alternatively, if your user account is assigned the Primary Administrator profile, execute commands as non-root through a profile shell, or prefix the command with the pfexec command.
On each node, disable Network Auto-Magic (NWAM).
NWAM activates a single network interface and disables all others. For this reason, NWAM cannot coexist with Open HA Cluster 2009.06 software and you must disable it before you configure or run your cluster.
On each cluster node, determine whether NWAM is enabled or disabled.
phys-schost# svcs -a | grep /network/physical
If NWAM is enabled, output is similar to the following:
online         Mar_13   svc:/network/physical:nwam
disabled       Mar_13   svc:/network/physical:default
If NWAM is disabled, output is similar to the following:
disabled       Mar_13   svc:/network/physical:nwam
online         Mar_13   svc:/network/physical:default
If NWAM is enabled on a node, disable it.
phys-schost# svcadm disable svc:/network/physical:nwam
phys-schost# svcadm enable svc:/network/physical:default
On each node, configure each public-network adapter.
Determine which adapters are on the system.
phys-schost# dladm show-link
Plumb an adapter.
phys-schost# ifconfig adapter plumb up
Assign an IP address and netmask to the adapter.
phys-schost# ifconfig adapter IPaddress netmask + netmask
Verify that the adapter is up.
Ensure that the command output contains the UP flag.
phys-schost# ifconfig -a
Create a configuration file for the adapter.
This file ensures that the configuration of the adapter persists across reboots.
phys-schost# vi /etc/hostname.adapter
IPaddress
Repeat Step b through Step e for each public-network adapter on both nodes.
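Steps b through e can be scripted for several adapters at once. This is a sketch only: the adapter names and IP addresses (e1000g0, nge0, 192.168.x) are hypothetical, and it writes to a scratch directory instead of /etc.

```shell
# Sketch: generate the persistent /etc/hostname.<adapter> files for a set
# of public-network adapters.  Adapter names and IP addresses here are
# hypothetical; a scratch directory stands in for /etc.
ETC=$(mktemp -d)

while read adapter ipaddr; do
  printf '%s\n' "$ipaddr" > "$ETC/hostname.$adapter"
done <<'EOF'
e1000g0 192.168.10.11
nge0 192.168.11.11
EOF

ls "$ETC"
cat "$ETC/hostname.e1000g0"
```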
On both nodes, add an entry to the /etc/inet/hosts file for each public-network adapter that you configured on each node.
phys-schost# vi /etc/inet/hosts
IPaddress hostname
If you use a naming service, add the hostname and IP address of each public-network adapter that you configured.
Reboot each node.
phys-schost# /usr/sbin/shutdown -y -g0 -i6
Verify that all adapters are configured and up.
phys-schost# ifconfig -a
On each node, enable the minimal RPC services that are necessary to enable the interactive scinstall utility.
When OpenSolaris software is installed, a restricted network profile is automatically configured. This profile is too restrictive for the cluster private network to function. To enable private-network functionality, run the following commands:
phys-schost# svccfg
svc:> select network/rpc/bind
svc:/network/rpc/bind> setprop config/local_only=false
svc:/network/rpc/bind> quit
phys-schost# svcadm refresh network/rpc/bind:default
phys-schost# svcprop network/rpc/bind:default | grep local_only
The output of the last command should show that the local_only property is now set to false.
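A quick scripted check of that property might look like the following sketch. The `check_local_only` helper is hypothetical; the sample line mirrors the form of svcprop output.

```shell
# Sketch: verify that rpcbind's local_only property is false.  The
# check_local_only helper is hypothetical; it parses a line in the form
# produced by: svcprop network/rpc/bind:default | grep local_only
check_local_only() {
  [ "$(printf '%s\n' "$1" | awk '{print $3}')" = "false" ]
}

if check_local_only "config/local_only boolean false"; then
  echo "rpcbind accepts calls from other cluster nodes"
fi
```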
For more information about re-enabling network services, see Planning Network Security in Solaris 10 5/08 Installation Guide: Planning for Installation and Upgrade.
From one cluster node, start the scinstall utility.
phys-schost# /usr/cluster/bin/scinstall
Type the option number for Create a New Cluster or Add a Cluster Node and press the Return key.
*** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Create a new cluster or add a cluster node
      * 2) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  1
The New Cluster and Cluster Node Menu is displayed.
Type the option number for Create a New Cluster and press the Return key.
The Typical or Custom Mode menu is displayed.
Type the option number for either Typical or Custom and press the Return key.
The Create a New Cluster screen is displayed. Read the requirements, then press Control-D to continue.
Follow the menu prompts to supply your answers from the configuration planning worksheet.
The scinstall utility installs and configures all cluster nodes and reboots the cluster. The cluster is established when all nodes have successfully booted into the cluster. Open HA Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.
Verify on each node that multiuser services for the Service Management Facility (SMF) are online.
If services are not yet online for a node, wait until the state becomes online before you proceed to the next step.
phys-schost# svcs multi-user-server
STATE          STIME    FMRI
online         17:52:55 svc:/milestone/multi-user-server:default
From one node, verify that all nodes have joined the cluster.
phys-schost# /usr/cluster/bin/clnode status
Output resembles the following.
=== Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
phys-schost-1                                   Online
phys-schost-2                                   Online
For more information, see the clnode(1CL) man page.
(Optional) Enable the automatic node reboot feature.
This feature automatically reboots a node if all monitored disk paths fail, provided that at least one of the disks is accessible from a different node in the cluster.
Enable automatic reboot.
phys-schost# /usr/cluster/bin/clnode set -p reboot_on_path_failure=enabled
-p
    Specifies the property to set.

reboot_on_path_failure=enabled
    Enables automatic node reboot if failure of all monitored disk paths occurs.
Verify that automatic reboot on disk-path failure is enabled.
phys-schost# /usr/cluster/bin/clnode show

=== Cluster Nodes ===

Node Name:                                      node
…
  reboot_on_path_failure:                       enabled
…
If you intend to use the HA for NFS data service on a highly available local file system, ensure that the loopback file system (LOFS) is disabled.
To disable LOFS, add the following entry to the /etc/system file on each node of the cluster.
exclude:lofs
The change to the /etc/system file becomes effective after the next system reboot.
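A simple check confirms that the entry is in place before you reboot. This sketch uses a scratch file in place of the real /etc/system.

```shell
# Sketch: confirm that the LOFS exclusion entry exists.  A scratch file
# stands in for /etc/system.
SYSFILE=$(mktemp)
echo "exclude:lofs" >> "$SYSFILE"

if grep -q '^exclude:lofs' "$SYSFILE"; then
  echo "LOFS will be disabled at the next reboot"
fi
```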
You cannot have LOFS enabled if you use the HA for NFS data service on a highly available local file system and have automountd running. LOFS can cause switchover problems for the HA for NFS data service. If you choose to add the HA for NFS data service on a highly available local file system, you must make one of the following configuration changes.
Disable LOFS.
Disable the automountd daemon.
Exclude from the automounter map all files that are part of the highly available local file system that is exported by the HA for NFS data service. This choice enables you to keep both LOFS and the automountd daemon enabled.
See The Loopback File System in System Administration Guide: Devices and File Systems for more information about loopback file systems.
The following example shows the scinstall progress messages that are logged as scinstall completes configuration tasks on the two-node cluster, schost. The cluster is installed from phys-schost-1 by using the scinstall utility in Typical Mode. The other cluster node is phys-schost-2. The adapter name is e1000g0. No /globaldevices partition exists, so the global-devices namespace is created on a lofi device. Automatic quorum-device selection is not used.
*** Create a New Cluster ***
Tue Apr 14 10:36:19 PDT 2009

    Attempting to contact "phys-schost-1" ...
    Searching for a remote configuration method ...

    scrcmd -N phys-schost-1 test isfullyinstalled
    The Sun Cluster framework software is installed.
    scrcmd to "phys-schost-1" - return status 1.

    rsh phys-schost-1 -n "/bin/sh -c '/bin/true; /bin/echo SC_COMMAND_STATUS=\$?'"
    phys-schost-1: Connection refused
    rsh to "phys-schost-1" failed.

    ssh root@phys-schost-1 -o "BatchMode yes" -o "StrictHostKeyChecking yes" -n "/bin/sh -c '/bin/true; /bin/echo SC_COMMAND_STATUS=\$?'"
    No RSA host key is known for phys-schost-1 and you have requested strict checking.
    Host key verification failed.
    ssh to "phys-schost-1" failed.

    The Sun Cluster framework is able to complete the configuration process without remote shell access.

    Checking the status of service network/physical:nwam ...
    /usr/cluster/lib/scadmin/lib/cmd_test isnwamenabled
    scrcmd -N phys-schost-1 test isnwamenabled

    Plumbing network address 172.16.0.0 on adapter e1000g0 >> NOT DUPLICATE ... done
    Plumbing network address 172.16.0.0 on adapter e1000g0 >> NOT DUPLICATE ... done

    Testing for "/globaldevices" on "phys-schost-2" ...
    /globaldevices is not a directory or file system mount point.
    Cannot use "/globaldevices" on "phys-schost-2".

    Testing for "/globaldevices" on "phys-schost-1" ...
    scrcmd -N phys-schost-1 chk_globaldev fs /globaldevices
    /globaldevices is not a directory or file system mount point.
    Cannot use "/globaldevices" on "phys-schost-1".
    scrcmd -N phys-schost-1 chk_globaldev lofi /.globaldevices 100m

    ----------------------------------
    - Cluster Creation -
    ----------------------------------

    Started cluster check on "phys-schost-2".
    Started cluster check on "phys-schost-1".

    cluster check completed with no errors or warnings for "phys-schost-2".
    cluster check completed with no errors or warnings for "phys-schost-1".

Cluster check report is displayed
…
    scrcmd -N phys-schost-1 test isinstalling
    "" is not running.
    scrcmd -N phys-schost-1 test isconfigured
    Sun Cluster is not configured.

    Configuring "phys-schost-1" ...
    scrcmd -N phys-schost-1 install -logfile /var/cluster/logs/install/scinstall.log.2895 -k -C schost -F -G lofi -T node=phys-schost-2,node=phys-schost-1,authtype=sys -w netaddr=172.16.0.0,netmask=255.255.240.0,maxnodes=64,maxprivatenets=10,numvirtualclusters=12 -A trtype=dlpi,name=e1000g0 -B type=direct

    ips_package_processing: ips_postinstall...
    ips_package_processing: ips_postinstall done

    Initializing cluster name to "schost" ... done
    Initializing authentication options ... done
    Initializing configuration for adapter "e1000g0" ... done
    Initializing private network address options ... done

    Plumbing network address 172.16.0.0 on adapter e1000g0 >> NOT DUPLICATE ... done

    Setting the node ID for "phys-schost-1" ... done (id=1)

    Verifying that NTP is configured ... done
    Initializing NTP configuration ... done

    Updating nsswitch.conf ... done

    Adding cluster node entries to /etc/inet/hosts ... done

    Configuring IP multipathing groups ...done

    Verifying that power management is NOT configured ... done
    Unconfiguring power management ... done
    /etc/power.conf has been renamed to /etc/power.conf.041409104821
    Power management is incompatible with the HA goals of the cluster.
    Please do not attempt to re-configure power management.

    Ensure network routing is disabled ... done
    Network routing has been disabled on this node by creating /etc/notrouter.
    Having a cluster node act as a router is not supported by Sun Cluster.
    Please do not re-enable network routing.

    Please reboot this machine.

    Log file - /var/cluster/logs/install/scinstall.log.2895

    scrcmd -N phys-schost-1 test hasbooted
    This node has not yet been booted as a cluster node.

    Rebooting "phys-schost-1" ...
Unsuccessful configuration – If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to rerun this procedure. If that does not correct the problem, perform the procedure How to Uninstall Open HA Cluster Software on each misconfigured node to remove it from the cluster configuration. Then rerun this procedure.
If you did not yet configure a quorum device in your cluster, go to How to Configure Quorum Devices.
Otherwise, go to How to Verify the Quorum Configuration and Installation Mode.
If you chose automatic quorum configuration when you established the cluster, do not perform this procedure. Instead, proceed to How to Verify the Quorum Configuration and Installation Mode.
Perform this procedure one time only, after the new cluster is fully formed. Use this procedure to assign quorum votes and then to remove the cluster from installation mode.
If you intend to configure a quorum server as a quorum device, do the following:
Install the Quorum Server software on the quorum server host machine and start the quorum server. For information about installing and starting the quorum server, see How to Install and Configure Quorum Server Software.
Ensure that network switches that are directly connected to cluster nodes meet one of the following criteria:
The switch supports Rapid Spanning Tree Protocol (RSTP).
Fast port mode is enabled on the switch.
One of these features is required to ensure immediate communication between cluster nodes and the quorum server. If this communication is significantly delayed by the switch, the cluster interprets the delay as a loss of the quorum device.
Have available the following information:
A name to assign to the configured quorum device
The IP address of the quorum server host machine
The port number of the quorum server
If you intend to use a quorum server and the public network uses variable-length subnetting, also called Classless Inter-Domain Routing (CIDR), modify the netmask file entries for the public network on each node of the cluster.
If you use classful subnets, as defined in RFC 791, you do not need to perform this step.
Add to the /etc/inet/netmasks file an entry for each public subnet that the cluster uses.
The following is an example entry that contains a public-network IP address and netmask:
10.11.30.0     255.255.255.0
Append netmask + broadcast + to the hostname entry in each /etc/hostname.adapter file.
nodename netmask + broadcast +
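The edit can be sketched as follows; the node name is hypothetical and a scratch file stands in for the real /etc/hostname.adapter file.

```shell
# Sketch: append "netmask + broadcast +" to the existing hostname entry so
# the adapter reads its CIDR netmask from /etc/inet/netmasks at boot.
# A scratch file stands in for /etc/hostname.e1000g0.
F=$(mktemp)
echo "phys-schost-1" > "$F"            # existing hostname entry

sed 's/$/ netmask + broadcast +/' "$F" > "$F.new" && mv "$F.new" "$F"
cat "$F"
```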
On one node, become superuser.
Alternatively, if your user account is assigned the Primary Administrator profile, execute commands as non-root through a profile shell, or prefix the command with the pfexec command.
To use a shared disk as a quorum device, verify device connectivity to the cluster nodes and choose the device to configure.
From one node of the cluster, display a list of all the devices that the system checks.
You do not need to be logged in as superuser to run this command.
phys-schost-1# /usr/cluster/bin/cldevice list -v
Output resembles the following:
DID Device          Full Device Path
----------          ----------------
d1                  phys-schost-1:/dev/rdsk/c0t0d0
d2                  phys-schost-1:/dev/rdsk/c0t6d0
d3                  phys-schost-2:/dev/rdsk/c1t1d0
d3                  phys-schost-1:/dev/rdsk/c1t1d0
…
Ensure that the output shows all connections between cluster nodes and storage devices.
Determine the global device-ID name of each shared disk that you are configuring as a quorum device.
Any shared disk that you choose must be qualified for use as a quorum device.
Use the cldevice output from Step a to identify the device-ID name of each shared disk that you are configuring as a quorum device. For example, the output in Step a shows that global device d3 is shared by phys-schost-1 and phys-schost-2.
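A shared disk is one that more than one node reports a path to. The following sketch filters `cldevice list -v` style output for device IDs seen from multiple nodes; the here-document stands in for live output, and `shared_dids` is a hypothetical helper.

```shell
# Sketch: list DID devices that more than one node has a path to.  These
# are the candidates for a shared-disk quorum device.  The here-document
# mimics `cldevice list -v` output; run against live output on a cluster.
shared_dids() {
  awk 'NR > 2 {
         node = $2; sub(/:.*/, "", node)     # node name before the colon
         if (!seen[$1, node]++) count[$1]++  # count distinct nodes per DID
       }
       END { for (d in count) if (count[d] > 1) print d }'
}

shared_dids <<'EOF'
DID Device  Full Device Path
----------  ----------------
d1          phys-schost-1:/dev/rdsk/c0t0d0
d2          phys-schost-1:/dev/rdsk/c0t6d0
d3          phys-schost-2:/dev/rdsk/c1t1d0
d3          phys-schost-1:/dev/rdsk/c1t1d0
EOF
```

With the sample listing above, only d3 is reported, matching the shared disk in the earlier example.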
To use a shared disk that does not support the SCSI protocol, ensure that fencing is disabled for that shared disk.
Display the fencing setting for the individual disk.
phys-schost# /usr/cluster/bin/cldevice show device

=== DID Device Instances ===

DID Device Name:                                /dev/did/rdsk/dN
…
  default_fencing:                              nofencing
…
If fencing for the disk is set to nofencing or nofencing-noscrub, fencing is disabled for that disk. Go to Step 5.
If fencing for the disk is set to pathcount or scsi, disable fencing for the disk. Skip to Step c.
If fencing for the disk is set to global, determine whether fencing is also disabled globally. Proceed to Step b.
Alternatively, you can simply disable fencing for the individual disk, which overrides for that disk whatever value the global_fencing property is set to. Skip to Step c to disable fencing for the individual disk.
Determine whether fencing is disabled globally.
phys-schost# /usr/cluster/bin/cluster show -t global

=== Cluster ===

Cluster name:                                   cluster
…
  global_fencing:                               nofencing
…
If global fencing is set to nofencing or nofencing-noscrub, fencing is disabled for the shared disk whose default_fencing property is set to global. Go to Step 5.
If global fencing is set to pathcount or prefer3, disable fencing for the shared disk. Proceed to Step c.
If an individual disk has its default_fencing property set to global, the fencing for that individual disk is disabled only while the cluster-wide global_fencing property is set to nofencing or nofencing-noscrub. If the global_fencing property is changed to a value that enables fencing, then fencing becomes enabled for all disks whose default_fencing property is set to global.
Disable fencing for the shared disk.
phys-schost# /usr/cluster/bin/cldevice set \
-p default_fencing=nofencing-noscrub device
Verify that fencing for the shared disk is now disabled.
phys-schost# /usr/cluster/bin/cldevice show device
From one node, start the clsetup utility.

phys-schost# /usr/cluster/bin/clsetup
The Initial Cluster Setup screen is displayed.
If the Main Menu is displayed instead, initial cluster setup was already successfully performed. Skip to Step 10.
At the prompt Do you want to add any quorum disks?, type Yes.
Specify what type of device you want to configure as a quorum device.
NAS devices are not a supported option for quorum devices in an Open HA Cluster 2009.06 configuration. References to NAS devices in the following table are for informational purposes only.
Quorum Device Type | Description
---|---
shared_disk | Sun NAS device or shared disk
quorum_server | Quorum server
netapp_nas | Network Appliance NAS device
Specify the name of the device to configure as a quorum device.
For a quorum server, also specify the following information:
The IP address of the quorum server host
The port number that is used by the quorum server to communicate with the cluster nodes
At the prompt Is it okay to reset "installmode"?, type Yes.
After the clsetup utility sets the quorum configurations and vote counts for the cluster, the message Cluster initialization is complete is displayed. The utility returns you to the Main Menu.
Quit the clsetup utility.
Verify the quorum configuration and that installation mode is disabled. Go to How to Verify the Quorum Configuration and Installation Mode.
Interrupted clsetup processing - If the quorum setup process is interrupted or fails to complete successfully, rerun clsetup.
Perform this procedure to verify that quorum configuration was completed successfully and that cluster installation mode is disabled.
You do not need to be superuser to run these commands.
From any node, verify the device and node quorum configurations.
phys-schost% /usr/cluster/bin/clquorum list
Output lists each quorum device, if used, membership type, and each node.
From any node, verify that cluster installation mode is disabled.
phys-schost% /usr/cluster/bin/cluster show -t global | grep installmode
  installmode:                                  disabled
Cluster installation and creation is complete.
If you want to configure a failover ZFS file system that uses COMSTAR iSCSI storage, go to one of the following procedures:
How to Configure iSCSI Storage Using COMSTAR and Single Paths
How to Configure iSCSI Storage Using COMSTAR and Multiple Paths
Otherwise, if you want to use IP Security Architecture (IPsec) to provide secure TCP/IP communication on the cluster interconnect, go to How to Configure IP Security Architecture (IPsec) on the Cluster Private Interconnect.
Otherwise, configure the data services that you want to run on your cluster. Go to Configuring Data Services.
Perform this procedure to configure OpenSolaris Common Multiprotocol SCSI TARget (COMSTAR) on locally attached storage, to share access among multiple cluster nodes. This procedure uses single paths between iSCSI initiators and iSCSI targets and also configures a mirrored ZFS storage pool to provide high availability.
If you use multiple paths between iSCSI initiators and iSCSI targets, instead go to How to Configure iSCSI Storage Using COMSTAR and Multiple Paths.
Ensure that the storage configuration meets Open HA Cluster 2009.06 requirements. See iSCSI Storage.
On each node, perform the required procedures from Configuring an iSCSI Storage Array With COMSTAR (Task Map) that are listed in the following table, observing the Special Instructions.
Task | Documentation | Special Instructions
---|---|---
1. Perform basic setup. | | To create the SCSI logical unit, perform the procedure How to Create a Disk Partition SCSI Logical Unit. If you specify a whole disk instead of a slice to the sbdadm create-lu command, run the cldevice clear command afterwards to clear the DID namespace.
2. Configure iSCSI target ports. | | Create a target for each private-network adapter on each node.
3. Configure the iSCSI target. | | Use either static discovery or SendTargets. Do not use dynamic discovery.
4. Make a logical unit available. | |
5. Configure an initiator system to access target storage. | |
Disable fencing for each newly created device.
phys-schost# /usr/cluster/bin/cldevice set -p default_fencing=nofencing-noscrub device
Alternatively, disable fencing globally for all devices in the cluster. Do this if there are no shared devices in the cluster that are being used as a quorum device.
phys-schost# /usr/cluster/bin/cluster set -p global_fencing=nofencing-noscrub
List the DID mappings for the devices in the cluster.
Output is similar to the following, which shows a path from each node to each device:
phys-schost# /usr/cluster/bin/cldevice list -v
DID Device          Full Device Path
----------          ----------------
…
d3                  phys-schost-1:/dev/rdsk/c14t1d0s4
d3                  phys-schost-2:/dev/rdsk/c14t1d0s4
d4                  phys-schost-1:/dev/rdsk/c15t8d0s4
d4                  phys-schost-2:/dev/rdsk/c15t8d0s4
…
From one node, create a mirrored ZFS storage pool from the DID devices that you created on each node.
For the device path name, combine /dev/did/dsk/, the DID device name, and slice s2.
phys-schost# zpool create pool mirror /dev/did/dsk/dNs2 /dev/did/dsk/dYs2
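The device arguments follow a fixed pattern, so a small helper can build them from the DID names. This is a sketch: `did_path` is a hypothetical helper, and d3 and d4 are the example DIDs from the listing above.

```shell
# Sketch: build zpool device arguments from DID device names.  Slice s2
# conventionally covers the whole disk; did_path is a hypothetical helper.
did_path() { printf '/dev/did/dsk/%ss2\n' "$1"; }

echo "zpool create pool mirror $(did_path d3) $(did_path d4)"
```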
Configure the mirrored ZFS storage pool as an HAStoragePlus resource.
phys-schost# /usr/cluster/bin/clresourcegroup create resourcegroup
phys-schost# /usr/cluster/bin/clresourcetype register HAStoragePlus
phys-schost# /usr/cluster/bin/clresource create -g resourcegroup -t HAStoragePlus \
-p Zpools=pool resource
phys-schost# /usr/cluster/bin/clresourcegroup manage resourcegroup
phys-schost# /usr/cluster/bin/clresourcegroup online resourcegroup
This example shows the steps involved in configuring COMSTAR-based iSCSI storage and a mirrored ZFS storage pool, zpool-1. The locally attached disk for the node phys-schost-1 is /dev/rdsk/c1t0d0s4 and for phys-schost-2 is /dev/rdsk/c1t8d0s4. The IP address of the clprivnet0 interface is 172.16.4.1.
Static discovery of the iSCSI target is configured. Procedures performed on phys-schost-1 to configure an iSCSI initiator and target are also performed on phys-schost-2. After the devfsadm command attaches the disks as iSCSI targets, /dev/rdsk/c1t0d0s4 becomes /dev/rdsk/c14t0d0s4 on the initiator side and /dev/rdsk/c1t8d0s4 becomes /dev/rdsk/c15t8d0s4.
The cluster does not use any shared disks, so fencing is turned off globally for all disks in the cluster. The resource group rg-1 is configured with the HAStoragePlus resource hasp-rs for the mirrored ZFS storage pool zpool-1.
Enable and verify the STMF service
phys-schost-1# svcadm enable stmf
phys-schost-1# svcs stmf
online         15:59:53 svc:/system/stmf:default
Repeat on phys-schost-2

Create and verify disk-partition SCSI logical units on each node
phys-schost-1# sbdadm create-lu /dev/rdsk/c1t0d0s4
Created the following LU:

              GUID                    DATA SIZE            SOURCE
--------------------------------  -------------------  ------------------
600144f05b4c460000004a1d9dd00001  73407800320          /dev/rdsk/c1t0d0s4

phys-schost-2# sbdadm create-lu /dev/rdsk/c1t8d0s4
Created the following LU:

              GUID                    DATA SIZE            SOURCE
--------------------------------  -------------------  ------------------
600144f07d15cd0000004a202e340001  73407800320          /dev/rdsk/c1t8d0s4

Enable the iSCSI target SMF service
phys-schost-1# svcadm enable -r svc:/network/iscsi/target:default
phys-schost-1# svcs -a | grep iscsi
online         14:21:25 svc:/network/iscsi/target:default
Repeat on phys-schost-2

Configure each iSCSI target for static discovery
phys-schost-1# itadm create-target
Target: iqn.1986-03.com.sun:02:97c1caa8-5732-ec53-b7a2-a722a946fead successfully created
phys-schost-1# itadm list-target
TARGET NAME                                                  STATE    SESSIONS
iqn.1986-03.com.sun:02:97c1caa8-5732-ec53-b7a2-a722a946fead  online   0
Repeat on phys-schost-2 for the other iSCSI target

Make the logical units available
phys-schost-1# sbdadm list-lu
phys-schost-1# stmfadm add-view 600144f05b4c460000004a1d9dd00001
Repeat on phys-schost-2 for the other logical unit's GUID

Configure iSCSI initiators to access target storage
phys-schost-1# iscsiadm modify discovery --static enable
phys-schost-1# iscsiadm list discovery
Discovery:
        Static: enabled
        Send Targets: disabled
        iSNS: disabled
phys-schost-1# ifconfig clprivnet0
clprivnet0: …
        inet 172.16.4.1 netmask fffffe00 broadcast 172.16.5.255
        …
phys-schost-1# iscsiadm add static-config \
iqn.1986-03.com.sun:02:97c1caa8-5732-ec53-b7a2-a722a946fead,172.16.4.1
phys-schost-1# iscsiadm list static-config
Static Configuration Target: iqn.1986-03.com.sun:02:97c1caa8-5732-ec53-b7a2-a722a946fead,172.16.4.1:3260
phys-schost-1# devfsadm -i iscsi
phys-schost-1# format -e
phys-schost-1# iscsiadm list target
Target: iqn.1986-03.com.sun:02:97c1caa8-5732-ec53-b7a2-a722a946fead
        Alias: -
        TPGT: 1
        ISID: 4000002a0000
        Connections: 1
Repeat on phys-schost-2 for this target
Repeat on both nodes for the other target

Update and populate the global-devices namespace on each node
phys-schost-1# scdidadm -r
phys-schost-1# cldevice populate
Repeat on phys-schost-2

Disable fencing for all disks in the cluster
phys-schost-1# /usr/cluster/bin/cluster set -p global_fencing=nofencing-noscrub

Create a mirrored ZFS storage pool
phys-schost-1# /usr/cluster/bin/cldevice list -v
DID Device          Full Device Path
----------          ----------------
…
d3                  phys-schost-1:/dev/rdsk/c14t0d0s4
d3                  phys-schost-2:/dev/rdsk/c14t0d0s4
d4                  phys-schost-1:/dev/rdsk/c15t8d0s4
d4                  phys-schost-2:/dev/rdsk/c15t8d0s4
…
phys-schost-1# zpool create zpool-1 mirror /dev/did/dsk/d3s2 /dev/did/dsk/d4s2

Configure the mirrored ZFS storage pool as an HAStoragePlus resource
phys-schost# /usr/cluster/bin/clresourcegroup create rg-1
phys-schost# /usr/cluster/bin/clresourcetype register HAStoragePlus
phys-schost# /usr/cluster/bin/clresource create -g rg-1 -t HAStoragePlus \
-p Zpools=zpool-1 hasp-rs
phys-schost# /usr/cluster/bin/clresourcegroup manage rg-1
phys-schost# /usr/cluster/bin/clresourcegroup online rg-1
If you want to use IP Security Architecture (IPsec) to provide secure TCP/IP communication on the cluster interconnect, go to How to Configure IP Security Architecture (IPsec) on the Cluster Private Interconnect.
Otherwise, configure the data services that you want to run on your cluster. Go to Configuring Data Services.
Perform this procedure to configure OpenSolaris Common Multiprotocol SCSI TARget (COMSTAR) on locally attached storage, to share access among multiple cluster nodes. This procedure uses multiple paths between iSCSI initiators and iSCSI targets and also configures a mirrored ZFS storage pool to provide high availability. This procedure optionally includes configuring the I/O multipathing feature (MPxIO).
If you use single paths between iSCSI initiators and iSCSI targets, go instead to How to Configure iSCSI Storage Using COMSTAR and Single Paths.
Ensure that the storage configuration meets Open HA Cluster 2009.06 requirements. See iSCSI Storage.
(Optional) If you intend to use I/O multipathing (MPxIO), on each node ensure that the I/O multipathing feature is enabled for iSCSI.
The feature is enabled when the mpxio-disable property is set to no.
phys-schost# cat /kernel/drv/iscsi.conf
…
mpxio-disable="no";
For more information about I/O multipathing, see Solaris Fibre Channel Storage Configuration and Multipathing Support Guide.
Determine the IP address of each adapter that is used for the private interconnect.
You will specify these addresses later when you create iSCSI target ports. Output is similar to the following:
phys-schost# /usr/cluster/bin/clinterconnect status

=== Cluster Transport Paths ===

Endpoint1               Endpoint2               Status
---------               ---------               ------
phys-schost-1:adapter1  phys-schost-2:adapter1  Path online
phys-schost-1:adapter2  phys-schost-2:adapter2  Path online

phys-schost# ifconfig adapter1
nge1: flags=201008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4,CoS> mtu 1500 index 3
        inet 172.16.1.1 netmask ffffff80 broadcast 172.16.1.127
        ether 0:14:4f:8d:9b:3

phys-schost# ifconfig adapter2
e1000g1: flags=201008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4,CoS> mtu 1500 index 4
        inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255
        ether 0:15:17:35:9b:a1
On each node, perform the procedures that are listed in Configuring an iSCSI Storage Array With COMSTAR (Task Map).
Observe the following additional instructions when you configure a COMSTAR iSCSI target in an Open HA Cluster 2009.06 configuration:
| Task | Documentation | Special Instructions |
|---|---|---|
| 1. Perform basic setup. | | To create the SCSI logical unit, perform the procedure How to Create a Disk Partition SCSI Logical Unit. If you specify a whole disk instead of a slice to the sbdadm create-lu command, run the cldevice clear command afterwards to clear the DID namespace. |
| 2. Configure iSCSI target ports. | | Create a target for each private-network adapter on each node. |
| 3. Configure the iSCSI target. | | Use either static discovery or SendTargets. Do not use dynamic discovery. |
| 4. Make a logical unit available. | | |
| 5. Configure an initiator system to access target storage. | | |
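As an illustration only, the COMSTAR tasks above might look like the following on one node. The device path, portal-group name (tpg1), addresses, and truncated GUID and IQN values are placeholders for this sketch, not values from your configuration; take the actual values from the output of the commands on your systems.

```
# Task 1: create a SCSI logical unit from a disk slice (device path is an example).
# If you specify a whole disk instead of a slice, run cldevice clear afterwards.
phys-schost# sbdadm create-lu /dev/rdsk/c1t3d0s6

# Make the logical unit available (the GUID comes from the sbdadm output)
phys-schost# stmfadm add-view 600144f0…

# Task 2: create a target portal group for each private-network adapter address,
# then create a target that uses it (the address is an example)
phys-schost# itadm create-tpg tpg1 172.16.1.1
phys-schost# itadm create-target -t tpg1

# Task 3: on the initiator node, use static discovery, not dynamic discovery
phys-schost-2# iscsiadm add static-config iqn.1986-03.com.sun:02:…,172.16.1.1:3260
phys-schost-2# iscsiadm modify discovery --static enable
```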
Disable fencing for each of the newly created devices.
```
phys-schost# /usr/cluster/bin/cldevice set -p default_fencing=nofencing-noscrub device
```
From one node, create a mirrored ZFS storage pool from the DID devices that you created on each node.
```
phys-schost# zpool create pool mirror /dev/did/dsk/dNsX /dev/did/dsk/dYsX
```
From one node, configure the mirrored ZFS storage pool as an HAStoragePlus resource.
```
phys-schost# /usr/cluster/bin/clresourcegroup create resourcegroup
phys-schost# /usr/cluster/bin/clresourcetype register HAStoragePlus
phys-schost# /usr/cluster/bin/clresource create -g resourcegroup -t HAStoragePlus \
-p Zpools=pool resource
phys-schost# /usr/cluster/bin/clresourcegroup manage resourcegroup
phys-schost# /usr/cluster/bin/clresourcegroup online resourcegroup
```
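After the resource group is brought online, you might verify the result with commands such as the following. The resourcegroup, resource, and pool names are the same placeholders used above.

```
# Confirm that the resource group and the HAStoragePlus resource are online
phys-schost# /usr/cluster/bin/clresourcegroup status resourcegroup
phys-schost# /usr/cluster/bin/clresource status resource

# Confirm that both halves of the ZFS mirror are healthy
phys-schost# zpool status pool
```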
If you want to use IP Security Architecture (IPsec) to provide secure TCP/IP communication on the cluster interconnect, go to How to Configure IP Security Architecture (IPsec) on the Cluster Private Interconnect.
Otherwise, configure the data services that you want to run on your cluster. Go to Configuring Data Services.
You can configure IP Security Architecture (IPsec) for the private-interconnect interface to provide secure TCP/IP communication on the cluster interconnect.
For information about IPsec, see Part IV, IP Security, in System Administration Guide: IP Services and the ipsecconf(1M) man page. For information about the clprivnet interface, see the clprivnet(7) man page.
Perform this procedure on each cluster node that you want to configure to use IPsec.
Become superuser.
Alternatively, if your user account is assigned the Primary Administrator profile, execute commands as non-root through a profile shell, or prefix the command with the pfexec command.
On each node, determine the IP address of the clprivnet interface.
```
phys-schost# ifconfig clprivnet0
```
If you use virtual NICs (VNICs) to route private interconnect communication over the public network, also determine the IP address of the physical interfaces that the VNICs use.
Display the status of all transport paths in the cluster and the physical interfaces that are used.
Output is similar to the following:
```
phys-schost# /usr/cluster/bin/clinterconnect status

-- Cluster Transport Paths --

                    Endpoint                  Endpoint                  Status
                    --------                  --------                  ------
  Transport path:   phys-schost-1:adapter1    phys-schost-2:adapter1    Path online
  Transport path:   phys-schost-1:adapter2    phys-schost-2:adapter2    Path online
```
Identify the IP address of each interface that is used on each node.
```
phys-schost-1# ifconfig adapter
phys-schost-2# ifconfig adapter
```
On each node, configure the /etc/inet/ipsecinit.conf policy file and add Security Associations (SAs) between each pair of private-interconnect IP addresses for which you want to use IPsec.
Follow the instructions in How to Secure Traffic Between Two Systems With IPsec in System Administration Guide: IP Services. In addition, observe the following guidelines:
Ensure that the values of the configuration parameters for these addresses are consistent on all the partner nodes.
Configure each policy as a separate line in the configuration file.
To implement IPsec without rebooting, follow the instructions in the procedure's example, Securing Traffic With IPsec Without Rebooting.
For more information about the sa unique policy, see the ipsecconf(1M) man page.
In each file, add one entry for each clprivnet IP address in the cluster to use IPsec.
Include the clprivnet private-interconnect IP address of the local node.
If you use VNICs, also add one entry for the IP address of each physical interface that is used by the VNICs.
(Optional) To enable striping of data over all links, include the sa unique policy in the entry.
This feature helps the driver to optimally utilize the bandwidth of the cluster private network, which provides a high granularity of distribution and better throughput. The private-interconnect interface uses the Security Parameter Index (SPI) of the packet to stripe the traffic.
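As a minimal sketch of the guidelines above, an ipsecinit.conf entry for a pair of clprivnet addresses might look like the following. The addresses and algorithm choices are examples only; use the clprivnet addresses you identified earlier and algorithms supported on your systems, and keep the values consistent on all partner nodes.

```
# /etc/inet/ipsecinit.conf on phys-schost-1 (addresses are examples)
# One policy per line; the optional "sa unique" policy enables striping over all links
{laddr 172.16.4.1 raddr 172.16.4.2} ipsec {encr_algs aes encr_auth_algs sha1 sa unique}
```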
On each node, edit the /etc/inet/ike/config file to set the p2_idletime_secs parameter.
Add this entry to the policy rules that are configured for cluster transports. This setting provides the time for security associations to be regenerated when a cluster node reboots, and limits how quickly a rebooted node can rejoin the cluster. A value of 30 seconds should be adequate.
```
phys-schost# vi /etc/inet/ike/config
…
{
    label "clust-priv-interconnect1-clust-priv-interconnect2"
    …
    p2_idletime_secs 30
}
…
```
Configure the data services that you want to run on your cluster. Go to Configuring Data Services.