StorageTek Automated Cartridge System Library Software High Availability 8.3 Cluster Installation, Configuration, and Operation Release 8.3 E51939-02
Solaris Cluster installation is covered in detail in the Oracle Solaris Cluster Software Installation Guide, available from the Oracle Technology Network site (see "Downloading Software Packages" in this document).
ACSLS HA 8.3 is supported on Solaris 11 with Oracle Solaris Cluster 4.1 and Support Repository Update (SRU-4) for OSC 4.1.
In this procedure, you install Cluster software.
Create a directory, /opt/OSC.
# mkdir /opt/OSC
Move the downloaded Cluster 4.1 iso image (osc-4_1-ga-repo-full.iso) to the /opt/OSC directory.
Move the downloaded SRU-4 OSC 4.1 patch image to the /opt/OSC directory and unzip the file.
Copy the read-only Cluster iso image to a read/write file system.
Create a pseudo device from the iso image:
# /usr/sbin/lofiadm -a /opt/OSC/osc-4_1-ga-repo-full.iso
Observe the device path that is returned and use it in step c.
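The command prints the path of the pseudo device it created. The device number shown here is only an example and may differ on your system; substitute whatever path is returned when you mount the device in step c:
/dev/lofi/1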
Create a mount point:
# mkdir /opt/OSC/mnt
Mount the pseudo device to the mount point:
# mount -F hsfs -o ro /dev/lofi/1 /opt/OSC/mnt
Copy the iso image to a temporary read/write file system.
# mkdir /opt/OSC/merged_iso
# cp -r /opt/OSC/mnt/repo /opt/OSC/merged_iso
Mount the SRU-4 ISO image to a file system.
# mount -F hsfs /opt/OSC/osc-4_1_4-1-repo-incr.iso /mnt
Merge changes in SRU-4 to the base Cluster 4.1 Release.
Sync the two iso images to the temporary file system.
# rsync -aP /mnt/repo /opt/OSC/merged_iso
Rebuild the search indexes for the repository.
# pkgrepo rebuild -s /opt/OSC/merged_iso/repo
Install Solaris Cluster from the patched iso image.
# pkg set-publisher -g file:///opt/OSC/merged_iso/repo ha-cluster
# pkg install ha-cluster-full
Repeat steps 1-7 on the adjacent node.
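As an optional check that is not part of the procedure above, you can list the configured publishers after step 7 to confirm that the ha-cluster publisher points at the merged repository before you install. The output below is illustrative only; the publishers present and the column layout vary with the Solaris 11 update installed:
# pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
solaris                     origin   online F http://pkg.oracle.com/solaris/release/
ha-cluster                  origin   online F file:///opt/OSC/merged_iso/repo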
scinstall Routine
The Solaris Cluster installation routine makes a series of checks between the two nodes to ensure that it can monitor system operation from both servers and can control startup and failover actions.
Preliminary Steps:
Before running scinstall, it is helpful to establish an environment for root which includes the path to the cluster utilities that have just been installed. Edit the file /root/.profile and change the path statement to include /usr/cluster/bin:
export PATH=/usr/cluster/bin:/usr/bin:/usr/sbin
Be sure to make this change on each node. To inherit the new path, you can log out and log back in, or simply run su -.
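As a quick optional check that the new path is in effect, confirm that the shell now resolves the cluster utilities from /usr/cluster/bin:
# which scinstall
/usr/cluster/bin/scinstall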
Confirm that the config/local_only property for rpc/bind is false.
# svccfg -s network/rpc/bind listprop config/local_only
If this property returns true, then you must set it to false.
# svccfg -s network/rpc/bind setprop config/local_only=false
Now confirm:
# svccfg -s network/rpc/bind listprop config/local_only
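If the property has been set correctly, listprop now reports false. The output normally looks similar to the following line (column spacing may differ slightly by Solaris release):
config/local_only boolean false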
An essential hardware setup requirement for Cluster software is the existence of two private network connections, reserved to ensure uninterrupted communication for cluster operation between the two nodes. Figure 2-1, "Single HBCr Library Interface Card Connected to Two Ethernet Ports on each Server Node" shows these physical connections, labeled as (2). Each connection originates from a separate network adapter (NIC) so that no single point of failure can interrupt Cluster's internal communication. The scinstall routine checks each of the two connections to verify that no other network traffic is seen on the wire. Finally, scinstall verifies that communication is functional between the two lines. Once the physical connection is verified, the routine plumbs each interface to a private internal address beginning with 172.16.
Before running scinstall, you should verify the assigned network device IDs for the two network ports on each server that you have set up for this private connection. Run dladm show-phys to view the interface assignments.
# dladm show-phys
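The output resembles the following example. The link names, drivers, and states shown here are illustrative only and will differ on your hardware; note which net devices correspond to the two ports cabled for the private interconnects:
LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
net0              Ethernet             up         1000   full      e1000g0
net1              Ethernet             up         1000   full      e1000g1
net2              Ethernet             unknown    0      unknown   e1000g2
net3              Ethernet             unknown    0      unknown   e1000g3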
A logical host name and IP address must be established to represent the cluster from either node. This logical host responds reliably to network communication whether the active node is node1 or node2.
Update the /etc/hosts file on both nodes to include the logical hostname and logical IP address, as in the example below. This host becomes active when you start ACSLS HA ("Starting ACSLS HA").
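A minimal sketch of the entry, assuming a hypothetical logical hostname acsls-ha and a placeholder address; substitute the name and IP address assigned for your site and add the same line to /etc/hosts on both nodes:
# logical host for the ACSLS HA cluster (example values only)
192.0.2.120     acsls-ha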
For a successful cluster installation, you must have the Solaris Common Agent Container enabled. Verify that the agent container is enabled.
# cacaoadm status
If the status response indicates that the agent container is DISABLED at system startup, then enable it as follows:
# cacaoadm enable
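After enabling it, run cacaoadm status again; it should now report that the default instance is enabled at system startup. The exact wording of the status message varies by Solaris release:
# cacaoadm status
default instance is ENABLED at system startup.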
scinstall
From one of the two nodes, run the command scinstall, and then follow this procedure:
From the main menu, select Create a new cluster.
From the sub menu, select Create a new cluster.
Accept initial defaults.
Select Typical install.
Assign a name for the cluster, such as acsls_cluster.
At the Cluster Nodes prompt, enter the hostname of the adjacent node. Accept the node list if it is correct.
Define the two private node interconnections you have identified for this purpose. Allow the install routine to plumb tcp links to the physical connections.
Follow the prompts to create the cluster. Unless you have identified a specific device to serve as a quorum device, allow the scinstall
routine to select the quorum device(s).
Don't be alarmed if the utility reports that the cluster check failed on both nodes. A failure is reported even for minor warnings. You should review the report for each node, and look for any serious errors or violations that may be returned. The routine displays the path to a log file which reports details surrounding any errors or warnings encountered during the operation. Review the log file and correct any severe or moderately severe problems that were identified.
Verify that both nodes are included in the cluster.
# clnode list -v
Node                Type
----                ----
node1               cluster
node2               cluster
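As a further optional check, clinterconnect status reports the state of the two private transport paths configured during scinstall. The endpoint names below are illustrative; they reflect whichever adapters you selected:
# clinterconnect status
=== Cluster Transport Paths ===
Endpoint1               Endpoint2               Status
---------               ---------               ------
node1:net2              node2:net2              Path online
node1:net3              node2:net3              Path online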
View the list of devices available to Solaris Cluster.
# cldevice list -v
DID Device    Full Device Path
d1            node1:/dev/rdsk/c0t600A0B800049EDD600000C9952CAA03Ed0
d1            node2:/dev/rdsk/c0t600A0B800049EDD600000C9952CAA03Ed0
d2            node1:/dev/rdsk/c0t600A0B800049EE1A0000832652CAA899d0
d2            node2:/dev/rdsk/c0t600A0B800049EE1A0000832652CAA899d0
d3            node1:/dev/rdsk/c1t0d0
d4            node1:/dev/rdsk/c1t1d0
d5            node2:/dev/rdsk/c1t0d0
d6            node2:/dev/rdsk/c1t1d0
In this example, the shared disk devices are d1 and d2 while d3 and d4 are the node1 boot devices and d5 and d6 are the node2 boot devices. Notice that d1 and d2 are accessible from either node.
A quorum consists of three or more devices. It is used during startup events to determine which node is to become the active node.
Confirm that a full quorum has been configured.
# clquorum list -v
Quorum          Type
------          ----
d1              shared_disk
node1           node
node2           node
You can (optionally) add the second shared_disk to the list of quorum devices.
# clquorum add d2
# clquorum list -v
Quorum          Type
------          ----
d1              shared_disk
d2              shared_disk
node1           node
node2           node
If the shared disk devices are not listed, then you must determine their device IDs and then add them to the quorum.
Identify the device id for each shared disk.
# cldevice list -v
Run clsetup to add the quorum devices.
# clsetup
Select '1' for quorum.
Select '1' to add a quorum device.
Select 'yes' to continue.
Select 'Directly attached shared disk'.
Select 'yes' to continue.
Enter the device id (d<n>) for the first shared drive.
Answer 'yes' to add another quorum device.
Enter the device id for the second shared drive.
Run clquorum show to confirm the quorum membership.
# clquorum show
Review overall cluster configuration.
# cluster check -v | egrep -v "not applicable|passed"
Look for any violated instances in the list.
Verify the list of registered resource types.
# clrt list
SUNW.LogicalHostname:4
SUNW.SharedAddress:2
SUNW.gds:6
If SUNW.gds is not listed, register it.
# clrt register SUNW.gds
Confirm with clrt list.