6 Installing Solaris Cluster 4.1

Solaris Cluster installation is covered in detail in the Oracle Solaris Cluster Software Installation Guide, available from the Oracle Technology Network site (see "Downloading Software Packages" in this document).

ACSLS HA 8.3 is supported on Solaris 11 with Oracle Solaris Cluster 4.1 and Support Repository Update 4 (SRU-4) for OSC 4.1.
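
Before installing the cluster packages, you may want to confirm the Solaris release and package level on each node. The commands below are a suggested sanity check; the exact version strings reported on your system will differ:

    # cat /etc/release
    # pkg info entire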

Cluster Package Installation

In this procedure, you install Cluster software.

  1. Create a directory, /opt/OSC.

    # mkdir /opt/OSC
    
  2. Move the downloaded Cluster 4.1 ISO image (osc-4_1-ga-repo-full.iso) to the /opt/OSC directory.

  3. Move the downloaded SRU-4 OSC 4.1 patch image to the /opt/OSC directory and unzip the file.

  4. Copy the read-only Cluster ISO image to a read/write file system.

    1. Create a pseudo device from the ISO image:

      # /usr/sbin/lofiadm -a /opt/OSC/osc-4_1-ga-repo-full.iso
      

      Observe the device path that is returned and use it in step 3.

    2. Create a mount point:

      # mkdir /opt/OSC/mnt
      
    3. Mount the pseudo device to the mount point:

      # mount -F hsfs -o ro /dev/lofi/1 /opt/OSC/mnt
      
    4. Copy the ISO image to a temporary read/write file system:

      # mkdir /opt/OSC/merged_iso
      # cp -r /opt/OSC/mnt/repo /opt/OSC/merged_iso
      
  5. Mount the SRU-4 ISO image to a file system.

    # mount -F hsfs /opt/OSC/osc-4_1_4-1-repo-incr.iso /mnt
    
  6. Merge the SRU-4 changes into the base Cluster 4.1 release.

    1. Sync the two ISO images to the temporary file system.

      # rsync -aP /mnt/repo /opt/OSC/merged_iso
      
    2. Rebuild the search indexes for the repository.

      # pkgrepo rebuild -s /opt/OSC/merged_iso/repo
      
  7. Install Solaris Cluster from the patched ISO image.

    # pkg set-publisher -g file:///opt/OSC/merged_iso/repo ha-cluster
    # pkg install ha-cluster-full
    
  8. Repeat steps 1-7 on the adjacent node.
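
After completing these steps on both nodes, you can optionally confirm that the ha-cluster publisher is configured and that the cluster packages are installed. This is a suggested verification, not a required step:

    # pkg publisher
    # pkg list ha-cluster-full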

The scinstall Routine

The Solaris Cluster installation routine makes a series of checks between the two nodes to assure that it can monitor system operation from both servers and can control startup and failover actions.

Preliminary Steps:

  1. Before running scinstall, it is helpful to establish an environment for root which includes the path to the cluster utilities that have just been installed. Edit the file /root/.profile. Change the path statement to include /usr/cluster/bin.

    export PATH=/usr/cluster/bin:/usr/bin:/usr/sbin
    

    Be sure to make this change on each node. To inherit the new path, you can log out and log back in, or simply su -.

  2. Confirm that the config/local_only property for rpc/bind is false.

    # svccfg -s network/rpc/bind listprop config/local_only
    

    If this property returns true, then you must set it to false.

    # svccfg -s network/rpc/bind setprop config/local_only=false
    

    Now confirm:

    # svccfg -s network/rpc/bind listprop config/local_only
    
  3. An essential hardware setup requirement for Cluster software is the existence of two private network connections, reserved to assure uninterrupted communication for cluster operation between the two nodes. Figure 2-1, "Single HBCr Library Interface Card Connected to Two Ethernet Ports on each Server Node" shows these physical connections, labeled as (2). Each connection originates from a separate network adapter (NIC) to assure that no single point of failure can interrupt Cluster's internal communication. The scinstall routine checks each of the two connections to verify that no other network traffic is seen on the wire. Finally, scinstall verifies that communication is functional between the two lines. Once the physical connection is verified, the routine plumbs each interface to a private internal address beginning with 172.16.

    Before running scinstall, you should verify the assigned network device IDs for the two network ports on each server that you have set up for this private connection. Run dladm show-phys to view the interface assignments (a hypothetical sample of the output is shown at the end of these preliminary steps).

    # dladm show-phys
    
  4. A logical hostname and IP address must be established to represent the cluster from either node. This logical host responds reliably to network communication whether the active host is running on node1 or node2.

    Update the /etc/hosts file on both nodes to include the logical hostname and logical IP address (a sample entry is shown at the end of these preliminary steps). This host becomes active when you start ACSLS HA ("Starting ACSLS HA").

  5. For a successful cluster installation, you must have the Solaris Common Agent Container enabled. Verify that the agent container is enabled.

    # cacaoadm status
    

    If the status response indicates that the agent container is DISABLED at system startup, then enable it as follows:

    # cacaoadm enable
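
For reference, the dladm show-phys output mentioned in step 3 resembles the following. The link names and underlying devices shown here are hypothetical and will differ on your hardware; identify the two unused ports that you have cabled for the private interconnect:

    # dladm show-phys
    LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
    net0              Ethernet             up         1000   full      e1000g0
    net1              Ethernet             up         1000   full      e1000g1
    net2              Ethernet             unknown    0      unknown   e1000g2
    net3              Ethernet             unknown    0      unknown   e1000g3

Likewise, the /etc/hosts entry for the logical host described in step 4 is a single line that maps the logical IP address to the logical hostname. The address and name below are placeholders only; substitute the values assigned for your site:

    192.0.2.70    acsls-logical    # logical host for the ACSLS HA cluster (example values)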
    

Run scinstall

From one of the two nodes, run the command scinstall, and then follow this procedure:

  1. From the main menu, select Create a new cluster.

  2. From the sub menu, select Create a new cluster.

  3. Accept initial defaults.

  4. Select Typical install.

  5. Assign a name for the cluster, such as acsls_cluster.

  6. At the Cluster Nodes prompt, enter the hostname of the adjacent node. Accept the node list if it is correct.

  7. Define the two private node interconnections you have identified for this purpose. Allow the install routine to plumb tcp links to the physical connections.

  8. Follow the prompts to create the cluster. Unless you have identified a specific device to serve as a quorum device, allow the scinstall routine to select the quorum device(s).

  9. Don't be alarmed if the utility reports that the cluster check failed on both nodes. A failure is reported even for minor warnings. Review the report for each node and look for any serious errors or violations that may be returned. The routine displays the path to a log file that details any errors or warnings encountered during the operation. Review the log file and correct any severe or moderately severe problems that were identified.
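
When scinstall completes and both nodes have rebooted into the cluster, a quick way to confirm that both nodes came up in cluster mode, before working through the verification steps in the next section, is:

    # clnode status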

Verify Cluster Configuration

  1. Verify that both nodes are included in the cluster.

    # clnode list -v
    Node                Type
    ----                ----
    node1               cluster
    node2               cluster
    
  2. View the list of devices available to Solaris Cluster.

    # cldevice list -v
    DID Device   Full Device Path
    d1           node1:/dev/rdsk/c0t600A0B800049EDD600000C9952CAA03Ed0
    d1           node2:/dev/rdsk/c0t600A0B800049EDD600000C9952CAA03Ed0
    d2           node1:/dev/rdsk/c0t600A0B800049EE1A0000832652CAA899d0
    d2           node2:/dev/rdsk/c0t600A0B800049EE1A0000832652CAA899d0
    d3           node1:/dev/rdsk/c1t0d0
    d4           node1:/dev/rdsk/c1t1d0
    d5           node2:/dev/rdsk/c1t0d0
    d6           node2:/dev/rdsk/c1t1d0
    

    In this example, the shared disk devices are d1 and d2 while d3 and d4 are the node1 boot devices and d5 and d6 are the node2 boot devices. Notice that d1 and d2 are accessible from either node.

  3. A quorum consists of three or more votes: in a two-node configuration such as this one, each node contributes one vote and at least one shared disk serves as a quorum device. The quorum is used during startup events to determine which node is to become the active node.

    Confirm that a full quorum has been configured.

    # clquorum list -v
    Quorum              Type
    ------              ----
    d1                  shared_disk
    node1               node
    node2               node
    

    You can (optionally) add the second shared_disk to the list of quorum devices.

    # clquorum add d2
    # clquorum list -v
    Quorum              Type
    ------              ----
    d1                  shared_disk
    d2                  shared_disk
    node1               node
    node2               node
    

    If the shared disk devices are not listed, then you must determine their device IDs and add them to the quorum.

    1. Identify the device id for each shared disk.

      # cldevice list -v
      
    2. Run clsetup to add the quorum devices.

      # clsetup
      
      Select '1' for quorum.
      Select '1' to add a quorum device.
      Select 'yes' to continue.
      Select 'Directly attached shared disk'.
      Select 'yes' to continue.
      Enter the device ID (d<n>) for the first shared drive.
      Answer 'yes' to add another quorum device.
      Enter the device ID for the second shared drive.
      
    3. Run clquorum show to confirm the quorum membership.

      # clquorum show
      
  4. Review overall cluster configuration.

    # cluster check -v | egrep -v "not applicable|passed"
    

    Look for any violations reported in the list.

  5. Verify the list of registered resource types.

    # clrt list
    SUNW.LogicalHostname:4
    SUNW.SharedAddress:2
    SUNW.gds:6
    

    If SUNW.gds is not listed, register it.

    # clrt register SUNW.gds
    

    Confirm with clrt list.
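
As a final, optional check, you can review the overall cluster status, including the nodes, quorum votes, and the cluster transport, in a single view:

    # cluster status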