Sun Cluster 2.2 Software Installation Guide

3.2.3 How to Configure the Cluster

After installing the Sun Cluster 2.2 client and server packages, complete the following post-installation tasks.

Perform the following steps to configure the cluster.

  1. Set up the software directory paths on all nodes.

    1. On all nodes, set your PATH to include /sbin, /usr/sbin, /opt/SUNWcluster/bin, and /opt/SUNWpnm/bin. Set your MANPATH to include /opt/SUNWcluster/man.

    2. On all nodes, set your PATH and MANPATH to include the volume manager specific paths.

      For SSVM and CVM, set your PATH to include /opt/SUNWvxva/bin and /etc/vx/bin. Set your MANPATH to include /opt/SUNWvxva/man and /opt/SUNWvxvm/man.

      For Solstice DiskSuite, set your PATH to include /usr/opt/SUNWmd/sbin. Set your MANPATH to include /usr/opt/SUNWmd/man.

    3. If you are using Scalable Coherent Interface (SCI) for the private interfaces, set the SCI paths.

      Set your PATH to include /opt/SUNWsci/bin, /opt/SUNWscid/bin, and /opt/SUNWsma/bin. Set your MANPATH to include /opt/SUNWsma/man.
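
    Taken together, the sub-steps above amount to a single shell fragment. The following is a minimal Bourne-shell sketch; the volume manager lines assume SSVM/CVM and the SCI lines apply only if SCI is used for the private interfaces, so adjust both to match your configuration:

```shell
# Extend PATH and MANPATH for the Sun Cluster tools (Bourne shell syntax).
PATH=$PATH:/sbin:/usr/sbin:/opt/SUNWcluster/bin:/opt/SUNWpnm/bin
MANPATH=$MANPATH:/opt/SUNWcluster/man

# Volume manager paths -- SSVM/CVM shown; for Solstice DiskSuite use
# /usr/opt/SUNWmd/sbin and /usr/opt/SUNWmd/man instead.
PATH=$PATH:/opt/SUNWvxva/bin:/etc/vx/bin
MANPATH=$MANPATH:/opt/SUNWvxva/man:/opt/SUNWvxvm/man

# SCI paths -- only if SCI is used for the private interfaces.
PATH=$PATH:/opt/SUNWsci/bin:/opt/SUNWscid/bin:/opt/SUNWsma/bin
MANPATH=$MANPATH:/opt/SUNWsma/man

export PATH MANPATH
```

    Add the equivalent lines to each user's shell startup file on every node so the settings persist across logins.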

  2. Add IP addresses to the /.rhosts file.

    You must include the following hardcoded private network IP addresses in the /.rhosts files on all nodes. For a two-node cluster, include only the addresses specified for nodes 0 and 1 below. For a three-node cluster, include the addresses specified for nodes 0, 1, and 2 below. For a four-node cluster, include all addresses noted below:

    # node 0
    204.152.65.33
    204.152.65.1
    204.152.65.17

    # node 1
    204.152.65.34
    204.152.65.2
    204.152.65.18

    # node 2
    204.152.65.35
    204.152.65.3
    204.152.65.19

    # node 3
    204.152.65.36
    204.152.65.4
    204.152.65.20

    Note -

    If you fail to include the private network IP addresses in /.rhosts, the hadsconfig(1M) script will be unable to automatically replicate data service configuration information to all nodes when you configure your data services. You will then need to replicate the configuration file manually as described in the hadsconfig(1M) man page.
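
    The private network addresses above follow a fixed pattern: node N uses 204.152.65.(33+N), 204.152.65.(1+N), and 204.152.65.(17+N). The list can therefore be generated rather than typed. A minimal POSIX-shell sketch, which writes to a hypothetical working file rather than directly to /.rhosts:

```shell
# Generate the private-network /.rhosts entries for an N-node cluster.
# Review the working file, then append or copy it to /.rhosts on each node.
NODES=4                 # set to 2, 3, or 4 to match your cluster
OUT=rhosts.private      # working file (hypothetical name)
: > "$OUT"
n=0
while [ "$n" -lt "$NODES" ]; do
    {
        echo "# node $n"
        echo "204.152.65.$((33 + n))"
        echo "204.152.65.$((1 + n))"
        echo "204.152.65.$((17 + n))"
    } >> "$OUT"
    n=$((n + 1))
done
```
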


  3. If you are using SCI for the private interfaces and if you specified any potential nodes during server software installation, modify the sm_config file.

    During server software installation with scinstall(1M), you specified active and potential nodes. Edit the sm_config file now to comment out the host names of the potential nodes, by prepending the characters "_%" to those host names. In this example sm_config file, phys-host1 and phys-host2 are the active nodes, and phys-host3 and phys-host4 are potential nodes to be added to the cluster later.

    HOST 0 = phys-host1
    HOST 1 = phys-host2
    HOST 2 = _%phys-host3
    HOST 3 = _%phys-host4
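
    The edit above can also be scripted. A minimal POSIX-shell sketch, using a hypothetical working copy named sm_config.work seeded with the example entries (in practice, edit your copy of the template from /opt/SUNWsma/bin/Examples directly):

```shell
# Working copy of the sm_config entries from the example above
# (hypothetical file name; substitute your edited copy of template.sc).
cat > sm_config.work <<'EOF'
HOST 0 = phys-host1
HOST 1 = phys-host2
HOST 2 = phys-host3
HOST 3 = phys-host4
EOF

# Prepend "_%" to each potential node so sm_config ignores it for now.
for host in phys-host3 phys-host4; do
    sed "s/= ${host}\$/= _%${host}/" sm_config.work > sm_config.tmp &&
        mv sm_config.tmp sm_config.work
done
```
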
  4. If you are using SCI for the private interfaces, configure the switches with the sm_config(1M) command.

    You must edit a copy of the sm_config template file (template.sc located in /opt/SUNWsma/bin/Examples) before running the sm_config(1M) command. See the sm_config(1M) man page and the procedure describing how to add switches and SCI cards in the Sun Cluster 2.2 System Administration Guide for details.


    Caution -

    Run the sm_config(1M) command on only one node.


    # sm_config -f templatefile
    
  5. Install Sun Cluster 2.2 patches.

    Check the patch database or contact your local service provider for any hardware or software patches required to run Sun Cluster 2.2.

    Install any required patches by following the instructions in the README file accompanying each patch.

  6. Reboot all nodes.

    This reboot creates device files for the Sun Cluster device drivers installed by scinstall(1M), and also might be required by some patches you installed in Step 5.


    Caution -

    You must reboot all nodes at this time, even if you did not install SCI or patches.


  7. (SSVM or CVM only) Install and configure SSVM or CVM.

    Install and configure your volume manager and volume manager patches, using your volume manager documentation.

    This process includes installing the volume manager and patches, creating plexes and volumes, setting up the HA administrative file system (SSVM only), and updating the vfstab.logicalhost files (SSVM only). Refer to Chapter 2, Planning the Configuration, and to Appendix C, Configuring Sun StorEdge Volume Manager and Cluster Volume Manager, for details. For CVM, refer also to the section on installing Cluster Volume Manager in the Sun Cluster 2.2 Cluster Volume Manager Guide.

    Create and populate disk groups and volumes now, but release them before continuing.

  8. Configure NAFO backup groups, if you did not do so already.

    During initial installation, you can use the scinstall(1M) command to install the PNM package (SUNWpnm), configure one controller per NAFO backup group, and initialize PNM.


    Note -

    You must configure a public network adapter with either scinstall(1M) or pnmset(1M), even if you have only one public network connection per node.


    Run the pnmset(1M) command now if you did not already use scinstall(1M) to configure controllers and initialize PNM, or if you want to assign more than one controller per NAFO backup group. The pnmset(1M) command runs as an interactive script.

    # /opt/SUNWpnm/bin/pnmset
    

    See the chapter on administering network interfaces in the Sun Cluster 2.2 System Administration Guide or the pnmset(1M) man page for details.

  9. Start the cluster.


    Note -

    If you are using Solstice DiskSuite and you set up logical hosts as part of the server software installation (Step 22 of the procedure "3.2.2 How to Install the Server Software"), you will see error messages as you start the cluster and it attempts to bring the logical hosts online. The messages will indicate that the Solstice DiskSuite disksets have not been set up. You can safely ignore these messages as you will set up the disksets in Step 10.


    1. Run the following command on one node.

      # scadmin startcluster phys-hahost1 sc-cluster
      

      Note -

      If your volume manager is Cluster Volume Manager, you must set up shared disk groups at this point, before the other nodes are added to the cluster.


    2. Add all other nodes to the cluster by running the following command from each node being added.

      # scadmin startnode
      
    3. Verify that the cluster is running.

      From any cluster node, check activity with hastat(1M):

      # hastat
      
  10. (Solstice DiskSuite only) Install and configure Solstice DiskSuite.

    This process includes installing the volume manager and patches, creating disksets, setting up the HA administrative file system, and updating the vfstab.logicalhost files. Refer to Chapter 2, Planning the Configuration, and to Appendix B, Configuring Solstice DiskSuite, for details.

    Create and populate disk groups and volumes now, but release them before continuing.

    If you have a two-node configuration with only two disk strings, you also must set up mediators. Do so after configuring Solstice DiskSuite. See the chapter on using dual-string mediators in the Sun Cluster 2.2 System Administration Guide for instructions.

  11. Add logical hosts, if you did not do so already.

    Use the "Change" option to scinstall(1M) to add and configure logical hosts, if you did not set up all logical hosts during initial installation, or if you want to change the logical host configuration.

    To set up multiple disk groups on a single logical host, you must use the scconf(1M) command, after you have brought up the cluster. See the scconf(1M) man page for details.

    See the section on adding and removing logical hosts in the Sun Cluster 2.2 System Administration Guide, for more information.


    Note -

    When you use scinstall(1M) to add logical hosts initially, you run the command from all hosts before the cluster has been brought up. When you use scinstall(1M) to reconfigure existing logical hosts, you run the command from only one node while the cluster is up.


  12. Add logical host names to the /etc/hosts files on all nodes.

    For example:

    #
    # Internet host table
    #
    127.0.0.1      localhost
    123.168.65.23  phys-hahost1   loghost
    123.146.84.36  123.146.84.36
    123.168.65.21  hahost1
    123.168.65.22  hahost2
  13. Bring the logical hosts online.

    Use haswitch(1M) to force a cluster reconfiguration that will cause all logical hosts to be mastered by their default masters.

    # haswitch -r
    
  14. (Optional) If your cluster has only two nodes and your volume manager is SSVM, configure the shared CCD volume.

    Use the procedures described in Appendix C, Configuring Sun StorEdge Volume Manager and Cluster Volume Manager, to configure a shared CCD volume.

  15. Configure and activate the HA data services.

    See the relevant data service chapter in this book, and the specific data service documentation for details.

  16. Set up and start Sun Cluster Manager.

    Sun Cluster Manager is used to monitor the cluster. For instructions, see the Sun Cluster 2.2 Release Notes and the section on monitoring the Sun Cluster servers with Sun Cluster Manager in the Sun Cluster 2.2 System Administration Guide.

    This completes the cluster configuration.