Sun Cluster 2.2 Software Installation Guide

Chapter 3 Installing and Configuring Sun Cluster Software

This chapter contains guidelines and procedures for installing Sun Cluster 2.2 and includes the following sections:

  "Installation Overview"

  "Installation Procedures"

  "Troubleshooting the Installation"

  "Recovering From an Aborted Installation"

Installation Overview

This chapter includes the procedures used to install and configure Sun Cluster 2.2.

Before beginning the install procedures, complete the planning exercises described in Chapter 2, Planning the Configuration. These exercises include planning your network connections, logical hosts, disk configuration, and file system layouts. Complete the installation worksheets in Appendix A, Configuration Worksheets and Examples. You will be prompted for information from the worksheets during the Sun Cluster 2.2 installation process. Then use the procedures in this chapter to install and configure the cluster.

The steps to configure and install Sun Cluster are grouped into three procedures:

  1. Preparing the administrative workstation and installing the client software.

    This entails installing the Solaris operating environment and Sun Cluster 2.2 client software on the administrative workstation.

  2. Installing the server software.

    This includes: using the Cluster Console to install the Solaris operating environment and Sun Cluster 2.2 software on all cluster nodes; using scinstall(1M) to set up network interfaces, logical hosts, and quorum devices; and selecting data services and volume manager support packages.

  3. Configuring and bringing up the cluster.

    This includes: setting up paths; installing patches; installing and configuring your volume manager, SCI, PNM backup groups, logical hosts, and data services; and bringing up the cluster.

If your installation is interrupted or if you make mistakes during any part of the install process, see "Troubleshooting the Installation" for instructions on troubleshooting and restarting the installation.

Installation Procedures

This section describes how to install the Solaris operating environment and the Sun Cluster 2.2 software on the administrative workstation and the cluster nodes, and how to configure the cluster.

How to Prepare the Administrative Workstation and Install the Client Software

After you have installed and configured the hardware, terminal concentrator, and administrative workstation, use this procedure to prepare for the Sun Cluster 2.2 installation. See Chapter 2, Planning the Configuration, and complete the installation worksheets in Appendix A, Configuration Worksheets and Examples, before beginning this procedure.


Note -

Use of an administrative workstation is not required. If you do not use an administrative workstation, perform the administrative tasks from one designated node in the cluster.


These are the detailed steps to prepare the administrative workstation and install the client software:

  1. Install the Solaris 2.6, Solaris 7, or Solaris 8 operating environment on the administrative workstation.

    All platforms except the E10000 require at least the Entire Distribution Solaris installation. E10000 systems require the Entire Distribution + OEM.

    You can use the following command to verify that the distribution loaded:


    # cat /var/sadm/system/admin/CLUSTER
    

    For details, see "Planning Your Solaris Operating Environment Installation", and your Solaris advanced system administration documentation.
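
    Typically, this file contains CLUSTER=SUNWCall after an Entire Distribution installation, and CLUSTER=SUNWCXall after an Entire Distribution + OEM installation.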


    Caution -

    If you install anything less than the Entire Distribution Solaris software set on all nodes, plus the OEM packages for E10000 platforms, your cluster might not be supported by Sun.


  2. Install Solaris patches.

    Check the patch database or contact your local service provider for any hardware or software patches required to run the Solaris operating environment, Sun Cluster 2.2, or your volume management software.

    Install the patches by following the instructions in the README file accompanying each patch. Reboot the workstation if specified in the patch instructions.

  3. For convenience, add the tools directory /opt/SUNWcluster/bin to the PATH on the administrative workstation.
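
    For example, in root's .profile on the administrative workstation (Bourne shell syntax; which login file you use is your choice):


    PATH=$PATH:/opt/SUNWcluster/bin; export PATH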

  4. Load the Sun Cluster 2.2 CD-ROM on the administrative workstation.

  5. Use scinstall(1M) to install the client packages on the administrative workstation.

    1. As root, invoke scinstall(1M).


      # cd /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Tools
      # ./scinstall
       
      Installing: SUNWscins
       
      Installation of <SUNWscins> was successful.
       
      			Checking on installed package state
      .............
       
      None of the Sun Cluster software has been installed
       
      			<<Press return to continue>>

    2. Select the client package set.


      ==== Install/Upgrade Framework Selection Menu =====================
      Upgrade to the latest Sun Cluster Server packages or select package sets for installation. The list of package sets depends on the Sun Cluster packages that are currently installed.
       
      Choose one:
      1) Upgrade            	Upgrade to Sun Cluster 2.2 Server packages
      2) Server             	Install the Sun Cluster packages needed on a server
      3) Client             	Install the admin tools needed on an admin workstation
      4) Server and Client  	Install both Client and Server packages
       
      5) Close              	Exit this Menu
      6) Quit               	Quit the Program
       
      Enter the number of the package set [6]:  3
      

    3. Specify the path to the CD-ROM image.

      Normally the default location is acceptable; this example uses the mount point of the Sun Cluster 2.2 CD-ROM.


      What is the path to the CD-ROM image [/cdrom/cdrom0]: /cdrom/multi_suncluster_sc_2_2
      

    4. Install the client packages.

      Specify automatic installation.


      Installing Client packages
       
      Installing the following packages: SUNWscch SUNWccon SUNWccp SUNWcsnmp SUNWscsdb
       
                  >>>> Warning <<<<
        The installation process will run several scripts as root.  In
        addition, it may install setUID programs.  If you choose automatic
        mode, the installation of the chosen packages will proceed without
        any user interaction.  If you wish to manually control the install
        process you must choose the manual installation option.
       
      Choices:
      	manual						Interactively install each package
      	automatic						Install the selected packages with no user interaction.
       
      In addition, the following commands are supported:
         list						Show a list of the packages to be installed
         help						Show this command summary
         close						Return to previous menu
         quit						Quit the program 
       
      Install mode [manual automatic] [automatic]:  automatic
      

      The scinstall(1M) program now installs the client packages. After the packages have been installed, the main scinstall(1M) menu is displayed. From the main menu, you can choose to verify the installation, then quit to exit scinstall(1M).

  6. (Solaris 2.6 and 7 only) On the administrative workstation, use install_scpatches to install Sun Cluster patches from the Sun Cluster product CD-ROM.

    Use the install_scpatches utility to install Sun Cluster patches from the Sun Cluster CD-ROM.


    # cd /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Patches
    # install_scpatches
    

  7. On the administrative workstation, install Sun Cluster patches.

    Besides the patches installed in Step 6, obtain any additional required or recommended patches from your service provider or from the patches website, http://sunsolve.sun.com. Follow the instructions in the patch README files to install the patches.
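
    For example, a patch unpacked under /var/tmp could be added with patchadd(1M); the patch ID and directory shown here are hypothetical:


    # patchadd /var/tmp/105104-12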

  8. Change the port number used by the Sun Cluster SNMP daemon and Solaris SNMP (smond).

    The default port used by Sun Cluster SNMP is the same as the default port number used by Solaris SNMP; both use port 161. Change the Sun Cluster SNMP port number using the procedure described in the SNMP appendix to the Sun Cluster 2.2 System Administration Guide. You must stop and restart both the snmpd and smond daemons after changing the port number.

  9. Modify the /etc/clusters and /etc/serialports files.

    These files are installed automatically by scinstall(1M). Use the templates included in the files to add your cluster name, physical host names, terminal concentrator name, and serial port numbers, as listed on your installation worksheet. See the clusters(4) and serialports(4) man pages for details.


    Note -

    The serial port number used in the /etc/serialports file is the telnet(1) port number, not the physical port number. Determine the serial port number by adding 5000 to the physical port number. For example, if the physical port number is 6, the serial port number should be 5006.
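
    For illustration only, assuming a two-node cluster named sc-cluster whose nodes phys-hahost1 and phys-hahost2 are attached to terminal concentrator sc-tc on physical ports 2 and 3, the entries might look like this:


    # /etc/clusters
    sc-cluster phys-hahost1 phys-hahost2
     
    # /etc/serialports
    phys-hahost1 sc-tc 5002
    phys-hahost2 sc-tc 5003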


Proceed to section "How to Install the Server Software" to install the Sun Cluster 2.2 server software.

How to Install the Server Software

After you have installed the Sun Cluster 2.2 client software on the administrative workstation, use this procedure to install Solaris and the Sun Cluster 2.2 server software on all cluster nodes.


Note -

This procedure assumes you are using an administrative workstation. If you are not, then connect directly to the console of each node using a telnet connection to the terminal concentrator. Install and configure the Sun Cluster software identically on each node.



Note -

For E10000 platforms, you must first log into the System Service Processor (SSP) and connect using the netcon command. Once connected, enter Shift~@ to unlock the console and gain write access.



Caution -

If you already have a volume manager installed and a root disk encapsulated, unencapsulate the root disk before beginning the Sun Cluster installation.


These are the detailed steps to install the server software:

  1. Bring up the Cluster Control Panel from the administrative workstation.

    In this example, the cluster name is sc-cluster.


    # ccp sc-cluster
    

    The Cluster Control Panel appears.

  2. Start the Cluster Console in console mode.

    From the Cluster Control Panel, select the Cluster Console in console mode, and then choose File/Open. The Cluster Console (CC) displays one window for each cluster node, plus a small common window that you can use to send commands to all windows simultaneously.


    Note -

    Individually, the windows act as vt100 terminal windows. Set your TERM environment variable to vt100.
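
    For example, in a Bourne or Korn shell:


    TERM=vt100; export TERM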


  3. Use the Cluster Console common window to install Solaris on all nodes.

    For details, see your Solaris advanced system administration documentation and the Solaris installation guidelines described in Chapter 2, Planning the Configuration.

    1. Partition the local disks on each node according to Sun Cluster and volume manager guidelines.

      For partitioning guidelines, see "Planning Your Solaris Operating Environment Installation".

    2. Configure the OpenBoot PROM.

      If you want to boot from a SPARCstorage Array, you must configure the shared boot device, if you did not do so already during hardware installation. See "Booting From a SPARCstorage Array", for details about setting up the shared boot device. If your configuration includes copper-connected SCSI storage devices such as Sun StorEdge MultiPacks, Sun StorEdge A1000s, and Sun StorEdge A3x00s, you also need to configure the scsi-initiator-id. See Chapter 4 in the Sun Cluster 2.2 Hardware Site Preparation, Planning, and Installation Guide for details about configuring the scsi-initiator-id.

  4. Update the naming service.

    If a host name database such as NIS, NIS+, or DNS is used at your site, update the naming service with all logical and physical host names to be used in the Sun Cluster configuration.

  5. Use the Cluster Console common window to log into all nodes.

  6. Install Solaris patches.

    Check the patch database or contact your local service provider for any hardware or software patches required to run the Solaris operating environment, Sun Cluster 2.2, and any other software installed on your configuration.

    Install any required patches by following the instructions in the README file accompanying each patch, unless instructed otherwise by the Sun Cluster documentation or your service provider.

    Reboot all nodes if specified in the patch instructions.

  7. Modify the /etc/nsswitch.conf file.

    Ensure that "hosts," "services," and "group" lookups are directed to files first. For example:


    hosts: files nisplus
    services: files nisplus
    group: files nisplus

  8. (Optional) If your cluster serves more than one subnet, configure network adapter interfaces for additional secondary public networks.
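
    On Solaris, one common approach is to plumb the additional interface and create a matching /etc/hostname.* file so the configuration persists across reboots; the interface name qe0 and the host name phys-hahost1-net1 below are assumptions for illustration, and the host name must also appear in /etc/hosts:


    # ifconfig qe0 plumb
    # ifconfig qe0 phys-hahost1-net1 netmask + broadcast + up
    # echo "phys-hahost1-net1" > /etc/hostname.qe0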

  9. As root, invoke scinstall(1M) from the CC common window.


    # cd /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Tools
    # ./scinstall
     
    Installing: SUNWscins
     
    Installation of <SUNWscins> was successful.
     
    			Checking on installed package state............
     
    			<<Press return to continue>>

  10. Select the server package set.


    ==== Install/Upgrade Framework Selection Menu ==========================
    You can upgrade to the latest Sun Cluster packages or select package
    sets for installation, depending on the current state of installation.
     
    Choose one:
    1) Upgrade            	Upgrade to Sun Cluster 2.2
    2) Server             	All of the Sun Cluster packages needed on a server
    3) Client             	All of the admin tools needed on an admin workstation
    4) Server and Client  	All of the Client and Server packages
     
    5) Close              	Exit this Menu
    6) Quit               	Quit the Program
     
    Enter the number of the package set [6]:  2
    

    Press Return to continue.

  11. Install the server packages.

    Specify automatic installation. The scinstall(1M) program installs the server packages.


    Installing Server packages
     
    Installing the following packages: SUNWsclb SUNWsc SUNWccd SUNWcmm SUNWff SUNWmond SUNWpnm SUNWscman SUNWsccf SUNWscmgr
     
                >>>> Warning <<<<
      The installation process will run several scripts as root.  In
      addition, it may install setUID programs.  If you choose automatic
      mode, the installation of the chosen packages will proceed without
      any user interaction.  If you wish to manually control the install
      process you must choose the manual installation option.
     
    Choices:
    	manual					Interactively install each package
    	automatic					Install the selected packages with no user interaction.
     
    In addition, the following commands are supported:
    	list					Show a list of the packages to be installed
    	help					Show this command summary
    	close					Return to previous menu
    	quit					Quit the program
    
    Install mode [manual automatic] [automatic]: automatic
    

    The server package set is now installed.

  12. Select your volume manager.

    In this example, Solstice DiskSuite is specified.


    Volume Manager Selection
     
    Please choose the Volume Manager that will be used
    on this node:
     
    1) Cluster Volume Manager (CVM)
    2) Sun StorEdge Volume Manager (SSVM)
    3) Solstice DiskSuite (SDS)
     
    Choose the Volume Manager: 3
     
    Installing Solstice DiskSuite support packages.
    	Installing "SUNWdid" ... done
    	Installing "SUNWmdm" ... done
     
         ---------WARNING---------
    Solstice DiskSuite (SDS) will need to be installed before the cluster can be started.
     
            <<Press return to continue>>


    Note -

    You will still have to install the volume manager software from the Solstice DiskSuite or VxVM media after you complete the cluster installation. This step installs only supporting software (such as drivers).



    Caution -

    If you perform upgrades or package removals with scinstall(1M), scinstall(1M) will not remove the SUNWdid package. Do NOT remove the SUNWdid package manually. Removing the package can cause loss of data.


  13. Specify the cluster name.


    What is the name of the cluster? sc-cluster
    

  14. Specify the number of potential nodes and active nodes in your cluster.

    You can specify up to four nodes. The active nodes are those nodes that you will physically connect and include in the cluster now. You must specify all potential nodes at this time; you will be asked for information such as node names and Ethernet addresses. Later, you can change the status of nodes from potential to active by using the scconf(1M) command. See Chapter 3 of the Sun Cluster 2.2 System Administration Guide for details about changing the status of cluster nodes.


    Note -

    If you want to add a node later that was not already specified as a potential node, you will have to reconfigure the entire cluster.


    How many potential nodes will sc-cluster have [4]? 3
     
    How many of the initially configured nodes will be active [3]? 3
    



    Note -

    If your cluster will have two active nodes and only two disk strings and your volume manager is Solstice DiskSuite, you must configure mediators. Do so after you configure Solstice DiskSuite but before you bring up the cluster. See the dual-string mediators chapter in the Sun Cluster 2.2 System Administration Guide for details.


  15. Configure the private network interfaces, using the common window.

    Select either Ethernet or Scalable Coherent Interface (SCI).


    What type of network interface will be used for this configuration? 
    
    (ether|SCI) [SCI]?

    If you choose SCI, the following screen is displayed. Answer the questions using the information on your installation worksheet. Note that the node name field is case-sensitive; the node names specified here are checked against the /etc/nodename file by scinstall.


    What is the hostname of node 0 [node0]? phys-hahost1
     
    What is the hostname of node 1 [node1]? phys-hahost2
    ...


    Note -

    When nodes are connected through an SCI switch, the connection of the nodes to the switch port determines the order of the nodes in the cluster. The node number must correspond to the port number. For example, if a node named phys-hahost1 is connected to port 0, then phys-hahost1 must be node 0. In addition, each node must be connected to the same port on each switch. For example, if phys-hahost1 is connected to port 0 on switch 0, it also must be connected to port 0 on switch 1.


    If you choose Ethernet, the following screen is displayed. Answer the questions using information from the installation worksheet. Complete the network configuration for all nodes in the cluster.


    What is the hostname of node 0 [node0]? phys-hahost1
     
    What is phys-hahost1's first private network interface [hme0]? hme0
     
    What is phys-hahost1's second private network interface [hme1]? hme1
     
    You will now be prompted for Ethernet addresses of
    the host. There is only one Ethernet address for each host 
    regardless of the number of interfaces a host has. You can get 
    this information in one of several ways:
     
    1. use the 'banner' command at the ok prompt,
    2. use the 'ifconfig -a' command (need to be root),
    3. use ping, arp and grep commands. ('ping exxon; arp -a | grep exxon')
     
    Ethernet addresses are given as six hexadecimal bytes separated by colons.
    ie, 01:23:45:67:89:ab
     
    What is phys-hahost1's ethernet address? 01:23:45:67:89:ab
     
    What is the hostname of node 1 [node1]? 
    ...

  16. Specify whether the cluster will support any data services and if so, whether to set up logical hosts.


    Will this cluster support any HA data services (yes/no) [yes]? yes
    Okay to set up the logical hosts for those HA services now (yes/no) [yes]? yes
    

  17. Set up primary public networks and subnets.

    Enter the name of the network controller for the primary network for each node in the cluster.


    What is the primary public network controller for "phys-hahost1"?  hme2
    What is the primary public network controller for "phys-hahost2"?  hme2
    

  18. Set up secondary public subnets.

    If the cluster nodes will provide data services to more than a single public network, answer yes to this question:


    Does the cluster serve any secondary public subnets (yes/no) [no]?  yes
    

  19. Name the secondary public subnets.

    Assign a name to each subnet. Note that these names are used only for convenience during configuration. They are not stored in the configuration database and need not match the network names listed in networks(4).


    Please enter a unique name for each of these additional subnets:
     
            Subnet name (^D to finish):  sc-cluster-net1
            Subnet name (^D to finish):  sc-cluster-net2
            Subnet name (^D to finish):  ^D
     
    The list of secondary public subnets is:
     
            sc-cluster-net1
            sc-cluster-net2
     
    Is this list correct (yes/no) [yes]?

  20. Specify network controllers for the subnets.

    For each secondary subnet, specify the name of the network controller used on each cluster node.


    For subnet "sc-cluster-net1" ...
            What network controller is used for "phys-hahost1"?  qe0
            What network controller is used for "phys-hahost2"?  qe0
     
    For subnet "sc-cluster-net2" ...
            What network controller is used for "phys-hahost1"?  qe1
            What network controller is used for "phys-hahost2"?  qe1
    

  21. Initialize Network Adapter Failover (NAFO).

    You must initialize NAFO, and you must run pnmset(1M) later to configure the adapters. See the pnmset(1M) man page and the network administration chapter in the Sun Cluster 2.2 System Administration Guide for more information about NAFO and PNM.


    Initialize NAFO on "phys-hahost1" with one ctlr per group (yes/no) [yes]?  y
    

  22. Set up logical hosts.


    Enter the list of logical hosts you want to add:
     
            Logical host (^D to finish):  hahost1
            Logical host (^D to finish):  hahost2
            Logical host (^D to finish):  ^D
     
    The list of logical hosts is:
     
            hahost1
            hahost2
     
    Is this list correct (yes/no) [yes]? y
    


    Note -

    You can add logical hosts or change the logical host configuration after the cluster is up by using scconf(1M) or the "Change" option to scinstall(1M). See the scinstall(1M) and scconf(1M) man pages, and Step 12 in the procedure "How to Configure the Cluster" for more information.



    Note -

    If you will be using the Sun Cluster HA for SAP data service, do not set up logical hosts now. Set them up with scconf(1M) after the cluster is up. See the scconf(1M) man page and Chapter 10, Installing and Configuring Sun Cluster HA for SAP, for more information.



    Note -

    After configuring logical hosts, you might want to use the scconf clustername -l command to set the timeout values for the logical host. The timeout values are site-dependent; they are tied to the number of logical hosts, spindles, and file systems. For procedures for setting timeout values, refer to Chapter 3 in the Sun Cluster 2.2 System Administration Guide for the procedure to configure timeouts for cluster transition steps. Refer also to the scconf(1M) man page.
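
    For example, the general form of the command is shown below; the timeout value is only a placeholder, because appropriate values are site-dependent:


    # scconf sc-cluster -l 720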


  23. Assign default masters to logical hosts.

    You must specify the name of a physical host in the cluster as a default master for each logical host.


    What is the name of the default master for "hahost1"?  phys-hahost1
    

    Specify the host names of other physical hosts capable of mastering each logical host.


    Enter a list of other nodes capable of mastering "hahost1":
     
            Node name:  phys-hahost2
            Node name (^D to finish): ^D
     
    The list that you entered is:
     
            phys-hahost1
            phys-hahost2
     
    Is this list correct (yes/no) [yes]? yes
    

  24. Decide whether to enable automatic failback.

    Answering yes enables the logical host to fail back automatically to its default master when the default master rejoins the cluster. Answering no prevents the logical host from switching back to the original master when that master rejoins the cluster; instead, the logical host remains on the node to which it was transferred when its default master went down.


    Enable automatic failback for "hahost1" (yes/no) [no]?

  25. Assign net names and disk group names.


    What is the net name for "hahost1" on subnet "sc-cluster-net1"? hahost1-pub1
    What is the net name for "hahost1" on subnet "sc-cluster-net2"? hahost1-pub2
    Disk group name for logical host "hahost1" [hahost1]? 
    Is it okay to add logical host "hahost1" now (yes/no) [yes]? yes
     
    What is the name of the default master for "hahost2"?
    ...

    Continue until all logical hosts are set up.


    Note -

    To set up multiple disk groups on a single logical host, use the scconf(1M) command after you have used scinstall(1M) to configure and bring up the cluster. See the scconf(1M) man page for details.


  26. If your volume manager is VxVM and there are more than two nodes in the cluster, configure failure fencing.

    This screen appears only for clusters with more than two nodes that use VxVM.


    Configuring Failure Fencing
     
    What type of architecture does phys-hahost1 have (E10000|other) [other]? 
     
    What is the name of the Terminal Concentrator connected to the serial port of
    phys-hahost1 [NO_NAME]? sc-tc
    
    Is 123.456.789.1 the correct IP address for this Terminal Concentrator (yes |
    no) [yes]? 
     
    What is the password for root of the Terminal Concentrator [?] 
    Please enter the password for root again [?] 
     
    Which physical port on the Terminal Concentrator is phys-hahost1 connected to:
     
    What type of architecture does phys-hahost2 have (E10000|other) [other]? 
    Which Terminal Concentrator is phys-hahost2 connected to:
     
    0) sc-tc				123.456.789.1
    1) Create A New Terminal Concentrator Entry
     
    Select a device: 
     
    Which physical port on the Terminal Concentrator is phys-hahost2 connected to: 
     
    What type of architecture does phys-hahost3 have (E10000|other) [other]? 
    Which Terminal Concentrator is phys-hahost3 connected to:
     
    0) sc-tc				123.456.789.1
    1) Create A New Terminal Concentrator Entry
     
    Select a device:
     
    Which physical port on the Terminal Concentrator is phys-hahost3 connected to:
     
    Finished Configuring Failure Fencing


    Caution -

    The SSP password is used in failure fencing. Failure to correctly set the SSP password might cause unpredictable results in the event of a node failure. If you change the SSP password, you must change it on the cluster as well, using scconf(1M). Otherwise, failure fencing will be disabled because the SSP cannot connect to the failed node. See the scconf(1M) man page and the Sun Cluster 2.2 System Administration Guide for details about changing the SSP password.


  27. If your volume manager is VxVM, your cluster has more than two nodes, and you have a direct-attached device, select a nodelock port.

    The port you select must be on a terminal concentrator attached to a node in the cluster.


    Does the cluster have a disk storage device that is connected to all nodes in the cluster [no]? yes
     
    Which unused physical port on the Terminal Concentrator is to be used for node
    locking:

  28. If your volume manager is VxVM, select quorum devices.

    If your volume manager is VxVM, you are prompted to select quorum devices. The screen display varies according to your cluster topology. Select a device from the list presented. This example shows a two-node cluster.


    Getting device information for reachable nodes in the cluster.
    This may take a few seconds to a few minutes...done
    Select quorum device for the following nodes:
    0 (phys-hahost1) 
    and
    1 (phys-hahost2)
     
    1) SSA:000000779A16
    2) SSA:000000741430
    3) DISK:c0t1d0s2:01799413
    Quorum device: 1
    ...
    SSA with WWN 000000779A16 has been chosen as the quorum device.
     
    Finished Quorum Selection

  29. If your cluster has more than two nodes, select the Cluster Membership Monitor behavior.


    In the event that the cluster is partitioned into two or more subsets of nodes, the Cluster Membership Monitor may request input from the operator as to how it should proceed (abort or form a cluster) within each subset.  The Cluster Membership Monitor can be configured to make a policy-dependent automatic selection of a subset to become the next reconfiguration of the cluster.
     
    In case the cluster partitions into subsets, which subset should stay up?
        ask)    the system will always ask the operator.
        select) automatic selection of which subset should stay up.
     
    Please enter your choice (ask|select) [ask]: 

    If you choose "select," you are asked to choose between two policies:


    Please enter your choice (ask|select) [ask]: select
    
    You have a choice of two policies:  
     
    	lowest -- The subset containing the node with the lowest node ID value
    					automatically becomes the new cluster.  All other subsets must be 
    				manually aborted.
     
    	highest -- The subset containing the node with the highest node ID value
    				automatically becomes the new cluster.  All other subsets must be 
    				manually aborted.
     
    Select the selection policy for handling partitions (lowest|highest) [lowest]:

    The scinstall(1M) program now finishes installing the Sun Cluster 2.2 server packages.


    Installing ethernet Network Interface packages.
     
    Installing the following packages: SUNWsma
        Installing "SUNWsma" ... done
     
            Checking on installed package state.........

  30. Select your data services.

    Note that Sun Cluster HA for NFS and Informix-Online XPS are installed automatically with the Server package set.


    ==== Select Data Services Menu ==========================
     
    Please select which of the following data services are to be installed onto this cluster.  Select singly, or in a space separated list.
    Note: HA-NFS and Informix Parallel Server (XPS) are installed automatically with the Server Framework.
     
    You may de-select a data service by selecting it a second time.
     
    Select DONE when finished selecting the configuration.
     
    				1) Sun Cluster HA for Oracle
    				2) Sun Cluster HA for Informix
    				3) Sun Cluster HA for Sybase
    				4) Sun Cluster HA for Netscape
    				5) Sun Cluster HA for Netscape LDAP
    				6) Sun Cluster HA for Lotus
    				7) Sun Cluster HA for Tivoli
    				8) Sun Cluster HA for SAP
    				9) Sun Cluster HA for DNS
    				10) Sun Cluster for Oracle Parallel Server
    				11) Sun Cluster HA for NetBackup
     
    INSTALL				12) No Data Services
    				13) DONE
     
    Choose a data service: 3
    
    What is the path to the CD-ROM image [/cdrom/multi_suncluster_sc_2_2]:
    
    Install mode [manual automatic] [automatic]:  automatic
    ...
    Select DONE when finished selecting the configuration.
    ...


    Note -

    Oracle Parallel Server will not appear unless you have the VERITAS Volume Manager cluster feature installed. OPS will not run with non-cluster-aware VERITAS Volume Manager or Solstice DiskSuite.


  31. Quit scinstall(1M).


    ============ Main Menu =================
     
    1) Install/Upgrade - Install or Upgrade Server Packages or Install Client Packages.
    2) Remove  - Remove Server or Client Packages.
    3) Change  - Modify cluster or data service configuration
    4) Verify  - Verify installed package sets.
    5) List    - List installed package sets.
     
    6) Quit    - Quit this program.
    7) Help    - The help screen for this menu.
     
    Please choose one of the menu items: [6]:  6
    ...
    

    The scinstall(1M) program now verifies installation of the packages you selected.


    ==== Verify Package Installation ==========================
    Installation
    	All of the install packages have been installed
    Framework
    	All of the client packages have been installed
    	All of the server packages have been installed
    Communications
    	All of the SMA packages have been installed
    Data Services
    	None of the Sun Cluster HA for Oracle packages have been installed
    	None of the Sun Cluster HA for Informix packages have been installed
    	None of the Sun Cluster HA for Sybase packages have been installed
    	None of the Sun Cluster HA for Netscape packages have been installed
    ...#

  32. (Solaris 2.6 and 7 only) On all nodes, use install_scpatches to install Sun Cluster patches from the Sun Cluster product CD-ROM.

    Use the install_scpatches utility to install Sun Cluster patches from the Sun Cluster CD-ROM.


    # cd /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Patches
    # install_scpatches
    

  33. On all nodes, install any required or recommended Sun Cluster patches.

    Besides the patches installed in Step 32, obtain any additional required or recommended patches from your service provider or from the patches website, http://sunsolve.sun.com. Follow the instructions in the patch README files to install the patches.

Now proceed to the section "How to Configure the Cluster" to configure the cluster.

How to Configure the Cluster

After installing the Sun Cluster 2.2 client and server packages, complete the following post-installation tasks.

These are the detailed steps to configure the cluster:

  1. Set up the software directory paths on all nodes.

    1. On all nodes, set your PATH to include /sbin, /usr/sbin, /opt/SUNWcluster/bin, and /opt/SUNWpnm/bin. Set your MANPATH to include /opt/SUNWcluster/man.

    2. On all nodes, set your PATH and MANPATH to include the volume manager specific paths.

      For VERITAS Volume Manager 3.0.4, set your PATH to include /opt/VRTSvmsa/bin and /etc/vx/bin. Set your MANPATH to include /opt/VRTSvmman/man.

      For Solstice DiskSuite 4.2, set your PATH to include /usr/opt/SUNWmd/sbin and your MANPATH to include /usr/opt/SUNWmd/man.

      For Solstice DiskSuite 4.2.1, set your PATH to include /usr/sbin and your MANPATH to include /usr/man.

    3. If you are using Scalable Coherent Interface (SCI) for the private interfaces, set the SCI paths.

      Set your PATH to include /opt/SUNWsci/bin, /opt/SUNWscid/bin, and /opt/SUNWsma/bin. Set your MANPATH to include /opt/SUNWsma/man.
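
      Putting these settings together, here is a minimal sketch for root's .profile (Bourne shell), assuming Solstice DiskSuite 4.2 and SCI; adjust the entries for your volume manager and interconnect:


      PATH=/sbin:/usr/sbin:/opt/SUNWcluster/bin:/opt/SUNWpnm/bin:$PATH
      MANPATH=/opt/SUNWcluster/man:$MANPATH
      # Solstice DiskSuite 4.2 paths
      PATH=$PATH:/usr/opt/SUNWmd/sbin
      MANPATH=$MANPATH:/usr/opt/SUNWmd/man
      # SCI switch management paths
      PATH=$PATH:/opt/SUNWsci/bin:/opt/SUNWscid/bin:/opt/SUNWsma/bin
      MANPATH=$MANPATH:/opt/SUNWsma/man
      export PATH MANPATH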

  2. Add IP addresses to the /.rhosts file.

    You must include the following hard-coded private network IP addresses in the /.rhosts files on all nodes. For a two-node cluster, include only the addresses specified for nodes 0 and 1 below. For a three-node cluster, include the addresses specified for nodes 0, 1, and 2 below. For a four-node cluster, include all addresses noted below:


    # node 0
    204.152.65.33
    204.152.65.1
    204.152.65.17
     
    # node 1
    204.152.65.34
    204.152.65.2
    204.152.65.18
     
    # node 2
    204.152.65.35
    204.152.65.3
    204.152.65.19
     
    # node 3
    204.152.65.36
    204.152.65.4
    204.152.65.20
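
    For example, on each node of a two-node cluster, the node 0 and node 1 entries above could be appended as follows; include the additional addresses for three- or four-node clusters:


    # cat >> /.rhosts <<EOF
    204.152.65.33
    204.152.65.1
    204.152.65.17
    204.152.65.34
    204.152.65.2
    204.152.65.18
    EOF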


    Note -

    If you fail to include the private network IP addresses in /.rhosts, the hadsconfig(1M) script will be unable to automatically replicate data service configuration information to all nodes when you configure your data services. You will then need to replicate the configuration file manually as described in the hadsconfig(1M) man page.


  3. If you are using SCI for the private interfaces and if you specified any potential nodes during server software installation, modify the sm_config file.

    During server software installation with scinstall(1M), you specified active and potential nodes. Edit the sm_config file now to comment out the host names of the potential nodes by inserting the characters "_%" at the beginning of those host names. In this example sm_config file, phys-host1 and phys-host2 are the active nodes, and phys-host3 and phys-host4 are potential nodes to be added to the cluster later.


    HOST 0	= phys-host1
    HOST 1	= phys-host2
    HOST 2	= _%phys-host3
    HOST 3	= _%phys-host4

  4. If you are using SCI (SBus only) for the private interfaces, reboot all nodes.


    Note -

    If you are using SCI (PCI Bus) for the private interfaces, you do not need to reboot before running the sm_config(1M) command.


  5. If you are using SCI (SBus or PCI Bus), configure the switches with the sm_config(1M) command.

    Edit a copy of the sm_config template file (template.sc located in /opt/SUNWsma/bin/Examples) before running the sm_config(1M) command. See the sm_config(1M) man page and Chapter 6 in the Sun Cluster 2.2 System Administration Guide for the procedure to administer the switch management agent.


    Caution -

    Run the sm_config(1M) command on only one node.


    # sm_config -f templatefile
    


  6. Install Sun Cluster 2.2 patches.

    Check the patch database or contact your local service provider for any hardware or software patches required to run Sun Cluster 2.2.

    Install any required patches by following the instructions in the README file accompanying each patch.

  7. Reboot all nodes.

    This reboot creates device files for the Sun Cluster device drivers installed by scinstall(1M), and also might be required by some patches you installed in Step 6.


    Caution -

    You must reboot all nodes at this time, even if you did not install SCI or patches.


  8. (VxVM only) Install and configure VxVM.

    Install and configure your volume manager and volume manager patches, using your volume manager documentation.

    This process includes installing the volume manager and patches, creating plexes and volumes, setting up the HA administrative file system, and updating the vfstab.logicalhost files. Refer to Chapter 2, Planning the Configuration, and to Appendix C, Configuring VERITAS Volume Manager, for details.

    Create and populate disk groups and volumes now, but release them before continuing.

  9. Configure NAFO backup groups, if you did not do so already.

    During initial installation, you can use the scinstall(1M) command to install the PNM package, SUNWpnm, to configure one controller per NAFO backup group, and to initialize PNM.


    Note -

    You must configure a public network adapter with either scinstall(1M) or pnmset(1M), even if you have only one public network connection per node.


    Run the pnmset(1M) command now if you did not already use scinstall(1M) to configure controllers and initialize PNM, or if you want to assign more than one controller per NAFO backup group. The pnmset(1M) command runs as an interactive script.


    # /opt/SUNWpnm/bin/pnmset
    

    See the network administration chapter in the Sun Cluster 2.2 System Administration Guide or the pnmset(1M) man page for details.

  10. Start the cluster.


    Note -

    If you are using Solstice DiskSuite and you set up logical hosts as part of the server software installation (Step 22 of the procedure "How to Install the Server Software"), you will see error messages as you start the cluster and it attempts to bring the logical hosts online. The messages will indicate that the Solstice DiskSuite disksets have not been set up. You can safely ignore these messages as you will set up the disksets in Step 11.


    1. Run the following command on one node.


      # scadmin startcluster phys-hahost1 sc-cluster
      


      Note -

      If you are using VERITAS Volume Manager with the cluster feature (used with Oracle Parallel Server), you must set up shared disk groups at this point, before the other nodes are added to the cluster.


    2. Add all other nodes to the cluster by running the following command from each node being added.


      # scadmin startnode
      

    3. Verify that the cluster is running.

      From any cluster node, check activity with hastat(1M):


      # hastat
      

  11. (Solstice DiskSuite only) Install and configure Solstice DiskSuite.

    This process includes installing the volume manager and patches, creating disksets, setting up the HA administrative file system, and updating the vfstab.logicalhost files. Refer to Chapter 2, Planning the Configuration, and to Appendix B, Configuring Solstice DiskSuite, for details.

    Create and populate disk groups and volumes now, but release them before continuing.

    If you have a two-node configuration with only two disk strings, you also must set up mediators. Do so after configuring Solstice DiskSuite. See the dual-string mediators chapter in the Sun Cluster 2.2 System Administration Guide for instructions.

  12. Add logical hosts, if you did not do so already.

    Use the "Change" option to scinstall(1M) to add and configure logical hosts, if you did not set up all logical hosts during initial installation, or if you want to change the logical host configuration.

    To set up multiple disk groups on a single logical host, you must use the scconf(1M) command, after you have brought up the cluster. See the scconf(1M) man page for details.

    See Chapter 3 in the Sun Cluster 2.2 System Administration Guide for details about adding and removing logical hosts.


    Note -

    When you use scinstall(1M) to add logical hosts initially, you run the command from all hosts before the cluster has been brought up. When you use scinstall(1M) to re-configure existing logical hosts, you run the command from only one node while the cluster is up.


  13. Add logical host names to the /etc/hosts files on all nodes.

    For example:


    #
    # Internet host table
    #
    127.0.0.1       localhost
    123.168.65.23   phys-hahost1   loghost
    123.146.84.36   123.146.84.36
    123.168.65.21   hahost1
    123.168.65.22   hahost2

  14. Bring the logical hosts online.

    Use haswitch(1M) to force a cluster reconfiguration that will cause all logical hosts to be mastered by their default masters.


    # haswitch -r
    

  15. (Optional) If your cluster has only two nodes and your volume manager is VxVM, configure the shared CCD volume.

    Use the procedures described in Appendix C, Configuring VERITAS Volume Manager, to configure a shared CCD volume.

  16. Configure and activate the HA data services.

    See the relevant data service chapter in this book, and the specific data service documentation for details.

  17. Set up and start Sun Cluster Manager.

    Sun Cluster Manager is used to monitor the cluster. For instructions, see Chapter 2 of the Sun Cluster 2.2 System Administration Guide.

This completes the cluster configuration.

Troubleshooting the Installation

Table 3-1 describes some common installation problems and solutions.

Table 3-1 Common Sun Cluster Installation Problems and Solutions

Problem: When you start a cluster node, it cannot join the cluster because the private net is not configured correctly.

Solution: Specify the correct private net interface by running the scconf(1M) command with the -i option. Then restart the cluster.

Problem: When you start a cluster node, it aborts after a failed reservation attempt, because of an incorrectly specified Ethernet address for one of the private nets.

Solution: Specify the correct Ethernet address of the node by running the scconf(1M) command with the -N option. Then restart the cluster.

Problem: If the cluster contains an invalid quorum device, the first node is unable to join the cluster because it cannot reserve the quorum device.

Solution: Specify a valid quorum device (controller or disk) by running the scconf(1M) command with the -q option. After configuring a valid quorum device, restart the cluster.

Problem: When you try to start the cluster, one node aborts after receiving signals from node 0 to do so.

Solution: The problem might be mismatched CDB files (/etc/opt/SUNWcluster/conf/clustername.cdb). Compare the CDB files on the different nodes using cksum. If they differ, copy the CDB file from the working node to the other node(s). You also might need to copy over the ccd.database.init file from the working node to the other nodes.

Recovering From an Aborted Installation

If your scinstall(1M) session did not run to completion during either the client or server installation process, you can re-run scinstall(1M) after cleaning up the environment using the following procedures.

How to Recover From an Aborted Client Installation
  1. On the administrative workstation, save the /etc/serialports and /etc/clusters files to a safe location, to be restored later.
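
    For example (the /var/tmp destination is only a suggestion):


    # cp -p /etc/clusters /etc/serialports /var/tmp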

  2. On the administrative workstation, use pkgrm to remove the client packages.

  3. Use scinstall(1M) to remove the Sun Cluster 2.2 client packages that have been installed already.


    # cd /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Tools
    # ./scinstall
    
    ============ Main Menu =================
     
    1) Install/Upgrade - Install or Upgrade Server Packages or Install Client Packages.
    2) Remove  - Remove Server or Client Packages.
    3) Change  - Modify cluster or data service configuration
    4) Verify  - Verify installed package sets.
    5) List    - List installed package sets.
     
    6) Quit    - Quit this program.
    7) Help    - The help screen for this menu.
     
    Please choose one of the menu items: [6]:  2
    

  4. Rerun scinstall(1M) using the procedure "How to Prepare the Administrative Workstation and Install the Client Software".

  5. Restore the /etc/serialports and /etc/clusters files you saved in Step 1.

How to Recover From an Aborted Server Installation
  1. If dfstab.logicalhost and vfstab.logicalhost files exist already, save them to a safe location to be restored later.

    Look for the files in /etc/opt/SUNWcluster/conf/hanfs. You will restore these files after re-running scinstall(1M) and configuring the cluster.
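
    For example, on each node (the /var/tmp/hanfs.save destination is only a suggestion):


    # mkdir /var/tmp/hanfs.save
    # cp -p /etc/opt/SUNWcluster/conf/hanfs/dfstab.* \
          /etc/opt/SUNWcluster/conf/hanfs/vfstab.* /var/tmp/hanfs.save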

  2. Use scinstall(1M) to remove the Sun Cluster 2.2 server packages that have been installed already.


    # cd /cdrom/multi_suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Tools
    # ./scinstall
     
    ============ Main Menu =================
     
    1) Install/Upgrade - Install or Upgrade Server Packages or Install Client Packages.
    2) Remove  - Remove Server or Client Packages.
    3) Change  - Modify cluster or data service configuration
    4) Verify  - Verify installed package sets.
    5) List    - List installed package sets.
     
    6) Quit    - Quit this program.
    7) Help    - The help screen for this menu.
     
    Please choose one of the menu items: [6]:  2
    

  3. Manually remove the following Sun Cluster 2.2 directories and files from all nodes.


    Caution -

    The scinstall(1M) command will not remove the SUNWdid package. Do NOT remove the SUNWdid package manually. Removing the package can cause loss of data.


    Note that some of these directories might have been removed already by scinstall(1M).


    # rm /etc/pnmconfig
    # rm /etc/sci.ifconf
    # rm /etc/sma.config
    # rm /etc/sma.ip
    # rm -r /etc/opt/SUNWcluster
    # rm -r /etc/opt/SUNWpnm
    # rm -r /opt/SUNWcluster
    # rm -r /opt/SUNWpnm
    # rm -r /var/opt/SUNWcluster
    

  4. Restart scinstall(1M) to install Sun Cluster 2.2.

    Return to the procedure "How to Install the Server Software" and begin at Step 3.

  5. Configure the cluster.

    Use the procedure "How to Configure the Cluster".

  6. Restore the dfstab.logicalhost and vfstab.logicalhost files you saved in Step 1.

    Before starting the cluster, restore the dfstab.logicalhost and vfstab.logicalhost files to /etc/opt/SUNWcluster/conf/hanfs on all nodes.