Sun Cluster 2.2 Software Installation Guide

3.2 Installation Procedures

This section describes how to install the Solaris operating environment and Sun Cluster client software on the administrative workstation.

3.2.1 How to Prepare the Administrative Workstation and Install the Client Software

After you have installed and configured the hardware, terminal concentrator, and administrative workstation, use this procedure to prepare for Sun Cluster 2.2 installation. See Chapter 2, Planning the Configuration, and complete the installation worksheets in Appendix A, Configuration Worksheets and Examples, before beginning this procedure.


Note -

Use of an administrative workstation is not required. If you do not use an administrative workstation, perform the administrative tasks from one designated node in the cluster.


These are the detailed steps to prepare the administrative workstation and install the client software.

  1. Install the Solaris 2.6 or Solaris 7 operating environment on the administrative workstation.

    All platforms except the E10000 require at least the Entire Distribution Solaris installation, for both the Solaris 2.6 and Solaris 7 operating environments. E10000 systems require the Entire Distribution + OEM.

    You can use the following command to verify the distribution loaded:

    # cat /var/sadm/system/admin/CLUSTER
    

    For details, see "2.2.4 Planning Your Solaris Operating Environment Installation", and the Solaris Advanced System Administration Guide.


    Caution -

    If you install anything less than the Entire Distribution Solaris software set on all nodes, plus the OEM packages for E10000 platforms, your cluster might not be supported by Sun.


  2. Install Solaris patches.

    Check the patch database or contact your local service provider for any hardware or software patches required to run the Solaris operating environment, Sun Cluster 2.2, or your volume management software.

    Install the patches by following the instructions in the README file accompanying each patch. Reboot the workstation if specified in the patch instructions.
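
    For example, an unpacked patch is typically applied with the patchadd(1M) command. The patch ID shown here is only a placeholder for whatever patches your service provider identifies:

    # patchadd /var/tmp/105181-xx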

  3. For convenience, add the tools directory /opt/SUNWcluster/bin to the PATH on the administrative workstation.
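
    For example, in a Bourne or Korn shell you might add a line like the following to root's .profile on the administrative workstation (where you set PATH depends on your site conventions; this is only a sketch):

    PATH=$PATH:/opt/SUNWcluster/bin; export PATH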

  4. Load the Sun Cluster 2.2 CD-ROM on the administrative workstation.

  5. Use scinstall(1M) to install the client packages on the administrative workstation.

    1. As root, invoke scinstall(1M).

      # cd /cdrom/suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Tools
      # ./scinstall
      
       Installing: SUNWscins
      
       Installation of <SUNWscins> was successful.
      
       			Checking on installed package state
       .............
      
       None of the Sun Cluster software has been installed
      
       			<<Press return to continue>>
    2. Select the client package set.

      ==== Install/Upgrade Framework Selection Menu =====================
       Upgrade to the latest Sun Cluster Server packages or select package 
      sets for installation. The list of package sets depends on the Sun 
      Cluster packages that are currently installed.
       
       Choose one:
       1) Upgrade            	Upgrade to Sun Cluster 2.2 Server packages
       2) Server             	Install the Sun Cluster packages needed on a server
       3) Client             	Install the admin tools needed on an admin workstation
       4) Server and Client  	Install both Client and Server packages
      
       5) Close              	Exit this Menu
       6) Quit               	Quit the Program
      
       Enter the number of the package set [6]:  3
      
    3. Choose an install path for the client packages.

      Normally the default location is acceptable.

      What is the path to the CD-ROM image [/cdrom/cdrom0]: /cdrom/suncluster_sc_2_2
      
    4. Install the client packages.

      Specify automatic installation.

      Installing Client packages
       
       Installing the following packages: SUNWscch SUNWccon SUNWccp
       SUNWcsnmp SUNWscsdb
       
                   >>>> Warning <<<<
         The installation process will run several scripts as root.  In
         addition, it may install setUID programs.  If you choose automatic
         mode, the installation of the chosen packages will proceed without
         any user interaction.  If you wish to manually control the install
         process you must choose the manual installation option.
       
       Choices:
       	manual						Interactively install each package
       	automatic						Install the selected packages with no user interaction.
       
       In addition, the following commands are supported:
          list						Show a list of the packages to be installed
          help						Show this command summary
          close						Return to previous menu
          quit						Quit the program
       
       
       Install mode [manual automatic] [automatic]:  automatic
      

      The scinstall(1M) program now installs the client packages. After the packages have been installed, the main scinstall(1M) menu is displayed. From the main menu, you can choose to verify the installation, then quit to exit scinstall(1M).

  6. Change the port number used by the Sun Cluster SNMP daemon and Solaris SNMP (smond).

    The default port used by Sun Cluster SNMP is the same as the default port used by Solaris SNMP; both use port 161. Change the Sun Cluster SNMP port number using the procedure described in the appendix on Sun Cluster SNMP management solutions in the Sun Cluster 2.2 System Administration Guide. You must stop and restart both the snmpd and smond daemons after changing the port number.

  7. Modify the /etc/clusters and /etc/serialports files.

    These files are installed automatically by scinstall(1M). Use the templates included in the files to add your cluster name, physical host names, terminal concentrator name, and serial port numbers, as listed on your installation worksheet. See the clusters(4) and serialports(4) man pages for details.


    Note -

    The serial port number used in the /etc/serialports file is the telnet(1) port number, not the physical port number. Determine the serial port number by adding 5000 to the physical port number. For example, if the physical port number is 6, the serial port number should be 5006.
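
    For example, entries for a two-node cluster might look like the following. The cluster, node, and terminal concentrator names match the examples used elsewhere in this chapter; the telnet port numbers (5002 and 5003, for physical ports 2 and 3) are placeholders for the values on your worksheet.

    # /etc/clusters
    sc-cluster phys-hahost1 phys-hahost2

    # /etc/serialports
    phys-hahost1 sc-tc 5002
    phys-hahost2 sc-tc 5003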


    Proceed to the section "3.2.2 How to Install the Server Software" to install the Sun Cluster 2.2 server software.

3.2.2 How to Install the Server Software

After you have installed the Sun Cluster 2.2 client software on the administrative workstation, use this procedure to install Solaris and the Sun Cluster 2.2 server software on all cluster nodes.


Note -

This procedure assumes you are using an administrative workstation. If you are not, then connect directly to the console of each node using a telnet connection to the terminal concentrator. Install and configure the Sun Cluster software identically on each node.



Note -

For E10000 platforms, you must first log into the System Service Processor (SSP) and connect using the netcon command. Once connected, enter Shift~@ to unlock the console and gain write access.



Caution -

If you already have a volume manager installed and a root disk encapsulated, unencapsulate the root disk before beginning the Sun Cluster installation.


These are the detailed steps to install the server software.

  1. Bring up the Cluster Control Panel from the administrative workstation.

    In this example, the cluster name is sc-cluster.

    # ccp sc-cluster
    

  2. Start the Cluster Console in console mode.

    From the Cluster Control Panel, select the Cluster Console, console mode. The Cluster Console (CC) displays one window for each cluster node, plus a small common window; input typed in the common window is sent to all of the node windows simultaneously.

    Note -

    Individually, the windows act as vt100 terminal windows. Set your TERM type to vt100.
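
    For example, you might set the terminal type in each console window as follows (Bourne shell syntax):

    # TERM=vt100; export TERM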


  3. Use the Cluster Console common window to install Solaris 2.6 or Solaris 7 on all nodes.

    For details, see the Solaris Advanced System Administration Guide, and the Solaris installation guidelines described in Chapter 2, Planning the Configuration.

    1. Partition the local disks on each node according to Sun Cluster and volume manager guidelines.

      For partitioning guidelines, see "2.2.4 Planning Your Solaris Operating Environment Installation".

    2. Configure the OpenBoot PROM.

      If you want to boot from a SPARCstorage Array and did not already configure the shared boot device during hardware installation, configure it now. See "2.2.8.4 Booting From a SPARCstorage Array" for details about setting up the shared boot device. If your configuration includes copper-connected SCSI storage devices such as Sun StorEdge MultiPacks, Sun StorEdge A1000s, and Sun StorEdge A3x00s, you also must configure the scsi-initiator-id. See your hardware installation manuals for details about configuring the scsi-initiator-id.
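
      For example, the global scsi-initiator-id is commonly changed at the OpenBoot PROM ok prompt on one node so that it does not conflict with the default value of 7 used by the other node. The value 6 shown here is only an illustration; your hardware installation manual is the authoritative reference:

      ok setenv scsi-initiator-id 6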

  4. Update the naming service.

    If a host name database such as NIS, NIS+, or DNS is used at your site, update the naming service with all logical and physical host names to be used in the Sun Cluster configuration.

  5. Use the Cluster Console common window to log into all nodes.

  6. Install Solaris patches.

    Check the patch database or contact your local service provider for any hardware or software patches required to run the Solaris operating environment, Sun Cluster 2.2, and any other software installed on your configuration.

    Install any required patches by following the instructions in the README file accompanying each patch, unless instructed otherwise by the Sun Cluster documentation or your service provider.

    Reboot all nodes if specified in the patch instructions.

  7. Modify the /etc/nsswitch.conf file.

    Ensure that "hosts," "services," and "group" lookups are directed to files first. For example:

    hosts: files nisplus
     services: files nisplus
     group: files nisplus
  8. (Optional) If your cluster serves more than one subnet, configure network adapter interfaces for additional secondary public networks.
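
    For example, on Solaris a network interface is usually configured persistently by creating an /etc/hostname.<interface> file and adding the corresponding host name to /etc/hosts. The interface (qe1) and host name (phys-hahost1-net2) below are illustrations only, not values required by Sun Cluster:

    # echo phys-hahost1-net2 > /etc/hostname.qe1
    # ifconfig qe1 plumb
    # ifconfig qe1 phys-hahost1-net2 netmask + broadcast + up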

  9. As root, invoke scinstall(1M) from the CC common window.

    # cd /cdrom/suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Tools
    # ./scinstall
    
     Installing: SUNWscins
    
     Installation of <SUNWscins> was successful.
    
     			Checking on installed package state............
    
     			<<Press return to continue>>
  10. Select the server package set.

    ==== Install/Upgrade Framework Selection Menu ==========================
     You can upgrade to the latest Sun Cluster packages or select package
     sets for installation, depending on the current state of installation.
     
     Choose one:
     1) Upgrade									Upgrade to Sun Cluster 2.2
     2) Server									All of the Sun Cluster packages needed on a server
     3) Client									All of the admin tools needed on an admin workstation
     4) Server and Client									All of the Client and Server packages
     
     5) Close									Exit this Menu
     6) Quit									Quit the Program
     
     Enter the number of the package set [6]:  2
    

    Press Return to continue.

  11. Install the server packages.

    Specify automatic installation. The scinstall(1M) program installs the server packages.

    Installing Server packages
     
     Installing the following packages: SUNWsclb SUNWsc SUNWccd SUNWcmm SUNWff 
     SUNWmond SUNWpnm SUNWscman SUNWsccf SUNWscmgr
     
                 >>>> Warning <<<<
       The installation process will run several scripts as root.  In
       addition, it may install setUID programs.  If you choose automatic
       mode, the installation of the chosen packages will proceed without
       any user interaction.  If you wish to manually control the install
       process you must choose the manual installation option.
     
     Choices:
     	manual					Interactively install each package
     	automatic					Install the selected packages with no user interaction.
     
     In addition, the following commands are supported:
     	list					Show a list of the packages to be installed
     	help					Show this command summary
     	close					Return to previous menu
     	quit					Quit the program
    
     Install mode [manual automatic] [automatic]: automatic
    

    The server package set is now installed.

  12. Select your volume manager.

    In this example, Solstice DiskSuite is specified.

    Volume Manager Selection
     
     Please choose the Volume Manager that will be used
     on this node:
     
     1) Cluster Volume Manager (CVM)
     2) Sun Enterprise Volume Manager (SEVM)
     3) Solstice DiskSuite (SDS)
    
     Choose the Volume Manager: 3
    
     Installing Solstice DiskSuite support packages.
     	Installing "SUNWdid" ... done
     	Installing "SUNWmdm" ... done
    
          ---------WARNING---------
     Solstice DiskSuite (SDS) will need to be installed before the cluster can
     be started.
    
             <<Press return to continue>>

    Note -

    You will still have to install the volume manager software from the Solstice DiskSuite, SSVM, or CVM media after you complete the cluster installation. This step installs only supporting software (such as drivers).



    Caution -

    If you perform upgrades or package removals with scinstall(1M), scinstall(1M) will not remove the SUNWdid package. Do NOT remove the SUNWdid package manually. Removing the package can cause loss of data.


  13. Specify the cluster name.

    What is the name of the cluster? sc-cluster
    
  14. Specify the number of potential nodes and active nodes in your cluster.

    You can specify up to four nodes. The active nodes are those you will physically connect and include in the cluster now. You must specify all potential nodes at this time; you will be asked for information such as node names and IP addresses. Later, you can change the status of nodes from potential to active by using the scconf(1M) command. See the section on adding and removing cluster nodes in the Sun Cluster 2.2 System Administration Guide.


    Note -

    If you want to add a node later that was not already specified as a potential node, you will have to reconfigure the entire cluster.


    How many potential nodes will sc-cluster have [4]? 3
    
     How many of the initially configured nodes will be active [3]? 3
    

    Note -

    If your cluster will have two active nodes and only two disk strings and your volume manager is Solstice DiskSuite, you must configure mediators. Do so after you configure Solstice DiskSuite but before you bring up the cluster. See the chapter on using dual-string mediators in the Sun Cluster 2.2 System Administration Guide for the procedure.


  15. Configure the private network interfaces, using the common window.

    Select either Ethernet or Scalable Coherent Interface (SCI).

    What type of network interface will be used for this configuration?
     (ether|SCI) [SCI]?

    If you choose SCI, the following screen is displayed. Answer the questions using the information on your installation worksheet. Note that the node name field is case-sensitive; the node names specified here are checked against the /etc/nodename file by scinstall.

    What is the hostname of node 0 [node0]? phys-hahost1
    
     What is the hostname of node 1 [node1]? phys-hahost2
    ...

    Note -

    When nodes are connected through an SCI switch, the connection of the nodes to the switch port determines the order of the nodes in the cluster. The node number must correspond to the port number. For example, if a node named phys-hahost1 is connected to port 0, then phys-hahost1 must be node 0. In addition, each node must be connected to the same port on each switch. For example, if phys-hahost1 is connected to port 0 on switch 0, it also must be connected to port 0 on switch 1.


    If you choose Ethernet, the following screen is displayed. Answer the questions using information from the installation worksheet. Complete the network configuration for all nodes in the cluster.

    What is the hostname of node 0 [node0]? phys-hahost1
    
     What is phys-hahost1's first private network interface [hme0]? hme0
    
     What is phys-hahost1's second private network interface [hme1]? hme1
    
     You will now be prompted for Ethernet addresses of
     the host. There is only one Ethernet address for each host
     regardless of the number of interfaces a host has. You can get
     this information in one of several ways:
    
     1. use the 'banner' command at the ok prompt,
     2. use the 'ifconfig -a' command (need to be root),
     3. use ping, arp and grep commands. ('ping exxon; arp -a | grep exxon')
    
     Ethernet addresses are given as six hexadecimal bytes separated by colons.
     ie, 01:23:45:67:89:ab
    
     What is phys-hahost1's ethernet address? 01:23:45:67:89:ab
    
     What is the hostname of node 1 [node1]?
     ...
  16. Specify whether the cluster will support any data services and if so, whether to set up logical hosts.

    Will this cluster support any HA data services (yes/no) [yes]? yes
    Okay to set up the logical hosts for those HA services now (yes/no) [yes]? yes
    
  17. Set up primary public networks and subnets.

    Enter the name of the network controller for the primary network for each node in the cluster.

    What is the primary public network controller for "phys-hahost1"?  hme2
    What is the primary public network controller for "phys-hahost2"?  hme2
    
  18. Set up secondary public subnets.

    If the cluster nodes will provide data services to more than a single public network, answer yes to this question:

    Does the cluster serve any secondary public subnets (yes/no) [no]?  yes
    
  19. Name the secondary public subnets.

    Assign a name to each subnet. Note that these names are used only for convenience during configuration. They are not stored in the configuration database and need not match the network names returned by networks(4).

    Please enter a unique name for each of these additional subnets:
     
             Subnet name (^D to finish):  sc-cluster-net1
            Subnet name (^D to finish):  sc-cluster-net2
            Subnet name (^D to finish):  ^D
    
     The list of secondary public subnets is:
     
             sc-cluster-net1
             sc-cluster-net2
    
     Is this list correct (yes/no) [yes]?
  20. Specify network controllers for the subnets.

    For each secondary subnet, specify the name of the network controller used on each cluster node.

    For subnet "sc-cluster-net1" ...
             What network controller is used for "phys-hahost1"?  qe0
            What network controller is used for "phys-hahost2"?  qe0
    
     For subnet "sc-cluster-net2" ...
             What network controller is used for "phys-hahost1"?  qe1
            What network controller is used for "phys-hahost2"?  qe1
    
  21. Initialize Network Adapter Failover (NAFO).

    You must initialize NAFO, and you must run pnmset(1M) later to configure the adapters. See the pnmset(1M) man page and the chapter on administering network interfaces in the Sun Cluster 2.2 System Administration Guide for more information about NAFO and PNM.

    Initialize NAFO on "phys-hahost1" with one ctlr per group (yes/no) [yes]?  y
    
  22. Set up logical hosts.

    Enter the list of logical hosts you want to add:
     
             Logical host (^D to finish):  hahost1
            Logical host (^D to finish):  hahost2
            Logical host (^D to finish):  ^D
    
     The list of logical hosts is:
     
             hahost1
            hahost2
    
     Is this list correct (yes/no) [yes]? y
    

    Note -

    You can add logical hosts or change the logical host configuration after the cluster is up by using scconf(1M) or the "Change" option to scinstall(1M). See the scinstall(1M) and scconf(1M) man pages, and Step 11 in the procedure "3.2.3 How to Configure the Cluster" for more information.



    Note -

    If you will be using the Sun Cluster HA for SAP data service, do not set up logical hosts now. Set them up with scconf(1M) after the cluster is up. See the scconf(1M) man page and Chapter 10, Installing and Configuring Sun Cluster HA for SAP, for more information.


  23. Assign default masters to logical hosts.

    You must specify the name of a physical host in the cluster as a default master for each logical host.

    What is the name of the default master for "hahost1"?  phys-hahost1
    

    Specify the host names of other physical hosts capable of mastering each logical host.

    Enter a list of other nodes capable of mastering "hahost1":
     
             Node name:  phys-hahost2
            Node name (^D to finish): ^D
    
     The list that you entered is:
     
             phys-hahost1
             phys-hahost2
     
     Is this list correct (yes/no) [yes]? yes
    
  24. Enable automatic failback.

    Answering yes enables the logical host to fail back automatically to its default master when the default master rejoins the cluster.

    Enable automatic failback for "hahost1" (yes/no) [no]?  yes
    
  25. Assign net names and disk group names.

    What is the net name for "hahost1" on subnet "sc-cluster-net1"? hahost1-pub1
    What is the net name for "hahost1" on subnet "sc-cluster-net2"? hahost1-pub2
    Disk group name for logical host "hahost1" [hahost1]?
     Is it okay to add logical host "hahost1" now (yes/no) [yes]? yes
    
     What is the name of the default master for "hahost2"?
     ...

    Continue until all logical hosts are set up.


    Note -

    To set up multiple disk groups on a single logical host, use the scconf(1M) command after you have used scinstall(1M) to configure and bring up the cluster. See the scconf(1M) man page for details.


  26. If your volume manager is SSVM or CVM and there are more than two nodes in the cluster, configure failure fencing.

    This screen appears only for clusters with more than two nodes that use SSVM or CVM.

    Configuring Failure Fencing
    
     What type of architecture does phys-hahost1 have (E10000|other) [other]?
    
     What is the name of the Terminal Concentrator connected to the serial
     port of phys-hahost1 [NO_NAME]? sc-tc
    
     Is 123.456.789.1 the correct IP address for this Terminal Concentrator
     (yes | no) [yes]?
    
     What is the password for root of the Terminal Concentrator [?]
     Please enter the password for root again [?]
    
     Which physical port on the Terminal Concentrator is phys-hahost1 connected to:
    
     What type of architecture does phys-hahost2 have (E10000|other) [other]?
     Which Terminal Concentrator is phys-hahost2 connected to:
    
     0) sc-tc				123.456.789.1
     1) Create A New Terminal Concentrator Entry
    
     Select a device:
    
     Which physical port on the Terminal Concentrator is phys-hahost2 connected to:
    
     What type of architecture does phys-hahost3 have (E10000|other) [other]?
     Which Terminal Concentrator is phys-hahost3 connected to:
    
     0) sc-tc				123.456.789.1
     1) Create A New Terminal Concentrator Entry
    
     Select a device:
    
     Which physical port on the Terminal Concentrator is phys-hahost3 connected to:
    
     Finished Configuring Failure Fencing

    Caution -

    The SSP password is used in failure fencing. Failure to correctly set the SSP password might cause unpredictable results in the event of a node failure. If you change the SSP password, you must change it on the cluster as well, using scconf(1M). Otherwise, failure fencing will be disabled because the SSP cannot connect to the failed node. See the scconf(1M) man page and the Sun Cluster 2.2 System Administration Guide for details about changing the SSP password.


  27. If your volume manager is SSVM, your cluster has more than two nodes, and you have a direct-attached device, select a nodelock port.

    The port you select must be on a terminal concentrator attached to a node in the cluster.

    Does the cluster have a disk storage device that is 
    connected to all nodes in the cluster [no]? yes
    
    Which unused physical port on the Terminal Concentrator is to be used for
    node locking:
  28. If your volume manager is SSVM or CVM, select quorum devices.

    If your volume manager is SSVM or CVM, you are prompted to select quorum devices. The screen display varies according to your cluster topology. Select a device from the list presented. This example shows a two-node cluster.

    Getting device information for reachable nodes in the cluster.
     This may take a few seconds to a few minutes...done
     Select quorum device for the following nodes:
     0 (phys-hahost1)
     and
     1 (phys-hahost2)
    
     1) SSA:000000779A16
     2) SSA:000000741430
     3) DISK:c0t1d0s2:01799413
     Quorum device: 1
    ...
     SSA with WWN 000000779A16 has been chosen as the quorum device.
     
     Finished Quorum Selection
  29. If your cluster has more than two nodes, select the Cluster Membership Monitor behavior.

    In the event that the cluster is partitioned into two or more subsets of
     nodes, the Cluster Membership Monitor may request input from the operator as
     to how it should proceed (abort or form a cluster) within each subset.  The
     Cluster Membership Monitor can be configured to make a policy-dependent
     automatic selection of a subset to become the next reconfiguration of
     the cluster.
     
     In case the cluster partitions into subsets, which subset should stay up?
         ask)    the system will always ask the operator.
         select) automatic selection of which subset should stay up.
    
     Please enter your choice (ask|select) [ask]: 

    If you choose "select," you are asked to choose between two policies:

    Please enter your choice (ask|select) [ask]: select
    
     You have a choice of two policies: 
    
     	lowest -- The subset containing the node with the lowest node ID value
     					automatically becomes the new cluster.  All other subsets must be
     				manually aborted.
    
     	highest -- The subset containing the node with the highest node ID value
     				automatically becomes the new cluster.  All other subsets must be
     				manually aborted.
    
     Select the selection policy for handling partitions (lowest|highest)
     [lowest]:

    The scinstall(1M) program now finishes installing the Sun Cluster 2.2 server packages.

    Installing ethernet Network Interface packages.
     
     Installing the following packages: SUNWsma
         Installing "SUNWsma" ... done
     
             Checking on installed package state.........
  30. Select your data services.

    Note that Sun Cluster HA for NFS and Informix-Online XPS are installed automatically with the Server package set.

    ==== Select Data Services Menu ==========================
     
     Please select which of the following data services are to 
     be installed onto this cluster.  Select singly, or in a 
     space separated list.
     Note: HA-NFS and Informix Parallel Server (XPS) are 
     installed automatically with the Server Framework.
     
     You may de-select a data service by selecting it a second time.
     
     Select DONE when finished selecting the configuration.
     
     				1) Sun Cluster HA for Oracle
     				2) Sun Cluster HA for Informix
     				3) Sun Cluster HA for Sybase
     				4) Sun Cluster HA for Netscape
     				5) Sun Cluster HA for Netscape LDAP
     				6) Sun Cluster HA for Lotus
     				7) Sun Cluster HA for Tivoli
     				8) Sun Cluster HA for SAP
     				9) Sun Cluster HA for DNS
     				10) Sun Cluster for Oracle Parallel Server
    
     INSTALL				11) No Data Services
     				12) DONE
     
     Choose a data service: 3
    
     What is the path to the CD-ROM image [/cdrom/suncluster_sc_2_2]:
    
     Install mode [manual automatic] [automatic]:  automatic
    ...
     Select DONE when finished selecting the configuration.
     ...

    Note -

    Do not install the OPS data service unless you are using Cluster Volume Manager. OPS will not run with Sun StorEdge Volume Manager or Solstice DiskSuite.


  31. Quit scinstall(1M).

    ============ Main Menu =================
     
     1) Install/Upgrade - Install or Upgrade Server Packages or Install Client
     								Packages.
     2) Remove  - Remove Server or Client Packages.
     3) Change  - Modify cluster or data service configuration
     4) Verify  - Verify installed package sets.
     5) List    - List installed package sets.
    
     6) Quit    - Quit this program.
     7) Help    - The help screen for this menu.
    
     Please choose one of the menu items: [6]:  6
     ...
    

    The scinstall(1M) program now verifies installation of the packages you selected.

    ==== Verify Package Installation ==========================
     Installation
     	All of the install packages have been installed
     Framework
     	All of the client packages have been installed
     	All of the server packages have been installed
     Communications
     	All of the SMA packages have been installed
     Data Services
     	None of the Sun Cluster HA for Oracle packages have been installed
     	None of the Sun Cluster HA for Informix packages have been installed
     	None of the Sun Cluster HA for Sybase packages have been installed
     	None of the Sun Cluster HA for Netscape packages have been installed
     	None of the Sun Cluster HA for Lotus packages have been installed
     	None of the Sun Cluster HA for Tivoli packages have been installed
     	None of the Sun Cluster HA for SAP packages have been installed
     	None of the Sun Cluster HA for Netscape LDAP packages have been installed
     	None of the Sun Cluster HA for Oracle Parallel Server packages have been
     installed
     #

    Proceed to the section "3.2.3 How to Configure the Cluster" to configure the cluster.

3.2.3 How to Configure the Cluster

After installing the Sun Cluster 2.2 client and server packages, complete the following post-installation tasks.

These are the detailed steps to configure the cluster.

  1. Set up the software directory paths on all nodes.

    1. On all nodes, set your PATH to include /sbin, /usr/sbin, /opt/SUNWcluster/bin, and /opt/SUNWpnm/bin. Set your MANPATH to include /opt/SUNWcluster/man.

    2. On all nodes, set your PATH and MANPATH to include the volume manager specific paths.

      For SSVM and CVM, set your PATH to include /opt/SUNWvxva/bin and /etc/vx/bin. Set your MANPATH to include /opt/SUNWvxva/man and /opt/SUNWvxvm/man.

      For Solstice DiskSuite, set your PATH to include /usr/opt/SUNWmd/sbin. Set your MANPATH to include /usr/opt/SUNWmd/man.

    3. If you are using Scalable Coherent Interface (SCI) for the private interfaces, set the SCI paths.

      Set your PATH to include /opt/SUNWsci/bin, /opt/SUNWscid/bin, and /opt/SUNWsma/bin. Set your MANPATH to include /opt/SUNWsma/man.
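
      For example, a Bourne or Korn shell profile on a node that uses Solstice DiskSuite and Ethernet private interfaces might contain lines like these (a sketch only; substitute the volume manager and SCI paths that apply to your configuration):

      PATH=/sbin:/usr/sbin:/opt/SUNWcluster/bin:/opt/SUNWpnm/bin:/usr/opt/SUNWmd/sbin:$PATH
      MANPATH=/opt/SUNWcluster/man:/usr/opt/SUNWmd/man:$MANPATH
      export PATH MANPATH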

  2. Add IP addresses to the /.rhosts file.

    You must include the following hardcoded private network IP addresses in the /.rhosts files on all nodes. For a two-node cluster, include only the addresses specified for nodes 0 and 1 below. For a three-node cluster, include the addresses specified for nodes 0, 1, and 2. For a four-node cluster, include all of the addresses listed below:

    # node 0
     204.152.65.33
     204.152.65.1
     204.152.65.17
    
     # node 1
     204.152.65.34
     204.152.65.2
     204.152.65.18
    
     # node 2
     204.152.65.35
     204.152.65.3
     204.152.65.19
    
     # node 3
     204.152.65.36
     204.152.65.4
     204.152.65.20

    Note -

    If you fail to include the private network IP addresses in /.rhosts, the hadsconfig(1M) script will be unable to automatically replicate data service configuration information to all nodes when you configure your data services. You will then need to replicate the configuration file manually as described in the hadsconfig(1M) man page.


  3. If you are using SCI for the private interfaces and if you specified any potential nodes during server software installation, modify the sm_config file.

    During server software installation with scinstall(1M), you specified active and potential nodes. Edit the sm_config file now to comment out the host names of the potential nodes, by prepending the characters "_%" to those host names. In this example sm_config file, phys-host1 and phys-host2 are the active nodes, and phys-host3 and phys-host4 are potential nodes to be added to the cluster later.

    HOST 0	= phys-host1
     HOST 1	= phys-host2
     HOST 2	= _%phys-host3
     HOST 3	= _%phys-host4
  4. If you are using SCI for the private interfaces, configure the switches with the sm_config(1M) command.

    You must edit a copy of the sm_config template file (template.sc located in /opt/SUNWsma/bin/Examples) before running the sm_config(1M) command. See the sm_config(1M) man page and the procedure describing how to add switches and SCI cards in the Sun Cluster 2.2 System Administration Guide for details.


    Caution -

    Run the sm_config(1M) command on only one node.


    # sm_config -f templatefile
    
  5. Install Sun Cluster 2.2 patches.

    Check the patch database or contact your local service provider for any hardware or software patches required to run Sun Cluster 2.2.

    Install any required patches by following the instructions in the README file accompanying each patch.

  6. Reboot all nodes.

    This reboot creates device files for the Sun Cluster device drivers installed by scinstall(1M), and also might be required by some patches you installed in Step 5.


    Caution -

    You must reboot all nodes at this time, even if you did not install SCI or patches.


  7. (SSVM or CVM only) Install and configure SSVM or CVM.

    Install and configure your volume manager and volume manager patches, using your volume manager documentation.

    This process includes installing the volume manager and patches, creating plexes and volumes, setting up the HA administrative file system (SSVM only), and updating the vfstab.logicalhost files (SSVM only). Refer to Chapter 2, Planning the Configuration, and to Appendix C, Configuring Sun StorEdge Volume Manager and Cluster Volume Manager, for details. For CVM, refer also to the section on installing Cluster Volume Manager in the Sun Cluster 2.2 Cluster Volume Manager Guide.

    Create and populate disk groups and volumes now, but release them before continuing.

  8. Configure NAFO backup groups, if you did not do so already.

    During initial installation, you can use the scinstall(1M) command to install the PNM package (SUNWpnm), configure one controller per NAFO backup group, and initialize PNM.


    Note -

    You must configure a public network adapter with either scinstall(1M) or pnmset(1M), even if you have only one public network connection per node.


    Run the pnmset(1M) command now if you did not already use scinstall(1M) to configure controllers and initialize PNM, or if you want to assign more than one controller per NAFO backup group. The pnmset(1M) command runs as an interactive script.

    # /opt/SUNWpnm/bin/pnmset
    

    See the chapter on administering network interfaces in the Sun Cluster 2.2 System Administration Guide or the pnmset(1M) man page for details.

  9. Start the cluster.


    Note -

    If you are using Solstice DiskSuite and you set up logical hosts as part of the server software installation (Step 22 of the procedure "3.2.2 How to Install the Server Software"), you will see error messages as you start the cluster and it attempts to bring the logical hosts online. The messages will indicate that the Solstice DiskSuite disksets have not been set up. You can safely ignore these messages as you will set up the disksets in Step 10.


    1. Run the following command on one node.

      # scadmin startcluster phys-hahost1 sc-cluster
      

      Note -

      If your volume manager is Cluster Volume Manager, you must set up shared disk groups at this point, before the other nodes are added to the cluster.


    2. Add all other nodes to the cluster by running the following command from each node being added.

      # scadmin startnode
      
    3. Verify that the cluster is running.

      From any cluster node, check activity with hastat(1M):

      # hastat
      
  10. (Solstice DiskSuite only) Install and configure Solstice DiskSuite.

    This process includes installing the volume manager and patches, creating disksets, setting up the HA administrative file system, and updating the vfstab.logicalhost files. Refer to Chapter 2, Planning the Configuration, and to Appendix B, Configuring Solstice DiskSuite, for details.

    Create and populate disk groups and volumes now, but release them before continuing.

    If you have a two-node configuration with only two disk strings, you also must set up mediators. Do so after configuring Solstice DiskSuite. See the chapter on using dual-string mediators in the Sun Cluster 2.2 System Administration Guide for instructions.

  11. Add logical hosts, if you did not do so already.

    Use the "Change" option to scinstall(1M) to add and configure logical hosts, if you did not set up all logical hosts during initial installation, or if you want to change the logical host configuration.

    To set up multiple disk groups on a single logical host, you must use the scconf(1M) command, after you have brought up the cluster. See the scconf(1M) man page for details.

    See the section on adding and removing logical hosts in the Sun Cluster 2.2 System Administration Guide, for more information.


    Note -

    When you use scinstall(1M) to add logical hosts initially, you run the command from all hosts before the cluster has been brought up. When you use scinstall(1M) to reconfigure existing logical hosts, you run the command from only one node while the cluster is up.


  12. Add logical host names to the /etc/hosts files on all nodes.

    For example:

    #
     # Internet host table
     #
     127.0.0.1	 	 		 	 	localhost
     123.168.65.23	 	 phys-hahost1      loghost
     123.146.84.36	 	 123.146.84.36
     123.168.65.21	 	 hahost1
     123.168.65.22	 	 hahost2
  13. Bring the logical hosts online.

    Use haswitch(1M) to force a cluster reconfiguration that will cause all logical hosts to be mastered by their default masters.

    # haswitch -r
    
  14. (Optional) If your cluster has only two nodes and your volume manager is SSVM, configure the shared CCD volume.

    Use the procedures described in Appendix C, Configuring Sun StorEdge Volume Manager and Cluster Volume Manager, to configure a shared CCD volume.

  15. Configure and activate the HA data services.

    See the relevant data service chapter in this book, and the specific data service documentation for details.

  16. Set up and start Sun Cluster Manager.

    Sun Cluster Manager is used to monitor the cluster. For instructions, see the Sun Cluster 2.2 Release Notes and the section on monitoring the Sun Cluster servers with Sun Cluster Manager in the Sun Cluster 2.2 System Administration Guide.

    This completes the cluster configuration.