Sun Cluster 2.2 Software Installation Guide

3.2.2 How to Install the Server Software

After you have installed the Sun Cluster 2.2 client software on the administrative workstation, use this procedure to install Solaris and the Sun Cluster 2.2 server software on all cluster nodes.


Note -

This procedure assumes you are using an administrative workstation. If you are not, then connect directly to the console of each node using a telnet connection to the terminal concentrator. Install and configure the Sun Cluster software identically on each node.
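
For example, assuming the terminal concentrator is named sc-tc and a node console is wired to its physical port 2 (the name and port number here are examples only; on the Sun terminal concentrator, telnet port 5000 plus the physical port number typically reaches that serial port):

    # telnet sc-tc 5002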



Note -

For E10000 platforms, you must first log into the System Service Processor (SSP) and connect using the netcon command. Once connected, enter Shift~@ to unlock the console and gain write access.
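
For example, a minimal sketch, assuming the SSP host is named cluster-ssp and you log in as the ssp user (your SSP host name and prompt will differ):

    # telnet cluster-ssp
    ...
    ssp% netcon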



Caution -

If you already have a volume manager installed and a root disk encapsulated, unencapsulate the root disk before beginning the Sun Cluster installation.


These are the high-level steps to install the server software: install the Solaris operating environment and any required patches on all cluster nodes, update the naming service and the /etc/nsswitch.conf file, run scinstall(1M) on all nodes to install the server packages and volume manager support, configure the cluster (nodes, private and public networks, logical hosts, failure fencing, quorum devices, and partition behavior), and select the data services to install.

These are the detailed steps to install the server software.

  1. Bring up the Cluster Control Panel from the administrative workstation.

    In this example, the cluster name is sc-cluster.

    # ccp sc-cluster
    

  2. Start the Cluster Console in console mode.

    From the Cluster Control Panel, select the Cluster Console, console mode. The Cluster Console (CC) will display one window for each cluster node, plus a small common window that you can use to command all windows simultaneously.

    Note -

    Individually, the windows act as vt100 terminal windows. Set your TERM environment variable to vt100.
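
    For example, in the Bourne shell:

    # TERM=vt100; export TERM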


  3. Use the Cluster Console common window to install Solaris 2.6 or Solaris 7 on all nodes.

    For details, see the Solaris Advanced System Administration Guide, and the Solaris installation guidelines described in Chapter 2, Planning the Configuration.

    1. Partition the local disks on each node according to Sun Cluster and volume manager guidelines.

      For partitioning guidelines, see "2.2.4 Planning Your Solaris Operating Environment Installation".

    2. Configure the OpenBoot PROM.

      If you want to boot from a SPARCstorage Array, you must configure the shared boot device, if you did not do so already during hardware installation. See "2.2.8.4 Booting From a SPARCstorage Array", for details about setting up the shared boot device. If your configuration includes copper-connected SCSI storage devices such as Sun StorEdge MultiPacks, Sun StorEdge A1000s, and Sun StorEdge A3x00s, you also need to configure the scsi-initiator-id. See your hardware installation manuals for details about configuring the scsi-initiator-id.
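
      For example, a hedged sketch of checking and changing the global scsi-initiator-id from the default of 7 to 6 on one node at the OpenBoot PROM ok prompt (whether you change the global value or set it per bus with an nvramrc script depends on your hardware; follow your hardware installation manuals):

      ok printenv scsi-initiator-id
      ok setenv scsi-initiator-id 6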

  4. Update the naming service.

    If a host name database such as NIS, NIS+, or DNS is used at your site, update the naming service with all logical and physical host names to be used in the Sun Cluster configuration.
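
    If you also maintain local /etc/hosts files, add the same entries there. The following sketch uses this guide's example host names with hypothetical addresses; substitute the values from your installation worksheet:

    # Physical hosts
    192.168.1.1     phys-hahost1
    192.168.1.2     phys-hahost2
    # Logical hosts
    192.168.1.11    hahost1
    192.168.1.12    hahost2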

  5. Use the Cluster Console common window to log into all nodes.

  6. Install Solaris patches.

    Check the patch database or contact your local service provider for any hardware or software patches required to run the Solaris operating environment, Sun Cluster 2.2, and any other software installed on your configuration.

    Install any required patches by following the instructions in the README file accompanying each patch, unless instructed otherwise by the Sun Cluster documentation or your service provider.

    Reboot all nodes if specified in the patch instructions.
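
    For example, a minimal sketch of adding a patch with patchadd(1M); the patch ID shown is a placeholder:

    # cd /var/tmp
    # patchadd ./123456-01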

  7. Modify the /etc/nsswitch.conf file.

    Ensure that "hosts," "services," and "group" lookups are directed to files first. For example:

    hosts: files nisplus
     services: files nisplus
     group: files nisplus
  8. (Optional) If your cluster serves more than one subnet, configure network adapter interfaces for additional secondary public networks.
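
    For example, a minimal sketch of plumbing one additional public network interface, assuming the controller is qe0 and the host name phys-hahost1-qe0 has already been added to the naming service (creating the /etc/hostname.qe0 file configures the interface on subsequent boots):

    # echo phys-hahost1-qe0 > /etc/hostname.qe0
    # ifconfig qe0 plumb
    # ifconfig qe0 phys-hahost1-qe0 netmask + broadcast + up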

  9. As root, invoke scinstall(1M) from the CC common window.

    # cd /cdrom/suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.x/Tools
    # ./scinstall
    
     Installing: SUNWscins
    
     Installation of <SUNWscins> was successful.
    
     			Checking on installed package state............
    
     			<<Press return to continue>>
  10. Select the server package set.

    ==== Install/Upgrade Framework Selection Menu ==========================
     You can upgrade to the latest Sun Cluster packages or select package
     sets for installation, depending on the current state of installation.
     
     Choose one:
     1) Upgrade              Upgrade to Sun Cluster 2.2
     2) Server               All of the Sun Cluster packages needed on a server
     3) Client               All of the admin tools needed on an admin workstation
     4) Server and Client    All of the Client and Server packages
     
     5) Close                Exit this Menu
     6) Quit                 Quit the Program
     
     Enter the number of the package set [6]:  2
    

    Press Return to continue.

  11. Install the server packages.

    Specify automatic installation. The scinstall(1M) program installs the server packages.

    Installing Server packages
     
     Installing the following packages: SUNWsclb SUNWsc SUNWccd SUNWcmm SUNWff 
     SUNWmond SUNWpnm SUNWscman SUNWsccf SUNWscmgr
     
                 >>>> Warning <<<<
       The installation process will run several scripts as root.  In
       addition, it may install setUID programs.  If you choose automatic
       mode, the installation of the chosen packages will proceed without
       any user interaction.  If you wish to manually control the install
       process you must choose the manual installation option.
     
     Choices:
         manual       Interactively install each package
         automatic    Install the selected packages with no user interaction.
     
     In addition, the following commands are supported:
         list         Show a list of the packages to be installed
         help         Show this command summary
         close        Return to previous menu
         quit         Quit the program
    
     Install mode [manual automatic] [automatic]: automatic
    

    The server package set is now installed.

  12. Select your volume manager.

    In this example, Solstice DiskSuite is specified.

    Volume Manager Selection
     
     Please choose the Volume Manager that will be used
     on this node:
     
     1) Cluster Volume Manager (CVM)
     2) Sun Enterprise Volume Manager (SEVM)
     3) Solstice DiskSuite (SDS)
    
     Choose the Volume Manager: 3
    
     Installing Solstice DiskSuite support packages.
     	Installing "SUNWdid" ... done
     	Installing "SUNWmdm" ... done
    
          ---------WARNING---------
     Solstice DiskSuite (SDS) will need to be installed before the cluster can
     be started.
    
             <<Press return to continue>>

    Note -

    You will still have to install the volume manager software from the Solstice DiskSuite, SSVM, or CVM media after you complete the cluster installation. This step installs only supporting software (such as drivers).
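
    For example, a sketch of adding the core Solstice DiskSuite package later from its product media; the mount point and package name here are assumptions, so check the Solstice DiskSuite installation documentation for the actual package list:

    # cd /cdrom/cdrom0
    # pkgadd -d . SUNWmd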



    Caution -

    If you perform upgrades or package removals with scinstall(1M), scinstall(1M) will not remove the SUNWdid package. Do NOT remove the SUNWdid package manually. Removing the package can cause loss of data.


  13. Specify the cluster name.

    What is the name of the cluster? sc-cluster
    
  14. Specify the number of potential nodes and active nodes in your cluster.

    You can specify up to four nodes. The active nodes are those you will physically connect and include in the cluster now. You must specify all potential nodes at this time; you will be asked for information such as node names and IP addresses. Later, you can change the status of nodes from potential to active by using the scconf(1M) command. See the section on adding and removing cluster nodes in the Sun Cluster 2.2 System Administration Guide.


    Note -

    If you want to add a node later that was not already specified as a potential node, you will have to reconfigure the entire cluster.


    How many potential nodes will sc-cluster have [4]? 3
    
     How many of the initially configured nodes will be active [3]? 3
    

    Note -

    If your cluster will have two active nodes and only two disk strings and your volume manager is Solstice DiskSuite, you must configure mediators. Do so after you configure Solstice DiskSuite but before you bring up the cluster. See the chapter on using dual-string mediators in the Sun Cluster 2.2 System Administration Guide for the procedure.
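
    For example, a hedged sketch of adding each node as a mediator host for a diskset, run after the Solstice DiskSuite disksets exist (names are this guide's examples; see the metaset(1M) man page and the System Administration Guide for the exact procedure):

    # metaset -s hahost1 -a -m phys-hahost1
    # metaset -s hahost1 -a -m phys-hahost2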


  15. Configure the private network interfaces, using the common window.

    Select either Ethernet or Scalable Coherent Interface (SCI).

    What type of network interface will be used for this configuration?
     (ether|SCI) [SCI]?

    If you choose SCI, the following screen is displayed. Answer the questions using the information on your installation worksheet. Note that the node name field is case-sensitive; the node names specified here are checked against the /etc/nodename file by scinstall.

    What is the hostname of node 0 [node0]? phys-hahost1
    
     What is the hostname of node 1 [node1]? phys-hahost2
    ...

    Note -

    When nodes are connected through an SCI switch, the connection of the nodes to the switch port determines the order of the nodes in the cluster. The node number must correspond to the port number. For example, if a node named phys-hahost1 is connected to port 0, then phys-hahost1 must be node 0. In addition, each node must be connected to the same port on each switch. For example, if phys-hahost1 is connected to port 0 on switch 0, it also must be connected to port 0 on switch 1.


    If you choose Ethernet, the following screen is displayed. Answer the questions using information from the installation worksheet. Complete the network configuration for all nodes in the cluster.

    What is the hostname of node 0 [node0]? phys-hahost1
    
     What is phys-hahost1's first private network interface [hme0]? hme0
    
     What is phys-hahost1's second private network interface [hme1]? hme1
    
     You will now be prompted for Ethernet addresses of
     the host. There is only one Ethernet address for each host
     regardless of the number of interfaces a host has. You can get
     this information in one of several ways:
    
     1. use the 'banner' command at the ok prompt,
     2. use the 'ifconfig -a' command (need to be root),
     3. use ping, arp and grep commands. ('ping exxon; arp -a | grep exxon')
    
     Ethernet addresses are given as six hexadecimal bytes separated by colons.
     ie, 01:23:45:67:89:ab
    
     What is phys-hahost1's ethernet address? 01:23:45:67:89:ab
    
     What is the hostname of node 1 [node1]?
     ...
  16. Specify whether the cluster will support any data services and if so, whether to set up logical hosts.

    Will this cluster support any HA data services (yes/no) [yes]? yes
    Okay to set up the logical hosts for those HA services now (yes/no) [yes]? yes
    
  17. Set up primary public networks and subnets.

    Enter the name of the network controller for the primary network for each node in the cluster.

    What is the primary public network controller for "phys-hahost1"?  hme2
    What is the primary public network controller for "phys-hahost2"?  hme2
    
  18. Set up secondary public subnets.

    If the cluster nodes will provide data services to more than a single public network, answer yes to this question:

    Does the cluster serve any secondary public subnets (yes/no) [no]?  yes
    
  19. Name the secondary public subnets.

    Assign a name to each subnet. Note that these names are used only for convenience during configuration. They are not stored in the configuration database and need not match the network names listed in the networks(4) database.

    Please enter a unique name for each of these additional subnets:
     
             Subnet name (^D to finish):  sc-cluster-net1
            Subnet name (^D to finish):  sc-cluster-net2
            Subnet name (^D to finish):  ^D
    
     The list of secondary public subnets is:
     
             sc-cluster-net1
             sc-cluster-net2
    
     Is this list correct (yes/no) [yes]?
  20. Specify network controllers for the subnets.

    For each secondary subnet, specify the name of the network controller used on each cluster node.

    For subnet "sc-cluster-net1" ...
             What network controller is used for "phys-hahost1"?  qe0
            What network controller is used for "phys-hahost2"?  qe0
    
     For subnet "sc-cluster-net2" ...
             What network controller is used for "phys-hahost1"?  qe1
            What network controller is used for "phys-hahost2"?  qe1
    
  21. Initialize Network Adapter Failover (NAFO).

    You must initialize NAFO, and you must run pnmset(1M) later to configure the adapters. See the pnmset(1M) man page and the chapter on administering network interfaces in the Sun Cluster 2.2 System Administration Guide for more information about NAFO and PNM.

    Initialize NAFO on "phys-hahost1" with one ctlr per group (yes/no) [yes]?  y
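
    For example, after the cluster software is installed on all nodes, a minimal sketch of configuring and then checking the backup groups on each node (pnmset runs interactively; its prompts and output are omitted here):

    # pnmset
    # pnmstat -l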
    
  22. Set up logical hosts.

    Enter the list of logical hosts you want to add:
     
             Logical host (^D to finish):  hahost1
            Logical host (^D to finish):  hahost2
            Logical host (^D to finish):  ^D
    
     The list of logical hosts is:
     
             hahost1
            hahost2
    
     Is this list correct (yes/no) [yes]? y
    

    Note -

    You can add logical hosts or change the logical host configuration after the cluster is up by using scconf(1M) or the "Change" option to scinstall(1M). See the scinstall(1M) and scconf(1M) man pages, and Step 11 in the procedure "3.2.3 How to Configure the Cluster" for more information.



    Note -

    If you will be using the Sun Cluster HA for SAP data service, do not set up logical hosts now. Set them up with scconf(1M) after the cluster is up. See the scconf(1M) man page and Chapter 10, Installing and Configuring Sun Cluster HA for SAP, for more information.


  23. Assign default masters to logical hosts.

    You must specify the name of a physical host in the cluster as a default master for each logical host.

    What is the name of the default master for "hahost1"?  phys-hahost1
    

    Specify the host names of other physical hosts capable of mastering each logical host.

    Enter a list of other nodes capable of mastering "hahost1":
     
             Node name:  phys-hahost2
            Node name (^D to finish): ^D
    
     The list that you entered is:
     
             phys-hahost1
             phys-hahost2
     
     Is this list correct (yes/no) [yes]? yes
    
  24. Enable automatic failback.

    Answering yes enables the logical host to fail back automatically to its default master when the default master rejoins the cluster.

    Enable automatic failback for "hahost1" (yes/no) [no]?  yes
    
  25. Assign net names and disk group names.

    What is the net name for "hahost1" on subnet "sc-cluster-net1"? hahost1-pub1
    What is the net name for "hahost1" on subnet "sc-cluster-net2"? hahost1-pub2
    Disk group name for logical host "hahost1" [hahost1]?
     Is it okay to add logical host "hahost1" now (yes/no) [yes]? yes
    
     What is the name of the default master for "hahost2"?
     ...

    Continue until all logical hosts are set up.


    Note -

    To set up multiple disk groups on a single logical host, use the scconf(1M) command after you have used scinstall(1M) to configure and bring up the cluster. See the scconf(1M) man page for details.


  26. If your volume manager is SSVM or CVM and there are more than two nodes in the cluster, configure failure fencing.

    This screen appears only for clusters of more than two nodes that use SSVM or CVM.

    Configuring Failure Fencing
    
     What type of architecture does phys-hahost1 have (E10000|other) [other]?
    
     What is the name of the Terminal Concentrator connected to the serial
     port of phys-hahost1 [NO_NAME]? sc-tc
    
     Is 123.456.789.1 the correct IP address for this Terminal Concentrator
     (yes | no) [yes]?
    
     What is the password for root of the Terminal Concentrator [?]
     Please enter the password for root again [?]
    
     Which physical port on the Terminal Concentrator is phys-hahost1 connected to:
    
     What type of architecture does phys-hahost2 have (E10000|other) [other]?
     Which Terminal Concentrator is phys-hahost2 connected to:
    
     0) sc-tc				123.456.789.1
     1) Create A New Terminal Concentrator Entry
    
     Select a device:
    
     Which physical port on the Terminal Concentrator is phys-hahost2 connected to:
    
     What type of architecture does phys-hahost3 have (E10000|other) [other]?
     Which Terminal Concentrator is phys-hahost3 connected to:
    
     0) sc-tc				123.456.789.1
     1) Create A New Terminal Concentrator Entry
    
     Select a device:
    
     Which physical port on the Terminal Concentrator is phys-hahost3 connected to:
    
     Finished Configuring Failure Fencing

    Caution -

    The SSP password is used in failure fencing. Failure to correctly set the SSP password might cause unpredictable results in the event of a node failure. If you change the SSP password, you must change it on the cluster as well, using scconf(1M). Otherwise, failure fencing will be disabled because the SSP cannot connect to the failed node. See the scconf(1M) man page and the Sun Cluster 2.2 System Administration Guide for details about changing the SSP password.


  27. If your volume manager is SSVM, your cluster has more than two nodes, and you have a direct-attached device, select a nodelock port.

    The port you select must be on a terminal concentrator attached to a node in the cluster.

    Does the cluster have a disk storage device that is 
    connected to all nodes in the cluster [no]? yes
    
    Which unused physical port on the Terminal Concentrator is to be used for
    node locking:
  28. If your volume manager is SSVM or CVM, select quorum devices.

    You are prompted to select quorum devices. The screen display varies according to your cluster topology. Select a device from the list presented. This example shows a two-node cluster.

    Getting device information for reachable nodes in the cluster.
     This may take a few seconds to a few minutes...done
     Select quorum device for the following nodes:
     0 (phys-hahost1)
     and
     1 (phys-hahost2)
    
     1) SSA:000000779A16
     2) SSA:000000741430
     3) DISK:c0t1d0s2:01799413
     Quorum device: 1
    ...
     SSA with WWN 000000779A16 has been chosen as the quorum device.
     
     Finished Quorum Selection
  29. If your cluster has more than two nodes, select the Cluster Membership Monitor behavior.

    In the event that the cluster is partitioned into two or more subsets of
     nodes, the Cluster Membership Monitor may request input from the operator as
     to how it should proceed (abort or form a cluster) within each subset.  The
     Cluster Membership Monitor can be configured to make a policy-dependent
     automatic selection of a subset to become the next reconfiguration of
     the cluster.
     
     In case the cluster partitions into subsets, which subset should stay up?
         ask)    the system will always ask the operator.
         select) automatic selection of which subset should stay up.
    
     Please enter your choice (ask|select) [ask]: 

    If you choose "select," you are asked to choose between two policies:

    Please enter your choice (ask|select) [ask]: select
    
     You have a choice of two policies: 
    
         lowest --  The subset containing the node with the lowest node ID value
                    automatically becomes the new cluster.  All other subsets must be
                    manually aborted.
     
         highest -- The subset containing the node with the highest node ID value
                    automatically becomes the new cluster.  All other subsets must be
                    manually aborted.
    
     Select the selection policy for handling partitions (lowest|highest)
     [lowest]:

    The scinstall(1M) program now finishes installing the Sun Cluster 2.2 server packages.

    Installing ethernet Network Interface packages.
     
     Installing the following packages: SUNWsma
         Installing "SUNWsma" ... done
     
             Checking on installed package state.........
  30. Select your data services.

    Note that Sun Cluster HA for NFS and Informix-Online XPS are installed automatically with the Server package set.

    ==== Select Data Services Menu ==========================
     
     Please select which of the following data services are to 
     be installed onto this cluster.  Select singly, or in a 
     space separated list.
     Note: HA-NFS and Informix Parallel Server (XPS) are 
     installed automatically with the Server Framework.
     
     You may de-select a data service by selecting it a second time.
     
     Select DONE when finished selecting the configuration.
     
                 1) Sun Cluster HA for Oracle
                 2) Sun Cluster HA for Informix
                 3) Sun Cluster HA for Sybase
                 4) Sun Cluster HA for Netscape
                 5) Sun Cluster HA for Netscape LDAP
                 6) Sun Cluster HA for Lotus
                 7) Sun Cluster HA for Tivoli
                 8) Sun Cluster HA for SAP
                 9) Sun Cluster HA for DNS
                 10) Sun Cluster for Oracle Parallel Server
    
     INSTALL     11) No Data Services
                 12) DONE
     
     Choose a data service: 3
    
     What is the path to the CD-ROM image [/cdrom/suncluster_sc_2_2]:
    
     Install mode [manual automatic] [automatic]:  automatic
    ...
     Select DONE when finished selecting the configuration.
     ...

    Note -

    Do not install the Oracle Parallel Server (OPS) data service unless you are using Cluster Volume Manager. OPS will not run with Sun StorEdge Volume Manager or Solstice DiskSuite.


  31. Quit scinstall(1M).

    ============ Main Menu =================
     
     1) Install/Upgrade - Install or Upgrade Server Packages or Install Client
                          Packages.
     2) Remove  - Remove Server or Client Packages.
     3) Change  - Modify cluster or data service configuration
     4) Verify  - Verify installed package sets.
     5) List    - List installed package sets.
    
     6) Quit    - Quit this program.
     7) Help    - The help screen for this menu.
    
     Please choose one of the menu items: [6]:  6
     ...
    

    The scinstall(1M) program now verifies installation of the packages you selected.

    ==== Verify Package Installation ==========================
     Installation
         All of the install packages have been installed
     Framework
         All of the client packages have been installed
         All of the server packages have been installed
     Communications
         All of the SMA packages have been installed
     Data Services
     	None of the Sun Cluster HA for Oracle packages have been installed
     	None of the Sun Cluster HA for Informix packages have been installed
     	None of the Sun Cluster HA for Sybase packages have been installed
     	None of the Sun Cluster HA for Netscape packages have been installed
     	None of the Sun Cluster HA for Lotus packages have been installed
     	None of the Sun Cluster HA for Tivoli packages have been installed
     	None of the Sun Cluster HA for SAP packages have been installed
     	None of the Sun Cluster HA for Netscape LDAP packages have been installed
     	None of the Sun Cluster HA for Oracle Parallel Server packages have been
     installed
     #

    Proceed to the section "3.2.3 How to Configure the Cluster" to configure the cluster.