Sun Cluster 3.1 Software Installation Guide

How to Install Solaris and Sun Cluster Software (JumpStart)

This procedure describes how to set up and use the scinstall(1M) custom JumpStart installation method. This method installs both Solaris and Sun Cluster software on all cluster nodes in a single operation and establishes the cluster. You can also use this procedure to add new nodes to an existing cluster.

  1. Ensure that the hardware setup is complete and connections are verified before you install Solaris software.

    See the Sun Cluster 3.x Hardware Administration Manual and your server and storage device documentation for details on how to set up the hardware.

  2. Ensure that your cluster configuration planning is complete.

    See How to Prepare for Cluster Software Installation for requirements and guidelines.

  3. Have available the following information.

    • The Ethernet address of each cluster node

    • The following completed configuration planning worksheets:

      • “Local File Systems With Mirrored Root” in Sun Cluster 3.1 Release Notes or “Local File Systems with Non-Mirrored Root” in Sun Cluster 3.1 Release Notes

      • “Cluster and Node Names Worksheet” in Sun Cluster 3.1 Release Notes

      • “Cluster Interconnect Worksheet” in Sun Cluster 3.1 Release Notes

    See Planning the Solaris Operating Environment and Planning the Sun Cluster Environment for planning guidelines.

  4. Are you using a naming service?

    • If no, go to Step 5. You will set up the necessary hostname information in Step 31.

    • If yes, add address-to-name mappings for all public hostnames and logical addresses, as well as the IP address and hostname of the JumpStart server, to any naming services (such as NIS or DNS) used by clients for access to cluster services. See IP Addresses for planning guidelines. See your Solaris system administrator documentation for information about using Solaris naming services.

  5. Are you installing a new node to an existing cluster?

  6. As superuser, set up the JumpStart install server for Solaris operating environment installation.

    See the setup_install_server(1M) and add_install_client(1M) man pages and “Preparing Custom JumpStart Installations” in Solaris 8 Advanced Installation Guide or “Preparing Custom JumpStart Installations (Tasks)” in Solaris 9 Installation Guide for instructions on how to set up a JumpStart install server.

    When you set up the install server, ensure that the following requirements are met.

    • The install server is on the same subnet as the cluster nodes, but is not itself a cluster node.

    • The install server installs the release of the Solaris operating environment required by the Sun Cluster software.

    • A custom JumpStart directory exists for JumpStart installation of Sun Cluster. This jumpstart-dir directory must contain a copy of the check(1M) utility and be NFS exported for reading by the JumpStart install clients.

    • Each new cluster node is configured as a custom JumpStart install client that uses the custom JumpStart directory set up for Sun Cluster installation.
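    The install-client setup described above can be sketched as follows. This is a sketch only: the Tools path, the install server name (installserver), the JumpStart directory (/export/jumpstart), the node names, and the sun4u platform group are all placeholder assumptions to be replaced with your own values. The commands are recorded in a helper script rather than run directly, since add_install_client exists only on a Solaris install server.

```shell
# Record one add_install_client invocation per cluster node in a helper
# script to be run later on the JumpStart install server.  All names and
# paths below are placeholders.
cat > /tmp/setup_clients.sh <<'EOF'
#!/bin/sh
# Run on the JumpStart install server, from the Solaris Tools directory.
cd /export/install/sol8/Solaris_8/Tools
for node in node1 node2; do
    ./add_install_client -c installserver:/export/jumpstart "$node" sun4u
done
EOF
chmod +x /tmp/setup_clients.sh
```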

  7. Create a directory on the JumpStart install server to hold your copy of the Sun Cluster 3.1 CD-ROM, if one does not already exist.

    In the following example, the /export/suncluster directory is created for this purpose.


    # mkdir -m 755 /export/suncluster
    

  8. Copy the Sun Cluster CD-ROM to the JumpStart install server.

    1. Insert the Sun Cluster 3.1 CD-ROM into the CD-ROM drive on the JumpStart install server.

      If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_1 directory.

    2. Change to the /cdrom/suncluster_3_1/SunCluster_3.1/Sol_ver/Tools directory, where ver is 8 (for Solaris 8) or 9 (for Solaris 9).

      The following example uses the path to the Solaris 8 version of Sun Cluster software.


      # cd /cdrom/suncluster_3_1/SunCluster_3.1/Sol_8/Tools
      

    3. Copy the CD-ROM to a new directory on the JumpStart install server.

      The scinstall command creates the new installation directory as it copies the CD-ROM files. The installation directory name /export/suncluster/sc31 is used here as an example.


      # ./scinstall -a /export/suncluster/sc31
      

    4. Eject the CD-ROM.


      # cd /
      # eject cdrom
      

    5. Ensure that the Sun Cluster 3.1 CD-ROM image on the JumpStart install server is NFS exported for reading by the JumpStart install clients.

      See “Solaris NFS Environment” in System Administration Guide, Volume 3 or “Managing Network File Systems (Overview)” in System Administration Guide: Resource Management and Network Services and the share(1M) and dfstab(4) man pages for more information about automatic file sharing.
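      The NFS export can be made persistent with a dfstab(4) entry such as the one below. This is a sketch: the path is the example installation directory used in this procedure, and a scratch copy of the file is edited here; on the install server the file is /etc/dfs/dfstab and the new entry is applied with shareall.

```shell
# Sketch of the dfstab(4) entry that shares the CD-ROM image read-only.
# A scratch file stands in for /etc/dfs/dfstab on the install server.
DFSTAB=/tmp/dfstab.example
echo 'share -F nfs -o ro,anon=0 /export/suncluster/sc31' >> "$DFSTAB"
cat "$DFSTAB"
# On the install server, apply the new entry with:  shareall
```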

  9. Are you installing a new node to an existing cluster?

  10. Have you added the node to the cluster's authorized-node list?

  11. From the JumpStart install server, start the scinstall(1M) utility.

    The path /export/suncluster/sc31 is used here as an example of the installation directory you created.


    # cd /export/suncluster/sc31/SunCluster_3.1/Sol_ver/Tools
    # ./scinstall
    


    Note –

    In the CD-ROM path, replace ver with 8 (for Solaris 8) or 9 (for Solaris 9).


    Follow these guidelines to use the interactive scinstall utility.

    • Interactive scinstall enables you to type ahead. Therefore, do not press Return more than once if the next menu screen does not appear immediately.

    • Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu. If you press Control-D to abort the session after Sun Cluster software is installed, scinstall asks you whether you want it to de-install those packages.

    • Your session answers are stored as defaults for the next time you run this menu option. Default answers display between brackets ([ ]) at the end of the prompt.

  12. From the Main Menu, type 3 (Configure a cluster to be JumpStarted from this install server).

    This option is used to configure custom JumpStart finish scripts. JumpStart uses these finish scripts to install the Sun Cluster software.


     *** Main Menu ***
     
        Please select from one of the following (*) options:
     
            1) Establish a new cluster using this machine as the first node
            2) Add this machine as a node in an established cluster
          * 3) Configure a cluster to be JumpStarted from this install server
            4) Add support for new data services to this cluster node
            5) Print release information for this cluster node
     
          * ?) Help with menu options
          * q) Quit
     
        Option:  3
     
     *** Custom JumpStart ***
    ...
        Do you want to continue (yes/no) [yes]? 


    Note –

    If option 3 does not have an asterisk in front, the option is disabled because JumpStart setup is not complete or has an error. Exit the scinstall utility, repeat Step 6 through Step 8 to correct JumpStart setup, then restart the scinstall utility.


  13. Specify the JumpStart directory name.


     >>> Custom JumpStart Directory <<<
    ....
        What is your JumpStart directory name?  jumpstart-dir
    

  14. Specify the name of the cluster.


     >>> Cluster Name <<<
    ...
        What is the name of the cluster you want to establish?  clustername
    

  15. Specify the names of all cluster nodes.


     >>> Cluster Nodes <<<
    ...
        Please list the names of all cluster nodes planned for the initial
        cluster configuration. You must enter at least two nodes. List one
        node name per line. When finished, type Control-D:
     
        Node name:  node1
        Node name:  node2
        Node name (Ctrl-D to finish): <Control-D>
     
        This is the complete list of nodes:
    ... 
        Is it correct (yes/no) [yes]? 

  16. Specify whether to use data encryption standard (DES) authentication.

    By default, Sun Cluster software permits a node to connect to the cluster only if the node is physically connected to the private interconnect and if the node name was specified in Step 15. However, the node actually communicates with the sponsoring node over the public network, since the private interconnect is not yet fully configured. DES authentication provides an additional level of security at installation time by enabling the sponsoring node to more reliably authenticate nodes that attempt to contact it to update the cluster configuration.

    If you choose to use DES authentication for additional security, you must configure all necessary encryption keys before any node can join the cluster. See the keyserv(1M) and publickey(4) man pages for details.


     >>> Authenticating Requests to Add Nodes <<<
    ...
        Do you need to use DES authentication (yes/no) [no]? 

  17. Specify the private network address and netmask.


     >>> Network Address for the Cluster Transport <<<
    ...
        Is it okay to accept the default network address (yes/no) [yes]? 
        Is it okay to accept the default netmask (yes/no) [yes]? 


    Note –

    You cannot change the private network address after the cluster is successfully formed.


  18. Specify whether the cluster uses transport junctions.

    • If this is a two-node cluster, specify whether you intend to use transport junctions.


       >>> Point-to-Point Cables <<<
       ...
          Does this two-node cluster use transport junctions (yes/no) [yes]? 


      Tip –

      You can specify that the cluster uses transport junctions, regardless of whether the nodes are directly connected to each other. If you specify that the cluster uses transport junctions, you can more easily add new nodes to the cluster in the future.


    • If this cluster has three or more nodes, you must use transport junctions. Press Return to continue to the next screen.


       >>> Point-to-Point Cables <<<
       ...
          Since this is not a two-node cluster, you will be asked to configure
          two transport junctions.
          
      Hit ENTER to continue: 

  19. Does this cluster use transport junctions?

    • If yes, specify names for the transport junctions. You can use the default names switchN or create your own names.


       >>> Cluster Transport Junctions <<<
       ...
          What is the name of the first junction in the cluster [switch1]? 
          What is the name of the second junction in the cluster [switch2]? 

    • If no, go to Step 20.

  20. Specify the first cluster interconnect transport adapter of the first node.


     >>> Cluster Transport Adapters and Cables <<<
    ...
     For node "node1",
        What is the name of the first cluster transport adapter?  adapter
    

  21. Specify the connection endpoint of the first adapter.

    • If the cluster does not use transport junctions, specify the name of the adapter on the second node to which this adapter connects.


      ...
          Name of adapter on "node2" to which "adapter" is connected? adapter
      

    • If the cluster uses transport junctions, specify the name of the first transport junction and its port.


      ...
       For node "node1",
          Name of the junction to which "adapter" is connected? switch
      ...
       For node "node1",
          Use the default port name for the "adapter" connection (yes/no) [yes]? 


      Note –

      If your configuration uses SCI adapters, do not accept the default when you are prompted for the adapter connection (the port name). Instead, provide the port name (0, 1, 2, or 3) found on the Dolphin switch itself, to which the node is physically cabled. The following example shows the prompts and responses for declining the default port name and specifying the Dolphin switch port name 0.


      ...
          Use the default port name for the "adapter" connection 
      (yes/no) [yes]?  no
          What is the name of the port you want to use?  0
      


  22. Specify the second cluster interconnect transport adapter of the first node.


    ...
     For node "node1",
        What is the name of the second cluster transport adapter?  adapter
    

  23. Specify the connection endpoint of the second adapter.

    • If the cluster does not use transport junctions, specify the name of the adapter on the second node to which this adapter connects.


      ...
          Name of adapter on "node2" to which "adapter" is connected? adapter
      

    • If the cluster uses transport junctions, specify the name of the second transport junction and its port.


      ...
       For node "node1",
          Name of the junction to which "adapter" is connected? switch
      ...
       For node "node1",
          Use the default port name for the "adapter" connection (yes/no) [yes]? 


      Note –

      If your configuration uses SCI adapters, do not accept the default when you are prompted for the adapter connection (the port name). Instead, provide the port name (0, 1, 2, or 3) found on the Dolphin switch itself, to which the node is physically cabled. The following example shows the prompts and responses for declining the default port name and specifying the Dolphin switch port name 0.


      ...
          Use the default port name for the "adapter" connection 
      (yes/no) [yes]?  no
          What is the name of the port you want to use?  0
      


  24. Does this cluster use transport junctions?

  25. Specify the global devices file system name for each cluster node.


     >>> Global Devices File System <<<
    ...
        The default is to use /globaldevices.
     
     For node "node1",
        Is it okay to use this default (yes/no) [yes]? 
     
     For node "node2",
        Is it okay to use this default (yes/no) [yes]? 

  26. Accept or decline the generated scinstall commands.

    The scinstall command generated from your input is displayed for confirmation.


     >>> Confirmation <<<
     
        Your responses indicate the following options to scinstall:
    -----------------------------------------
     For node "node1",
          scinstall -c jumpstart-dir -h node1  \
    ...
        Are these the options you want to use (yes/no) [yes]? 
    -----------------------------------------
     For node "node2",
          scinstall -c jumpstart-dir -h node2  \
    ...
        Are these the options you want to use (yes/no) [yes]? 
    -----------------------------------------
        Do you want to continue with JumpStart set up (yes/no) [yes]? 

    If you do not accept the generated commands, the scinstall utility returns you to the Main Menu. From there you can rerun menu option 3 and provide different answers. Your previous answers display as the defaults.

  27. If necessary, make adjustments to the default class file, or profile, created by scinstall.

    The scinstall command creates the following autoscinstall.class default class file in the jumpstart-dir/autoscinstall.d/3.1 directory.


    install_type    initial_install
    system_type     standalone
    partitioning    explicit
    filesys         rootdisk.s0 free /
    filesys         rootdisk.s1 750  swap
    filesys         rootdisk.s3 512  /globaldevices
    filesys         rootdisk.s7 20
    cluster         SUNWCuser        add
    package         SUNWman          add

    The default class file installs the End User System Support software group (SUNWCuser) of Solaris software. If your configuration has additional Solaris software requirements, change the class file accordingly. See Solaris Software Group Considerations for more information.

    You can change the profile in one of the following ways.

    • Edit the autoscinstall.class file directly. These changes are applied to all nodes in all clusters that use this custom JumpStart directory.

    • Update the rules file to point to other profiles, then run the check utility to validate the rules file.

    As long as the Solaris operating environment install profile meets minimum Sun Cluster file system allocation requirements, there are no restrictions on other changes to the install profile. See System Disk Partitions for partitioning guidelines and requirements to support Sun Cluster 3.1 software. For more information about JumpStart profiles, see “Preparing Custom JumpStart Installations” in Solaris 8 Advanced Installation Guide or “Preparing Custom JumpStart Installations (Tasks)” in Solaris 9 Installation Guide.
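    Editing the class file can be sketched as below. This is a sketch only: jumpstart-dir is written as /tmp/jumpstart.example, and the appended entries are the RSMAPI package lines from Step 28, used here purely as an example of adding entries. After any edit, validate the rules file with the check utility in jumpstart-dir (Solaris only).

```shell
# Recreate the default class file in a scratch JumpStart directory, then
# append example package entries.  Paths are placeholders.
CLASS=/tmp/jumpstart.example/autoscinstall.d/3.1/autoscinstall.class
mkdir -p "$(dirname "$CLASS")"
cat > "$CLASS" <<'EOF'
install_type    initial_install
system_type     standalone
partitioning    explicit
filesys         rootdisk.s0 free /
filesys         rootdisk.s1 750  swap
filesys         rootdisk.s3 512  /globaldevices
filesys         rootdisk.s7 20
cluster         SUNWCuser        add
package         SUNWman          add
EOF
# Example edit: add the RSMAPI package entries from Step 28.
cat >> "$CLASS" <<'EOF'
package         SUNWrsm         add
package         SUNWrsmx        add
package         SUNWrsmo        add
package         SUNWrsmox       add
EOF
```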

  28. Do you intend to use the Remote Shared Memory Application Programming Interface (RSMAPI) or use SCI-PCI adapters for the interconnect transport?

    • If yes and you install the End User System Support software group, add the following entries to the default class file as described in Step 27.


      package         SUNWrsm         add
      package         SUNWrsmx        add
      package         SUNWrsmo        add
      package         SUNWrsmox       add

      In addition, you must create or modify a post-installation finish script at Step 33 to install the Sun Cluster packages to support the RSMAPI and SCI-PCI adapters.

      If you install a higher software group than End User System Support, the SUNWrsm* packages are installed with the Solaris software and do not need to be added to the class file.

    • If no, go to Step 29.

  29. Do you intend to use SunPlex Manager?

    • If yes and you install the End User System Support software group, add the following entries to the default class file as described in Step 27.


      package         SUNWapchr       add
      package         SUNWapchu       add

      If you install a higher software group than End User System Support, the SUNWapch* packages are installed with the Solaris software and do not need to be added to the class file.

    • If no, go to Step 30.

  30. Set up Solaris patch directories.

    1. Create jumpstart-dir/autoscinstall.d/nodes/node/patches directories on the JumpStart install server.

      Create one directory for each node in the cluster, where node is the name of a cluster node. Alternately, use this naming convention to create symbolic links to a shared patch directory.


      # mkdir jumpstart-dir/autoscinstall.d/nodes/node/patches
      

    2. Place copies of any Solaris patches into each of these directories.

      Also place copies of any hardware-related patches that must be installed after Solaris software is installed into each of these directories.
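      The patch-directory setup above, including the symbolic-link alternative, can be sketched as follows. The JumpStart directory path and node names are placeholders; here each per-node patches directory is a symbolic link to one shared patch directory.

```shell
# Create per-node patches directories as links to a shared patch directory.
# /tmp/jumpstart.example, node1, and node2 are placeholder names.
JSDIR=/tmp/jumpstart.example
mkdir -p "$JSDIR/patches.shared"
for node in node1 node2; do
    mkdir -p "$JSDIR/autoscinstall.d/nodes/$node"
    [ -e "$JSDIR/autoscinstall.d/nodes/$node/patches" ] ||
        ln -s "$JSDIR/patches.shared" \
              "$JSDIR/autoscinstall.d/nodes/$node/patches"
done
```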

  31. Set up files to contain the necessary hostname information locally on each node.

    1. On the JumpStart install server, create files named jumpstart-dir/autoscinstall.d/nodes/node/archive/etc/inet/hosts.

      Create one file for each node, where node is the name of a cluster node. Alternately, use this naming convention to create symbolic links to a shared hosts file.

    2. Add the following entries into each file.

      • IP address and hostname of the NFS server that holds a copy of the Sun Cluster CD-ROM image. This could be the JumpStart install server or another machine.

      • IP address and hostname of each node in the cluster.
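    Generating the per-node hosts files can be sketched as follows. All IP addresses are placeholder examples, installserver stands for the machine that NFS-shares the CD-ROM image, and /tmp/jumpstart.example stands for jumpstart-dir.

```shell
# Create one archive/etc/inet/hosts file per node with the entries listed
# above.  Every address and hostname here is a placeholder.
JSDIR=/tmp/jumpstart.example
for node in node1 node2; do
    dir="$JSDIR/autoscinstall.d/nodes/$node/archive/etc/inet"
    mkdir -p "$dir"
    cat > "$dir/hosts" <<'EOF'
192.168.1.10    installserver
192.168.1.1     node1
192.168.1.2     node2
EOF
done
```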

  32. Do you intend to use the Remote Shared Memory Application Programming Interface (RSMAPI) or use SCI-PCI adapters for the interconnect transport?

    • If yes, follow instructions in Step 33 to set up a post-installation finish script to install the following additional packages. Install the appropriate packages from the /cdrom/suncluster_3_1/SunCluster_3.1/Sol_ver/Packages directory of the Sun Cluster 3.1 CD-ROM in the order given in the following table.


      Note –

      In the CD-ROM path, replace ver with 8 (for Solaris 8) or 9 (for Solaris 9).


      Table 2–9 Sun Cluster 3.1 Packages to Support the RSMAPI and SCI-PCI Adapters

      Feature             Additional Sun Cluster 3.1 Packages to Install
      RSMAPI              SUNWscrif
      SCI-PCI adapters    SUNWsci SUNWscid SUNWscidx

    • If no, go to Step 33 if you intend to add your own post-installation finish script. Otherwise, skip to Step 34.

  33. (Optional) Add your own post-installation finish script.


    Note –

    If you intend to use the Remote Shared Memory Application Programming Interface (RSMAPI) or use SCI-PCI adapters for the interconnect transport, you must modify the finish script to install the Sun Cluster SUNWscrif software package. This package is not automatically installed by scinstall.


    You can add your own finish script, which is run after the standard finish script installed by the scinstall command. See “Preparing Custom JumpStart Installations” in Solaris 8 Advanced Installation Guide or “Preparing Custom JumpStart Installations (Tasks)” in Solaris 9 Installation Guide for information about creating a JumpStart finish script.

    1. Name your finish script finish.

    2. Copy your finish script to the jumpstart-dir/autoscinstall.d/nodes/node directory, one directory for each node in the cluster.

      Alternately, use this naming convention to create symbolic links to a shared finish script.
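      A finish script and the shared-copy link layout can be sketched as below. This is a sketch, not the definitive finish script: the JumpStart directory, node names, and CD-ROM image path are placeholders, and the pkgadd line (which runs only on the install client during JumpStart, against the /a install root) is shown as one example of installing the SUNWscrif package from Step 32.

```shell
# Create one shared finish script and link it in for each node.
# /tmp/jumpstart.example and the node names are placeholders.
JSDIR=/tmp/jumpstart.example
mkdir -p "$JSDIR"
cat > "$JSDIR/finish.shared" <<'EOF'
#!/bin/sh
# Runs on each install client after the standard scinstall finish script.
# /a is the client's newly installed root; the Packages directory is the
# NFS-mounted CD-ROM image (placeholder path).
pkgadd -R /a -d /cdrom/suncluster_3_1/SunCluster_3.1/Sol_8/Packages SUNWscrif
EOF
chmod +x "$JSDIR/finish.shared"
for node in node1 node2; do
    mkdir -p "$JSDIR/autoscinstall.d/nodes/$node"
    [ -e "$JSDIR/autoscinstall.d/nodes/$node/finish" ] ||
        ln -s "$JSDIR/finish.shared" \
              "$JSDIR/autoscinstall.d/nodes/$node/finish"
done
```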

  34. If you use an administrative console, display a console screen for each node in the cluster.

    If cconsole(1M) is installed and configured on your administrative console, you can use it to display the individual console screens. Otherwise, you must connect to the consoles of each node individually.

  35. From the ok PROM prompt on the console of each node, type the boot net - install command to begin the network JumpStart installation of each node.


    ok boot net - install
    


    Note –

    The dash (-) in the command must be surrounded by a space on each side.


    Sun Cluster installation output is logged in the /var/cluster/logs/install/scinstall.log.pid file, where pid is the process ID number of the scinstall instance.


    Note –

    Unless you have installed your own /etc/inet/ntp.conf file, the scinstall command installs a default ntp.conf file for you. Because the default file is shipped with references to eight nodes, the xntpd(1M) daemon might issue error messages regarding some of these references at boot time. You can safely ignore these messages. See How to Configure Network Time Protocol (NTP) for information on how to suppress these messages under otherwise normal cluster conditions.


    When the installation is successfully completed, each node is fully installed as a new cluster node.


    Note –

    The Solaris interface groups feature is disabled by default during Solaris software installation. Interface groups are not supported in a Sun Cluster configuration and should not be reenabled. See the ifconfig(1M) man page for more information about Solaris interface groups.


  36. Are you installing a new node to an existing cluster?

    • If no, go to Step 37.

    • If yes, create mount points on the new node for all existing cluster file systems.

    1. From another, active node of the cluster, display the names of all cluster file systems.


      % mount | grep global | egrep -v node@ | awk '{print $1}'
      

    2. On the node you added to the cluster, create a mount point for each cluster file system in the cluster.


      % mkdir -p mountpoint
      

      For example, if a file system name returned by the mount command is /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the node being added to the cluster.
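      The mount-point creation can be sketched as a loop over the file system names returned by the mount command. The file system list and the root prefix used here are placeholders; on the new node itself you would run mkdir -p against / directly.

```shell
# Create a mount point for each cluster file system reported by the
# active node.  The list and the /tmp root prefix are placeholders.
ROOT=/tmp/newnode.example
for fs in /global/dg-schost-1 /global/dg-schost-2; do
    mkdir -p "$ROOT$fs"
done
```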


      Note –

      The mount points become active after you reboot the cluster in Step 39.


    3. Is VERITAS Volume Manager (VxVM) installed on any nodes that are already in the cluster?

      • If yes, ensure that the same vxio number is used on the VxVM-installed nodes and that the vxio number is available for use on each of the nodes that do not have VxVM installed.


        # grep vxio /etc/name_to_major
        vxio NNN
        

        If the vxio number is already in use on a node that does not have VxVM installed, free the number on that node by changing the /etc/name_to_major entry to use a different number.

      • If no, go to Step 37.
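      The vxio check and renumbering from the step above can be sketched as follows, run against a scratch copy of /etc/name_to_major. The major numbers used here (210 and 211) are examples only; use the number actually assigned on your VxVM-installed nodes.

```shell
# Inspect and change the vxio major number in a scratch copy of
# /etc/name_to_major.  All numbers here are placeholder examples.
NTM=/tmp/name_to_major.example
printf 'clone 11\nvxio 210\n' > "$NTM"
# Read the major number currently assigned to vxio:
vxio_num=$(awk '$1 == "vxio" {print $2}' "$NTM")
echo "vxio major number: $vxio_num"
# If that number collides on a node without VxVM, reassign it, e.g. to 211:
sed 's/^vxio 210$/vxio 211/' "$NTM" > "$NTM.new" && mv "$NTM.new" "$NTM"
```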

  37. Install any Sun Cluster software patches.

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

  38. Do you intend to use dynamic reconfiguration on Sun Enterprise 10000 servers?

    • If yes, on each node add the following entry to the /etc/system file.


      set kernel_cage_enable=1

      This entry becomes effective after the next system reboot. See the Sun Cluster 3.1 System Administration Guide for procedures to perform dynamic reconfiguration tasks in a Sun Cluster configuration. See your server documentation for more information about dynamic reconfiguration.

    • If no, go to Step 39.

  39. Did you add a new node to an existing cluster, or install Sun Cluster software patches that require you to reboot the entire cluster, or both?

    • If no, reboot the individual node if any patches you installed require a node reboot or if any other changes you made require a reboot to become active.

    • If yes, perform a reconfiguration reboot as instructed in the following steps.

    1. From one node, shut down the cluster.


      # scshutdown
      


      Note –

      Do not reboot the first-installed node of the cluster until after the cluster is shut down.


    2. Reboot each node in the cluster.


      ok boot
      

    Until cluster install mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established cluster that is still in install mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum and the entire cluster shuts down. Cluster nodes remain in install mode until the first time you run the scsetup(1M) command, during the procedure How to Perform Post-Installation Setup.

  40. Set up the name service look-up order.

    Go to How to Configure the Name Service Switch.