Sun Cluster Software Installation Guide for Solaris OS

Chapter 2 Installing and Configuring Sun Cluster Software

This chapter provides procedures for how to install and configure your cluster. You can also use these procedures to add a new node to an existing cluster. This chapter also provides procedures to uninstall certain cluster software.

The following information and procedures are in this chapter.

Task Map: Installing the Software

The following task map lists the tasks that you perform to install software on multinode or single-node clusters. Complete the procedures in the order that is indicated.

Table 2–1 Task Map: Installing the Software

Task 

Instructions 

1. Plan the layout of your cluster configuration and prepare to install software. 

How to Prepare for Cluster Software Installation

2. (Optional) Install Cluster Control Panel (CCP) software on the administrative console.

How to Install Cluster Control Panel Software on an Administrative Console

3. Install the Solaris OS and Sun Cluster software. Optionally, install Sun StorEdge QFS software. Choose one of the following methods: 

  • Method 1 – (New clusters only) Install Solaris software. Next, install Sun Cluster software on all nodes. Then use the scinstall utility to establish the cluster.

  1. How to Install Solaris Software

  2. How to Install Sun Cluster Software Packages

  3. How to Configure Sun Cluster Software on All Nodes (scinstall)

  • Method 2 – (New clusters only) Install Solaris software. Next, install SunPlex™ Manager software. Then use SunPlex Installer to install Sun Cluster software.

  1. How to Install Solaris Software

  2. Using SunPlex Installer to Install Sun Cluster Software

  • Method 3 – (New clusters or added nodes) Install Solaris software and Sun Cluster software in one operation by using the scinstall utility's custom JumpStart option.

How to Install Solaris and Sun Cluster Software (JumpStart)

  • Method 4 – (New single-node clusters) Install Solaris software. Then install Sun Cluster software by using the scinstall -iFo command.

  1. How to Install Solaris Software

  2. How to Install Sun Cluster Software on a Single-Node Cluster

  • Method 5 – (Added nodes only) Install Solaris software on the new nodes. Next, install Sun Cluster software on the new nodes. Then configure Sun Cluster software on the new nodes by using the scinstall utility.

  1. How to Install Solaris Software

  2. How to Install Sun Cluster Software Packages

  3. How to Configure Sun Cluster Software on Additional Cluster Nodes (scinstall)

4. (Optional) SPARC: Install VERITAS File System software.

SPARC: How to Install VERITAS File System Software

5. Configure the name-service look-up order. 

How to Configure the Name-Service Switch

6. Set up directory paths. 

How to Set Up the Root Environment

7. Install data-service software packages. 

How to Install Data-Service Software Packages (installer) or How to Install Data-Service Software Packages (scinstall)

8. Assign quorum votes and remove the cluster from installation mode, if this operation was not already performed during Sun Cluster installation. 

How to Perform Postinstallation Setup and Configure Quorum Devices

9. Validate the quorum configuration. 

How to Verify the Quorum Configuration and Installation Mode

10. Install and configure volume-manager software: 

  • Install and configure Solstice DiskSuite or Solaris Volume Manager software.

  • SPARC: Install and configure VERITAS Volume Manager software.

11. Configure the cluster. 

Configuring the Cluster

Installing the Software

This section provides information and procedures to install software on the cluster nodes.

How to Prepare for Cluster Software Installation

Before you begin to install software, make the following preparations.

  1. Read the following manuals for information that can help you plan your cluster configuration and prepare your installation strategy.

  2. Have available all related documentation, including third-party documents.

    The following is a partial list of products whose documentation you might need to reference during cluster installation:

    • Solaris OS

    • Solstice DiskSuite or Solaris Volume Manager software

    • Sun StorEdge QFS software

    • SPARC: VERITAS Volume Manager

    • SPARC: Sun Management Center

    • Third-party applications

  3. Plan your cluster configuration.


    Caution –

    Plan your cluster installation completely. Identify requirements for all data services and third-party products before you begin Solaris and Sun Cluster software installation. Failure to do so might result in installation errors that require that you completely reinstall the Solaris and Sun Cluster software.

    For example, the Oracle Real Application Clusters Guard option of Oracle Real Application Clusters has special requirements for the hostnames that you use in the cluster. Another example with special requirements is Sun Cluster HA for SAP. You must accommodate these requirements before you install Sun Cluster software because you cannot change hostnames after you install Sun Cluster software.

    Also note that neither Oracle Real Application Clusters nor Sun Cluster HA for SAP is supported for use in x86 based clusters.


  4. Get all necessary patches for your cluster configuration.

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

    1. Copy the patches that are required for Sun Cluster into a single directory.

      The directory must be on a file system that is accessible by all nodes. The default patch directory is /var/cluster/patches/.


      Tip –

      After you install Solaris software on a node, you can view the /etc/release file to see the exact version of Solaris software that is installed.


    2. (Optional) If you are using SunPlex Installer, you can create a patch-list file.

      If you specify a patch-list file, SunPlex Installer installs only the patches that are listed in the patch-list file. For information about creating a patch-list file, refer to the patchadd(1M) man page.
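
      For example, a patch-list file is a plain text file that lists one patch ID per line. The following sketch, which uses hypothetical patch IDs, creates a patchlist file in the default patch directory:


      # cd /var/cluster/patches/
      # vi patchlist
      111111-01
      222222-02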

    3. Record the path to the patch directory.

  5. (Optional) Use Cluster Control Panel software to connect from an administrative console to your cluster nodes.

    Go to How to Install Cluster Control Panel Software on an Administrative Console.

  6. Choose the Solaris installation procedure to use.

How to Install Cluster Control Panel Software on an Administrative Console


Note –

You are not required to use an administrative console. If you do not use an administrative console, perform administrative tasks from one designated node in the cluster.


This procedure describes how to install the Cluster Control Panel (CCP) software on an administrative console. The CCP provides a launchpad for the cconsole(1M), ctelnet(1M), and crlogin(1M) tools. Each of these tools provides a multiple-window connection to a set of nodes, as well as a common window. You can use the common window to send input to all nodes at one time.

You can use any desktop machine that runs the Solaris 8 or Solaris 9 OS as an administrative console. In addition, you can use the administrative console as a documentation server. If you are using Sun Cluster on a SPARC based system, you can use the administrative console as a Sun Management Center console or server as well. See Sun Management Center documentation for information on how to install Sun Management Center software. See the Sun Cluster Release Notes for Solaris OS for additional information on how to install Sun Cluster documentation.

  1. Become superuser on the administrative console.

  2. Ensure that a supported version of the Solaris OS and any Solaris patches are installed on the administrative console.

    All platforms require at least the End User Solaris Software Group.

  3. Insert the Sun Cluster 3.1 9/04 CD-ROM into the CD-ROM drive of the administrative console.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0/ directory.

  4. Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/ directory, where arch is sparc or x86, and where ver is 8 (for Solaris 8) or 9 (for Solaris 9).


    # cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
    

  5. Start the installer program.


    # ./installer
    

  6. Choose Custom installation.

    The utility displays a list of software packages.

  7. If necessary, deselect any packages that you do not want to install on the administrative console.

  8. Choose the menu item, Sun Cluster cconsole package.

  9. (Optional) Choose the menu item, Sun Cluster manpage package.

  10. (Optional) Choose the menu item, Sun Cluster documentation packages.

  11. Follow onscreen instructions to continue package installation.

    After installation is finished, you can view any available installation log.

  12. Install the SUNWccon package.


    # pkgadd -d . SUNWccon
    

  13. (Optional) Install the SUNWscman package.


    # pkgadd -d . SUNWscman
    

    When you install the SUNWscman package on the administrative console, you can view Sun Cluster man pages from the administrative console before you install Sun Cluster software on the cluster nodes.

  14. (Optional) Install the Sun Cluster documentation packages.


    Note –

    If you do not install the documentation on your administrative console, you can still view HTML or PDF documentation directly from the CD-ROM. Use a web browser to view the index.html file at the top level of the CD-ROM.


    1. Start the pkgadd utility in interactive mode.


      # pkgadd -d .
      

    2. Choose the Documentation Navigation for Solaris 9 package, if it has not already been installed on the administrative console.

    3. Choose the Sun Cluster documentation packages to install.

      The following documentation collections are available in both HTML and PDF format:

      • Sun Cluster 3.1 9/04 Software Collection for Solaris OS (SPARC Platform Edition)

      • Sun Cluster 3.1 9/04 Software Collection for Solaris OS (x86 Platform Edition)

      • Sun Cluster 3.x Hardware Collection for Solaris OS (SPARC Platform Edition)

      • Sun Cluster 3.x Hardware Collection for Solaris OS (x86 Platform Edition)

      • Sun Cluster 3.1 9/04 Reference Collection for Solaris OS

    4. Follow onscreen instructions to continue package installation.

  15. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


    # eject cdrom
    

  16. Create an /etc/clusters file on the administrative console.

    Add your cluster name and the physical node name of each cluster node to the file.


    # vi /etc/clusters
    clustername node1 node2
    

    See the /opt/SUNWcluster/bin/clusters(4) man page for details.

  17. Create an /etc/serialports file.

    Add an entry for each node in the cluster to the file. Specify the physical node name, the hostname of the console-access device, and the port number. Examples of a console-access device are a terminal concentrator (TC), a System Service Processor (SSP), and a Sun Fire system controller.


    # vi /etc/serialports
    node1 ca-dev-hostname port
    node2 ca-dev-hostname port
    
    node1, node2

    Physical names of the cluster nodes

    ca-dev-hostname

    Hostname of the console-access device

    port

    Serial port number

    Note these special instructions to create an /etc/serialports file:

    • For a Sun Fire 15000 system controller, use telnet(1) port number 23 for the serial port number of each entry.

    • For all other console-access devices, use the telnet serial port number, not the physical port number. To determine the telnet serial port number, add 5000 to the physical port number. For example, if a physical port number is 6, the telnet serial port number is 5006.

    • For Sun Enterprise 10000 servers, also see the /opt/SUNWcluster/bin/serialports(4) man page for details and special considerations.

  18. (Optional) For convenience, set the directory paths on the administrative console.

    • Add the /opt/SUNWcluster/bin/ directory to the PATH.

    • Add the /opt/SUNWcluster/man/ directory to the MANPATH.

    • If you installed the SUNWscman package, also add the /usr/cluster/man/ directory to the MANPATH.
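
    For example, if you use the Bourne or Korn shell, you might add entries similar to the following to your .profile file on the administrative console. This is only a sketch; the /usr/cluster/man/ entry applies only if you installed the SUNWscman package.


    PATH=$PATH:/opt/SUNWcluster/bin; export PATH
    MANPATH=$MANPATH:/opt/SUNWcluster/man:/usr/cluster/man; export MANPATH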

  19. Start the CCP utility.


    # /opt/SUNWcluster/bin/ccp &
    

    Click the cconsole, crlogin, or ctelnet button in the CCP window to launch that tool. Alternately, you can start any of these tools directly. For example, to start ctelnet, type the following command:


    # /opt/SUNWcluster/bin/ctelnet &
    

    See the procedure “How to Remotely Log In to Sun Cluster” in “Beginning to Administer the Cluster” in Sun Cluster System Administration Guide for Solaris OS for additional information about how to use the CCP utility. Also see the ccp(1M) man page.

  20. Determine whether the Solaris OS is already installed on each cluster node to meet Sun Cluster software requirements.
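
    For example, as noted in the earlier Tip, you can view the /etc/release file on a node to see the exact version of Solaris software that is installed:


    # cat /etc/release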

How to Install Solaris Software

If you do not use the scinstall(1M) custom JumpStart installation method to install software, perform this task. Follow these procedures to install the Solaris OS on each node in the cluster.


Tip –

To speed installation, you can install the Solaris OS on each node at the same time.


If your nodes are already installed with the Solaris OS but do not meet Sun Cluster installation requirements, you might need to reinstall the Solaris software. Follow the steps in this procedure to ensure subsequent successful installation of Sun Cluster software. See Planning the Solaris OS for information about required root-disk partitioning and other Sun Cluster installation requirements.

  1. Ensure that the hardware setup is complete and that connections are verified before you install Solaris software.

    See the Sun Cluster Hardware Administration Collection and your server and storage device documentation for details.

  2. Ensure that your cluster configuration planning is complete.

    See How to Prepare for Cluster Software Installation for requirements and guidelines.

  3. Have available your completed Local File System Layout Worksheet.

  4. If you use a naming service, add address-to-name mappings for all public hostnames and logical addresses to any naming services that clients use for access to cluster services. You set up local hostname information in Step 11.

    See IP Addresses for planning guidelines. See your Solaris system-administrator documentation for information about using Solaris naming services.

  5. If you are using a cluster administrative console, display a console screen for each node in the cluster.

    • If Cluster Control Panel (CCP) software is installed and configured on your administrative console, you can use the cconsole(1M) utility to display the individual console screens. The cconsole utility also opens a master window from which you can send your input to all individual console windows at the same time. Use the following command to start cconsole:


      # /opt/SUNWcluster/bin/cconsole clustername &
      

    • If you do not use the cconsole utility, connect to the consoles of each node individually.

  6. Install the Solaris OS as instructed in your Solaris installation documentation.


    Note –

    You must install all nodes in a cluster with the same version of the Solaris OS.


    You can use any method that is normally used to install Solaris software. During Solaris software installation, perform the following steps:

    1. Install at least the End User Solaris Software Group.

      See Solaris Software Group Considerations for information about additional Solaris software requirements.

    2. Choose Manual Layout to set up the file systems.

      • Create a file system of at least 512 Mbytes for use by the global-device subsystem. If you intend to use SunPlex Installer to install Sun Cluster software, you must create the file system with a mount-point name of /globaldevices. The /globaldevices mount-point name is the default that is used by scinstall.


        Note –

        Sun Cluster software requires a global-devices file system for installation to succeed.


      • Specify that slice 7 is at least 20 Mbytes in size. If you intend to use SunPlex Installer to install Solstice DiskSuite software (Solaris 8) or configure Solaris Volume Manager software (Solaris 9), also mount this file system on /sds.

      • Create any other file-system partitions that you need, as described in System Disk Partitions.


        Note –

        If you intend to install Sun Cluster HA for NFS or Sun Cluster HA for Apache, you must also install Solstice DiskSuite software (Solaris 8) or configure Solaris Volume Manager software (Solaris 9).


    3. For ease of administration, set the same root password on each node.

  7. If you are adding a node to an existing cluster, prepare the cluster to accept the new node.

    1. On any active cluster member, start the scsetup(1M) utility.


      # scsetup
      

      The Main Menu is displayed.

    2. Choose the menu item, New nodes.

    3. Choose the menu item, Specify the name of a machine which may add itself.

    4. Follow the prompts to add the node's name to the list of recognized machines.

      The scsetup utility prints the message Command completed successfully if the task completes without error.

    5. Quit the scsetup utility.

    6. From the active cluster node, display the names of all cluster file systems.


      % mount | grep global | egrep -v node@ | awk '{print $1}'
      

    7. On the new node, create a mount point for each cluster file system in the cluster.


      % mkdir -p mountpoint
      

      For example, if the mount command returned the file-system name /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the new node you are adding to the cluster.

    8. Determine whether VERITAS Volume Manager (VxVM) is installed on any nodes that are already in the cluster.

    9. If VxVM is installed on any existing cluster nodes, ensure that the same vxio number is used on the VxVM-installed nodes. Also ensure that the vxio number is available for use on each of the nodes that do not have VxVM installed.


      # grep vxio /etc/name_to_major
      vxio NNN
      

      If the vxio number is already in use on a node that does not have VxVM installed, free the number on that node. Change the /etc/name_to_major entry to use a different number.
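
      For example, the following sketch checks whether a vxio major number of 210 (a hypothetical value) is already assigned to another driver on a node that does not have VxVM installed. If the command prints an entry for a driver other than vxio, change that entry in /etc/name_to_major to use a different, unused number.


      # awk '$2 == 210' /etc/name_to_major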

  8. If you installed the End User Solaris Software Group, use the pkgadd command to manually install any additional Solaris software packages that you might need.

    The following Solaris packages are required to support some Sun Cluster functionality.

    Feature 

    Required Solaris Software Packages (shown in installation order) 

    RSMAPI, RSMRDT drivers, or SCI-PCI adapters (SPARC based clusters only) 

    SUNWrsm SUNWrsmx SUNWrsmo SUNWrsmox

    SunPlex Manager 

    SUNWapchr SUNWapchu

  9. Install any hardware-related patches. Also download any needed firmware that is contained in the hardware patches.

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

  10. x86: Set the default boot file to kadb.


    # eeprom boot-file=kadb
    

    Setting this value enables you to reboot the node if you are unable to access a login prompt.

  11. Update the /etc/inet/hosts file on each node with all public hostnames and logical addresses for the cluster.

    Perform this step regardless of whether you are using a naming service.
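
    For example, entries in the /etc/inet/hosts file might look similar to the following. The IP addresses and the logical hostname lh-schost-1 in this sketch are hypothetical.


    # vi /etc/inet/hosts
    192.168.100.11   phys-schost-1
    192.168.100.12   phys-schost-2
    192.168.100.21   lh-schost-1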

  12. (Optional) On Sun Enterprise 10000 servers, configure the /etc/system file to use dynamic reconfiguration.

    Add the following entry to the /etc/system file on each node of the cluster:


    set kernel_cage_enable=1

    This entry becomes effective after the next system reboot.

    See the Sun Cluster System Administration Guide for Solaris OS for procedures to perform dynamic reconfiguration tasks in a Sun Cluster configuration. See your server documentation for more information about dynamic reconfiguration.

  13. Install Sun Cluster software packages.

    Go to How to Install Sun Cluster Software Packages.

How to Install Sun Cluster Software Packages


Note –

If you enable remote shell (rsh(1M)) or secure shell (ssh(1)) access for superuser to all cluster nodes, you do not have to perform this procedure. Instead, go to How to Configure Sun Cluster Software on All Nodes (scinstall). In that procedure, the scinstall(1M) utility automatically installs Sun Cluster framework software on all cluster nodes.

However, if you need to install any Sun Cluster software packages in addition to the framework software, install those packages from the Sun Cluster 3.1 9/04 CD-ROM. Do this task before you start the scinstall utility. You can install those additional Sun Cluster packages by using the pkgadd(1M) command or by using the installer(1M) program as described in the following procedure.


Perform this procedure on each node in the cluster to install the Sun Cluster software packages.

  1. Ensure that the Solaris OS is installed to support Sun Cluster software.

    If Solaris software is already installed on the node, you must ensure that the Solaris installation meets the requirements for Sun Cluster software and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.

  2. Become superuser on the cluster node to install.

  3. Install Sun Web Console packages.

    These packages are required by Sun Cluster software, even if you do not use Sun Web Console.

    1. Insert the Sun Cluster 3.1 9/04 CD-ROM in the CD-ROM drive.

    2. Change to the /cdrom/cdrom0/Solaris_arch/Product/sun_web_console/2.1/ directory, where arch is sparc or x86.

    3. Run the setup command.


      # ./setup
      

      The setup command installs all packages to support Sun Web Console.

  4. (Optional) To use the installer program with a GUI, ensure that the DISPLAY environment variable is set.

  5. Change to the root directory of the CD-ROM, where the installer program resides.


    # cd /cdrom/cdrom0/
    

  6. Start the installer program.


    # ./installer
    

  7. Choose Typical or Custom installation.

    • Choose Typical to install the default set of Sun Cluster framework software packages.

    • Choose Custom to specify additional Sun Cluster software packages to install, such as packages that support other languages, the RSMAPI, and SCI-PCI adapters.

  8. Follow instructions on the screen to install Sun Cluster software on the node.

    After installation is finished, you can view any available installation log.

  9. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


    # eject cdrom
    

  10. Configure Sun Cluster software on the cluster nodes.

How to Configure Sun Cluster Software on All Nodes (scinstall)

Perform this procedure from one node of the cluster to configure Sun Cluster software on all nodes of the cluster.

  1. Ensure that the Solaris OS is installed to support Sun Cluster software.

    If Solaris software is already installed on the node, you must ensure that the Solaris installation meets the requirements for Sun Cluster software and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.

  2. If you disabled remote configuration during Sun Cluster software installation, re-enable remote configuration.

    Enable remote shell (rsh(1M)) or secure shell (ssh(1)) access for superuser to all cluster nodes.
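
    For example, one way to enable rsh access for superuser is to add the name of the node from which you run scinstall to the /.rhosts file on each cluster node. This is only a sketch; your site security policies might instead require ssh or a different rsh configuration.


    # vi /.rhosts
    phys-schost-1 root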

  3. (Optional) To use the scinstall(1M) utility to install patches, download patches to a patch directory.

    If you use Typical mode to install the cluster, use a directory named either /var/cluster/patches/ or /var/patches/ to contain the patches to install. In Typical mode, the scinstall command checks both those directories for patches.

    • If neither of those directories exists, no patches are added.

    • If both directories exist, then only the patches in the /var/cluster/patches/ directory are added.

    If you use Custom mode to install the cluster, you specify the path to the patch directory, so there is no requirement to use the patch directories that scinstall checks for in Typical mode.

    You can include a patch-list file in the patch directory. The default patch-list file name is patchlist. For information about creating a patch-list file, refer to the patchadd(1M) man page.
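
    For example, the following sketch creates the default patch directory that Typical mode checks and copies downloaded patches into it. The source directory /var/tmp/downloaded-patches is hypothetical.


    # mkdir -p /var/cluster/patches/
    # cp -r /var/tmp/downloaded-patches/* /var/cluster/patches/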

  4. Complete one of the following configuration worksheets:

    See Planning the Sun Cluster Environment for planning guidelines.

    Table 2–2 Interactive scinstall Configuration Worksheet (Typical)

    Component 

    Description/Example 

    Enter Answers Here 

    Cluster Name 

    What is the name of the cluster that you want to establish? 

     

    Cluster Nodes 

    What are the names of the other cluster nodes planned for the initial cluster configuration? 

     

    Cluster-Transport Adapters and Cables 

    What are the names of the two cluster-transport adapters that attach the node to the private interconnect? 

    First

      

    Second

      

    Check 

    Do you want to interrupt installation for sccheck errors? (sccheck verifies that preconfiguration requirements are met)

    Yes  |  No 

    For the Typical configuration of Sun Cluster software, the scinstall utility automatically specifies the following defaults.

    Component 

    Default Value 

    Private-network address 

    172.16.0.0

    Private-network netmask 

    255.255.0.0

    Cluster-transport junctions 

    switch1 and switch2

    Global-devices file-system name 

    /globaldevices

    Installation security (DES) 

    Limited 

    Solaris and Sun Cluster patch directory 

    /var/cluster/patches/

    Table 2–3 Interactive scinstall Configuration Worksheet (Custom)

    Component 

    Description/Example 

    Enter Answers Here 

    Cluster Name 

    What is the name of the cluster that you want to establish? 

     

    Cluster Nodes 

    What are the names of the other cluster nodes planned for the initial cluster configuration? 

     

    DES Authentication 

    Do you need to use DES authentication? 

    No  |  Yes  

    Network Address for the Cluster Transport 

    Do you want to accept the default network address (172.16.0.0)?

    Yes   |  No  

    If no, supply your own network address: 

    _____ . _____.0.0

    Do you want to accept the default netmask (255.255.0.0)?

    Yes   |  No  

    If no, supply your own netmask: 

    255.255. ___ . ___

    Point-to-Point Cables 

    If this is a two-node cluster, does this cluster use transport junctions? 

    Yes  |  No 

    Cluster-Transport Junctions 

    If used, what are the names of the two transport junctions? 

      Defaults: switch1 and switch2


    First

      

    Second

      

    Cluster-Transport Adapters and Cables 

    Node name (the node from which you run scinstall):

     

    Transport adapters: 

    First

      

    Second

      

    Where does each transport adapter connect to (a transport junction or another adapter)?

      Junction defaults: switch1 and switch2


      

    For transport junctions, do you want to use the default port name?  

    Yes | No 

    Yes | No 

    If no, what is the name of the port that you want to use? 

      

    Do you want to use autodiscovery to list the available adapters for the other nodes? 

    If no, supply the following information for each additional node: 

    Yes  |  No 

    Specify for each additional node

    Node name: 

     

    Transport adapters: 

    First

      

    Second

      

    Where does each transport adapter connect to (a transport junction or another adapter)?

      Defaults: switch1 and switch2


      

    For transport junctions, do you want to use the default port name? 

    Yes | No 

    Yes | No 

    If no, what is the name of the port that you want to use? 

      

    Software Patch Installation 

    Do you want scinstall to install patches for you?

    Yes  |  No 

    If yes, what is the name of the patch directory? 

     

    Do you want to use a patch list? 

    Yes  |  No 

    Global-Devices File System 

    (specify for each node)

    Do you want to use the default name of the global-devices file system (/globaldevices)?

    Yes  |  No 

    If no, do you want to use an already-existing file system? 

    Yes  |  No 

    What is the name of the file system that you want to use? 

     

    Check 

    Do you want to interrupt installation for sccheck errors? (sccheck verifies that preconfiguration requirements are met)

    Yes  |  No 


    Note –

    You cannot change the private-network address and netmask after scinstall processing is finished. If you need to use a different private-network address or netmask and the node is still in installation mode, follow the procedures in How to Uninstall Sun Cluster Software to Correct Installation Problems. Then perform the procedures in How to Install Sun Cluster Software Packages and in this procedure to reinstall the software and configure the node with the correct information.


  5. Become superuser on the cluster node from which you intend to configure the cluster.

  6. Install additional packages if you intend to use any of the following features.

    • Remote Shared Memory Application Programming Interface (RSMAPI)

    • SCI-PCI adapters for the interconnect transport

    • RSMRDT drivers


    Note –

    Use of the RSMRDT driver is restricted to clusters that run an Oracle9i release 2 SCI configuration with RSM enabled. Refer to Oracle9i release 2 user documentation for detailed installation and configuration instructions.


    1. Determine which packages you must install.

      The following table lists the Sun Cluster 3.1 9/04 packages that each feature requires and the order in which you must install each group of packages. The installer program does not automatically install these packages.

      Feature 

      Additional Sun Cluster 3.1 9/04 Packages to Install  

      RSMAPI 

      SUNWscrif

      SCI-PCI adapters 

      SUNWsci SUNWscid SUNWscidx

      RSMRDT drivers 

      SUNWscrdt

    2. Ensure that any dependency Solaris packages are already installed.

      See Step 8 in How to Install Solaris Software.

    3. Insert the Sun Cluster 3.1 9/04 CD-ROM into the CD-ROM drive of a node.

    4. Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/ directory, where arch is sparc or x86, and where ver is 8 (for Solaris 8) or 9 (for Solaris 9).


      # cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
      

    5. Install the additional packages.


      # pkgadd -d . packages
      

    6. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


      # eject cdrom
      

    7. Repeat for each additional node in the cluster.

  7. On one node, start the scinstall utility.


    # /usr/cluster/bin/scinstall
    

  8. Follow these guidelines to use the interactive scinstall utility:

    • Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.

    • Unless otherwise noted, you can press Control-D to return either to the start of a series of related questions or to the Main Menu.

    • Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.

  9. From the Main Menu, choose the menu item, Install a cluster or cluster node.


     *** Main Menu ***
    
        Please select from one of the following (*) options:
    
          * 1) Install a cluster or cluster node
            2) Configure a cluster to be JumpStarted from this install server
            3) Add support for new data services to this cluster node
          * 4) Print release information for this cluster node
          * ?) Help with menu options
          * q) Quit
    
        Option:  1
    

  10. From the Install Menu, choose the menu item, Install all nodes of a new cluster.

  11. From the Type of Installation menu, choose either Typical or Custom.

  12. Follow the menu prompts to supply your answers from the worksheet that you completed in Step 4.

    The scinstall utility installs and configures all cluster nodes and reboots the cluster. The cluster is established when all nodes have successfully booted into the cluster. Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.

  13. Install Sun StorEdge QFS file system software.

    Follow the procedures for initial installation in the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.

  14. (Optional) SPARC: To install VERITAS File System, go to SPARC: How to Install VERITAS File System Software.

  15. Set up the name-service look-up order.

    Go to How to Configure the Name-Service Switch.

Example – Configuring Sun Cluster Software on All Nodes

The following example shows the scinstall progress messages that are logged as scinstall completes configuration tasks on a two-node cluster. The cluster node names are phys-schost-1 and phys-schost-2. The specified adapter names are qfe2 and hme2.


  Installation and Configuration

    Log file - /var/cluster/logs/install/scinstall.log.834

    Testing for "/globaldevices" on "phys-schost-1" ... done
    Testing for "/globaldevices" on "phys-schost-2" ... done

    Checking installation status ... done

    The Sun Cluster software is already installed on "phys-schost-1".
    The Sun Cluster software is already installed on "phys-schost-2".

    Starting discovery of the cluster transport configuration.

    Probing ..

    The following connections were discovered:

        phys-schost-1:qfe2  switch1  phys-schost-2:qfe2
        phys-schost-1:hme2  switch2  phys-schost-2:hme2

    Completed discovery of the cluster transport configuration.

    Started sccheck on "phys-schost-1".
    Started sccheck on "phys-schost-2".

    sccheck completed with no errors or warnings for "phys-schost-1".
    sccheck completed with no errors or warnings for "phys-schost-2".

    Configuring "phys-schost-2" ... done
    Rebooting "phys-schost-2" ... done

    Configuring "phys-schost-1" ... done
    Rebooting "phys-schost-1" ... 

Log file - /var/cluster/logs/install/scinstall.log.834

Rebooting ... 

Using SunPlex Installer to Install Sun Cluster Software


Note –

To add a new node to an existing cluster, instead follow the procedures in How to Configure Sun Cluster Software on Additional Cluster Nodes (scinstall).


This section describes how to install SunPlex Manager software. This section also describes how to use SunPlex Installer, the installation module of SunPlex Manager, to install Sun Cluster software and to establish new cluster nodes. You can also use SunPlex Installer to install or configure one or more of the following additional software products:

  • Solstice DiskSuite software (Solaris 8) or Solaris Volume Manager software (Solaris 9)

  • Sun Cluster HA for NFS data service

  • Sun Cluster HA for Apache scalable data service

Installation Requirements

The following table lists SunPlex Installer installation requirements for these additional software products.

Table 2–4 Requirements to Use SunPlex Installer to Install Software

Software Package 

Installation Requirements 

Solstice DiskSuite or Solaris Volume Manager 

A partition that uses /sds as the mount-point name. The partition must be at least 20 Mbytes in size.

Sun Cluster HA for NFS data service 

  • At least two shared disks, of the same size, that are connected to the same set of nodes.

  • Solstice DiskSuite software installed, or Solaris Volume Manager software configured, by SunPlex Installer.

  • A logical hostname for use by Sun Cluster HA for NFS. The logical hostname must have a valid IP address that is accessible by all cluster nodes. The IP address must be on the same subnet as the base hostnames of the cluster nodes.

  • A test IP address for each node of the cluster. SunPlex Installer uses these test IP addresses to create Internet Protocol (IP) Network Multipathing groups for use by Sun Cluster HA for NFS.

Sun Cluster HA for Apache scalable data service 

  • At least two shared disks of the same size that are connected to the same set of nodes.

  • Solstice DiskSuite software installed, or Solaris Volume Manager software configured, by SunPlex Installer.

  • A shared address for use by Sun Cluster HA for Apache. The shared address must have a valid IP address that is accessible by all cluster nodes. The IP address must be on the same subnet as the base hostnames of the cluster nodes.

  • A test IP address for each node of the cluster. SunPlex Installer uses these test IP addresses to create Internet Protocol (IP) Network Multipathing groups for use by Sun Cluster HA for Apache.

Test IP Addresses

The test IP addresses that you supply must meet the following requirements:

The following table lists each metaset name and cluster-file-system mount point that is created by SunPlex Installer. The number of metasets and mount points that SunPlex Installer creates depends on the number of shared disks that are connected to the node. For example, if a node is connected to four shared disks, SunPlex Installer creates the mirror-1 and mirror-2 metasets. However, SunPlex Installer does not create the mirror-3 metaset, because the node does not have enough shared disks to create a third metaset.

Table 2–5 Metasets Installed by SunPlex Installer

Shared Disks 

Metaset Name 

Cluster File System Mount Point 

Purpose 

First pair 

mirror-1

/global/mirror-1

Sun Cluster HA for NFS or Sun Cluster HA for Apache scalable data service, or both 

Second pair 

mirror-2

/global/mirror-2

Unused 

Third pair 

mirror-3

/global/mirror-3

Unused 


Note –

If the cluster does not meet the minimum shared-disk requirement, SunPlex Installer still installs the Solstice DiskSuite packages. However, without sufficient shared disks, SunPlex Installer cannot configure the metasets, metadevices, or volumes. SunPlex Installer then cannot configure the cluster file systems that are needed to create instances of the data service.


Character-Set Limitations

SunPlex Installer recognizes a limited character set to increase security. Characters that are not a part of the set are silently filtered out when HTML forms are submitted to the SunPlex Installer server. The following characters are accepted by SunPlex Installer:


()+,-./0-9:=@A-Z^_a-z{|}~

This filter can cause problems in the following two areas:

How to Install SunPlex Manager Software

This procedure describes how to install SunPlex Manager software on your cluster.

Perform this procedure on each node of the cluster.

  1. Ensure that Solaris software and patches are installed on each node of the cluster.

    You must install Solaris software as described in How to Install Solaris Software. Or, if Solaris software is already installed on the node, you must ensure that the Solaris installation meets the requirements for Sun Cluster software. You must also ensure that the installation meets the requirements for any other software that you intend to install on the cluster.

  2. Review the requirements and guidelines in Using SunPlex Installer to Install Sun Cluster Software.

  3. x86: Determine whether you are using the Netscape Navigator™ browser or the Microsoft Internet Explorer browser on your administrative console.

    • If you are using Netscape Navigator, go to Step 4.

    • If you are using Internet Explorer, go to Step 5.

  4. x86: Ensure that the Java plug-in is installed and working on your administrative console.

    1. Start the Netscape Navigator browser on the administrative console that you use to connect to the cluster.

    2. From the Help menu, choose About Plug-ins.

    3. Determine whether the Java plug-in is listed.

    4. Download the latest Java plug-in from http://java.sun.com/products/plugin.

    5. Install the plug-in on your administrative console.

    6. Create a symbolic link to the plug-in.


      % cd ~/.netscape/plugins/
      % ln -s /usr/j2se/plugin/i386/ns4/javaplugin.so .
      

    7. Skip to Step 6.

  5. x86: Ensure that Java 2 Platform, Standard Edition (J2SE) for Windows is installed and working on your administrative console.

    1. On your Microsoft Windows desktop, click Start, point to Settings, and then select Control Panel.

      The Control Panel window appears.

    2. Determine whether the Java Plug-in is listed.

      • If no, proceed to Step c.

      • If yes, double-click the Java Plug-in control panel. When the control panel window opens, click the About tab.

        • If version 1.4.1 or a later version is shown, skip to Step 6.

        • If an earlier version is shown, proceed to Step c.

    3. Download the latest version of J2SE for Windows from http://java.sun.com/j2se/downloads.html.

    4. Install the J2SE for Windows software on your administrative console.

    5. Restart the system on which your administrative console runs.

      The J2SE for Windows control panel is activated.

  6. Become superuser on a cluster node.

  7. Ensure that Apache software packages are installed on the node.


    # pkginfo SUNWapchr SUNWapchu SUNWapchd
    

    If necessary, install any missing Apache software packages by performing the following steps.

    1. Insert the Solaris 8 or Solaris 9 Software 2 of 2 CD-ROM into the CD-ROM drive of the node.

      If the volume management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0/ directory.

    2. Change to the Product/ directory.

      • For Solaris 8, change to the /cdrom/sol_8_sparc/Solaris_8/Product/ directory.


        # cd /cdrom/sol_8_sparc/Solaris_8/Product/
        

      • For Solaris 9, change to the /cdrom/cdrom0/Solaris_9/Product/ directory.


        # cd /cdrom/cdrom0/Solaris_9/Product/
        

    3. Install the Apache software packages in the order that is shown in this step.


      # pkgadd -d . SUNWapchr SUNWapchu SUNWapchd
      

    4. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


      # eject cdrom
      

    5. Install any Apache software patches.

      See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

  8. If not already installed, install the Java Dynamic Management Kit (JDMK) packages.

    These packages are required by Sun Cluster software.

    1. Insert the Sun Cluster 3.1 9/04 CD-ROM.

    2. Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/ directory, where arch is sparc or x86, and where ver is 8 (for Solaris 8) or 9 (for Solaris 9).


      phys-schost-1# cd Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
      

    3. Install the JDMK packages.


      phys-schost-1# pkgadd -d . SUNWjdmk*
      

    4. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


      # eject cdrom
      

  9. If not already installed, install the security files for the common agent container.

    Perform the following steps to ensure that the common agent container security files are identical on all cluster nodes and that the copied files retain the correct file permissions. These files are required by Sun Cluster software.

    1. On all cluster nodes, stop the security file agent for the common agent container.


      # /opt/SUNWcacao/bin/cacaoadm stop
      

    2. On one node of the cluster, insert the Sun Cluster 3.1 9/04 CD-ROM.

    3. Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/ directory, where arch is sparc or x86, and where ver is 8 (for Solaris 8) or 9 (for Solaris 9).


      phys-schost-1# cd Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
      

    4. Install the common agent container packages.


      phys-schost-1# pkgadd -d . SUNWcacao*
      

    5. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


      # eject cdrom
      

    6. Change to the /etc/opt/SUNWcacao/ directory.


      phys-schost-1# cd /etc/opt/SUNWcacao/
      

    7. Create a tar file of the /etc/opt/SUNWcacao/security/ directory.


      phys-schost-1# tar cf /tmp/SECURITY.tar security
      

    8. Copy the /tmp/SECURITY.tar file to each of the other cluster nodes.

    9. On each node to which you copied the /tmp/SECURITY.tar file, extract the security files.

      Any security files that already exist in the /etc/opt/SUNWcacao/ directory are overwritten.


      phys-schost-2# cd /etc/opt/SUNWcacao/
      phys-schost-2# tar xf /tmp/SECURITY.tar
      

    10. Delete the /tmp/SECURITY.tar file from each node in the cluster.

      You must delete each copy of the tar file to avoid security risks.


      phys-schost-1# rm /tmp/SECURITY.tar
      phys-schost-2# rm /tmp/SECURITY.tar
      

    11. On all nodes, restart the security file agent.


      phys-schost-1# /opt/SUNWcacao/bin/cacaoadm start
      

  10. Install Sun Web Console packages.

    These packages are required by Sun Cluster software, even if you do not use Sun Web Console.

    1. Insert the Sun Cluster 3.1 9/04 CD-ROM in the CD-ROM drive.

    2. Change to the /cdrom/cdrom0/Solaris_arch/Product/sun_web_console/2.1/ directory, where arch is sparc or x86.

    3. Run the setup command.


      # ./setup
      

      The setup command installs all packages to support Sun Web Console.

  11. Install the SunPlex Manager software packages.

    1. Insert the Sun Cluster 3.1 9/04 CD-ROM into the CD-ROM drive of the node.

    2. Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/ directory, where arch is sparc or x86, and where ver is 8 (for Solaris 8) or 9 (for Solaris 9).


      # cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
      

    3. Install the SunPlex Manager software packages.


      # pkgadd -d . SUNWscva SUNWscspm SUNWscspmu SUNWscspmr
      

    4. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


      # eject cdrom
      

  12. Use SunPlex Installer to install and configure Sun Cluster software.

    Go to How to Install and Configure Sun Cluster Software (SunPlex Installer).

How to Install and Configure Sun Cluster Software (SunPlex Installer)


Note –

To add a new node to an existing cluster, instead follow the procedures in How to Configure Sun Cluster Software on Additional Cluster Nodes (scinstall).


Perform this procedure to use SunPlex Installer to install and configure Sun Cluster software and patches on all nodes in the cluster in a single operation. In addition, you can use this procedure to install Solstice DiskSuite software and patches (Solaris 8) or to configure Solaris Volume Manager mirrored disk sets (Solaris 9).

If you use SunPlex Installer to install Solstice DiskSuite software or to configure Solaris Volume Manager disk sets, you can also install one or both of the following data services:

  • Sun Cluster HA for NFS data service

  • Sun Cluster HA for Apache scalable data service

The installation process might take from 30 minutes to two or more hours. The actual length of time depends on the number of nodes that are in the cluster, your choice of data services to install, and the number of disks that are in your cluster configuration.

  1. Ensure that the cluster configuration meets the requirements to use SunPlex Installer to install software.

    See Using SunPlex Installer to Install Sun Cluster Software for installation requirements and restrictions.

  2. Ensure that the root password is the same on every node of the cluster.

    To use the root password to access SunPlex Installer or SunPlex Manager, the root password must be the same on all nodes in the cluster.

    If some nodes have a different root password than other nodes, set the root password to the same value on each node of the cluster. If necessary, also use the chkey command to update the RPC key pair. See the chkey(1) man page.


    # passwd
    Enter new password
    # chkey -p
    

  3. If you intend to install Sun Cluster HA for NFS or Sun Cluster HA for Apache, ensure that the cluster configuration meets all applicable requirements.

    See Using SunPlex Installer to Install Sun Cluster Software.

  4. Ensure that SunPlex Manager software is installed on each node of the cluster.

    See the installation procedures in How to Install SunPlex Manager Software.

  5. Prepare file-system paths to a CD-ROM image of each software product that you intend to install.

    Follow these guidelines to prepare the file-system paths:

    • Provide each CD-ROM image in a location that is available to each node.

    • Ensure that the CD-ROM images are accessible to all nodes of the cluster from the same file-system path. These paths can be one or more of the following locations:

      • CD-ROM drives that are exported to the network from machines outside the cluster.

      • Exported file systems on machines outside the cluster.

      • CD-ROM images that are copied to local file systems on each node of the cluster. The local file system must use the same name on each node.
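
    For example, the following sketch exports a copied CD-ROM image from a machine outside the cluster and mounts it at the same path on each cluster node. The machine name admincon and the directory names are hypothetical.


    admincon# share -F nfs -o ro /export/cdimages/suncluster
    phys-schost-1# mkdir -p /cdimages/suncluster
    phys-schost-1# mount -F nfs admincon:/export/cdimages/suncluster /cdimages/suncluster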

  6. Install additional packages if you intend to use one or more of the following features.

    • Remote Shared Memory Application Programming Interface (RSMAPI)

    • SCI-PCI adapters for the interconnect transport

    • RSMRDT drivers


    Note –

    Use of the RSMRDT driver is restricted to clusters that run an Oracle9i release 2 SCI configuration with RSM enabled. Refer to Oracle9i release 2 user documentation for detailed installation and configuration instructions.


    1. Determine which packages you must install.

      The following table lists the Sun Cluster 3.1 9/04 packages that each feature requires and the order in which you must install each group of packages. SunPlex Installer does not automatically install these packages.

      Feature 

      Additional Sun Cluster 3.1 9/04 Packages to Install  

      RSMAPI 

      SUNWscrif

      SCI-PCI adapters 

      SUNWsci SUNWscid SUNWscidx

      RSMRDT drivers 

      SUNWscrdt

    2. Ensure that any dependency Solaris packages are already installed.

      See Step 8 in How to Install Solaris Software.

    3. Insert the Sun Cluster 3.1 9/04 CD-ROM into the CD-ROM drive of a node.

    4. Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/ directory, where arch is sparc or x86, and where ver is 8 (for Solaris 8) or 9 (for Solaris 9).


      # cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
      

    5. Install the additional packages.


      # pkgadd -d . packages
      

    6. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


      # eject cdrom
      

    7. Repeat for each additional node in the cluster.

  7. If patches exist that are required to support Sun Cluster or Solstice DiskSuite software, determine how to install those patches.

    • To install patches manually, use the patchadd command to install all patches before you use SunPlex Installer.

    • To use SunPlex Installer to install patches, copy patches into a single directory.

      Ensure that the patch directory meets the following requirements:

      • The patch directory resides on a file system that is available to each node.

      • Only one version of each patch is present in this patch directory.

        If the patch directory contains multiple versions of the same patch, SunPlex Installer cannot determine the correct patch dependency order.

      • The patches are uncompressed.
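
    For example, if the downloaded patches are compressed, you might uncompress them in the patch directory before you start SunPlex Installer. The patch ID in this sketch is hypothetical.


    # cd /var/cluster/patches/
    # unzip 111111-01.zip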

  8. Complete the following installation worksheet.

    Table 2–6 SunPlex Installer Installation and Configuration Worksheet

    Component 

    Description/Example 

    Enter Answers Here 

    Cluster Name 

    What is the name of the cluster that you want to establish? 

     

    How many nodes are you installing in the cluster?  

     

    Node Names 

    What are the names of the cluster nodes? 

     

    Cluster-Transport Adapters and Cables 

    What are the names of the two transport adapters to use, two adapters per node? 

     

    Solstice DiskSuite or Solaris Volume Manager  

    • Solaris 8: Do you want to install Solstice DiskSuite?

    • Solaris 9: Do you want to configure Solaris Volume Manager?

    Yes  |  No 

    Sun Cluster HA for NFS 

    Requires Solstice DiskSuite or Solaris Volume Manager

    Do you want to install Sun Cluster HA for NFS? 

    If yes, also specify the following:  

    Yes  |  No 

    What is the logical hostname that the data service is to use? 

     

    What are the test IP addresses to use?  

    Supply one test IP address for each node in the cluster.

     

    Sun Cluster HA for Apache (scalable) 

    Requires Solstice DiskSuite or Solaris Volume Manager

    Do you want to install scalable Sun Cluster HA for Apache? 

    If yes, also specify the following:  

    Yes  |  No 

    What is the logical hostname that the data service is to use? 

     

    What are the test IP addresses to use?  

    Supply one test IP address for each node in the cluster.

     

    CD-ROM Paths 

    What is the path for each of the following components that you want to install? 

    The CD-ROM path must end with the directory that contains the .cdtoc file.

     

    Solstice DiskSuite: 

    Sun Cluster (framework): 

    Sun Cluster data services (agents): 

    Patches: 

    Validation Checks 

    Do you want to run the sccheck utility to validate the cluster?

    Yes  |  No 


    Note –

    SunPlex Installer installation automatically specifies the default private-network address (172.16.0.0) and netmask (255.255.0.0). If you need to use a different address, do not use SunPlex Installer to install Sun Cluster software. Instead, follow procedures in How to Install Sun Cluster Software Packages and in How to Configure Sun Cluster Software on All Nodes (scinstall) to install and configure the cluster.

    You cannot change the private-network address and netmask after scinstall processing has finished. If you need to use a different private-network address or netmask and the node is still in installation mode, follow the procedures in How to Uninstall Sun Cluster Software to Correct Installation Problems. Then repeat this procedure to reinstall and configure the node with the correct information.


    See Planning the Solaris OS and Planning the Sun Cluster Environment for planning guidelines. See the Sun Cluster Data Service Planning and Administration Guide for Solaris OS for data-service planning guidelines.

  9. Start SunPlex Installer.

    1. From the administrative console or any other machine outside the cluster, launch a browser.

    2. Disable the browser's Web proxy.

      SunPlex Installer installation functionality is incompatible with Web proxies.

    3. Ensure that disk caching and memory caching are enabled.

      The disk cache and memory cache size must be greater than 0.

    4. From the browser, connect to port 3000 on a node of the cluster.


      https://node:3000
      

      The Sun Cluster Installation screen is displayed in the browser window.


      Note –

      If SunPlex Installer displays the data services installation screen instead of the Sun Cluster Installation screen, Sun Cluster framework software is already installed and configured on that node. Check that the name of the node in the URL is the correct name of the cluster node to install.


    5. If the browser displays a New Site Certification window, follow the onscreen instructions to accept the certificate.

  10. Log in as superuser.

  11. In the Sun Cluster Installation screen, verify that the cluster meets the listed requirements for using SunPlex Installer.

    If you meet all listed requirements, click Next to continue to the next screen.

  12. Follow the menu prompts to supply your answers from the worksheet that you completed in Step 8.

  13. Click Begin Installation to start the installation process.

    Follow these guidelines:

    • Do not close the browser window or change the URL during the installation process.

    • If the browser displays a New Site Certification window, follow the onscreen instructions to accept the certificate.

    • If the browser prompts for login information, type the appropriate superuser ID and password for the node that you connect to.

    SunPlex Installer installs and configures all cluster nodes and reboots the cluster. The cluster is established when all nodes have successfully booted into the cluster.

    During installation, the screen displays brief messages about the status of the cluster installation process. When installation and configuration are complete, the browser displays the cluster monitoring and administration GUI.

    SunPlex Installer installation output is logged in the /var/cluster/spm/messages file. Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.

  14. Verify the quorum assignments and modify those assignments, if necessary.

    For clusters with three or more nodes, the use of shared quorum devices is optional. SunPlex Installer might or might not have assigned quorum votes to any quorum devices, depending on whether appropriate shared disks were available. You can use SunPlex Manager to designate quorum devices and to reassign quorum votes in the cluster. See “Administering Quorum” in Sun Cluster System Administration Guide for Solaris OS for more information.

  15. Install Sun StorEdge QFS file system software.

    Follow the procedures for initial installation in the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.

  16. (Optional) SPARC: To install VERITAS File System, go to SPARC: How to Install VERITAS File System Software.

  17. Set up the name-service look-up order.

    Go to How to Configure the Name-Service Switch.

How to Install Solaris and Sun Cluster Software (JumpStart)

This procedure describes how to set up and use the scinstall(1M) custom JumpStart installation method. This method installs both Solaris OS and Sun Cluster software on all cluster nodes in the same operation and establishes the cluster. You can also use this procedure to add new nodes to an existing cluster.

  1. Ensure that the hardware setup is complete and that connections are verified before you install Solaris software.

    See the Sun Cluster Hardware Administration Collection and your server and storage device documentation for details on how to set up the hardware.

  2. Ensure that your cluster configuration planning is complete.

    See How to Prepare for Cluster Software Installation for requirements and guidelines.

  3. If you use a naming service, add the following information to any naming services that clients use to access cluster services.

    • Address-to-name mappings for all public hostnames and logical addresses

    • The IP address and hostname of the JumpStart server

    See IP Addresses for planning guidelines. See your Solaris system-administrator documentation for information about using Solaris naming services.

  4. If you are adding a new node to an existing cluster, add the new node's name to the list of authorized cluster nodes.

    1. Run scsetup(1M) from another cluster node that is active.

    2. Use the scsetup utility to add the new node's name to the list of authorized cluster nodes.

    For more information, see “How to Add a Cluster Node to the Authorized Node List” in “Adding and Removing a Cluster Node” in Sun Cluster System Administration Guide for Solaris OS.

  5. Set up your JumpStart installation server.

  6. On a cluster node or another machine of the same server platform, prepare a flash archive of the Solaris OS and Sun Web Console software.

    1. Install the Solaris OS as described in How to Install Solaris Software.

    2. Insert the Sun Cluster 3.1 9/04 CD-ROM in the CD-ROM drive.

    3. Change to the /cdrom/cdrom0/Solaris_arch/Product/sun_web_console/2.1/ directory, where arch is sparc or x86.

    4. Run the setup command.


      # ./setup
      

      The setup command installs all packages to support Sun Web Console.

    5. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


      # eject cdrom
      

    6. Create the flash archive of the installed system.


      # flar create -n name archive
      
      -n name

      Name to give the flash archive.

      archive

      File name to give the flash archive, with the full path. By convention, the file name ends in .flar.

      Follow procedures in “Creating Web Start Flash Archives” in Solaris 8 Advanced Installation Guide or “Creating Solaris Flash Archives (Tasks)” in Solaris 9 9/04 Installation Guide.
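
      For example, the following sketch uses hypothetical values, naming the archive sc31node and writing the archive file to a /flasharchive/ directory:


      # flar create -n sc31node /flasharchive/sc31node.flar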

  7. Copy the flash archive to the JumpStart installation server.

  8. Ensure that the flash archive on the JumpStart installation server is NFS exported for reading by the JumpStart installation server.

    See “Solaris NFS Environment” in System Administration Guide, Volume 3 or “Managing Network File Systems (Overview)” in System Administration Guide: Resource Management and Network Services for more information about automatic file sharing. See also the share(1M) and dfstab(4) man pages.
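
    For example, assuming the archive file resides in a hypothetical /flasharchive/ directory, you might add a line such as the following to the /etc/dfs/dfstab file on the JumpStart installation server, then run the shareall(1M) command to put the entry into effect:


    # The path /flasharchive is a hypothetical location for the flash archive file
    share -F nfs -o ro /flasharchive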

  9. Create a directory on the JumpStart installation server to hold your copy of the Sun Cluster 3.1 9/04 CD-ROM.

    In the following example, the /export/suncluster/ directory is created for this purpose.


    # mkdir -m 755 /export/suncluster/
    

  10. Copy the Sun Cluster CD-ROM to the JumpStart installation server.

    1. Insert the Sun Cluster 3.1 9/04 CD-ROM into the CD-ROM drive on the JumpStart installation server.

      If the volume management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0/ directory.

    2. Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86 and where ver is 8 (for Solaris 8) or 9 (for Solaris 9) .


      # cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/
      

    3. Copy the CD-ROM to a new directory on the JumpStart installation server.

      The scinstall command creates the new installation directory when the command copies the CD-ROM files. The following example uses the installation directory name /export/suncluster/sc31/.


      # ./scinstall -a /export/suncluster/sc31/
      

    4. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


      # eject cdrom
      

  11. Ensure that the Sun Cluster 3.1 9/04 CD-ROM image on the JumpStart installation server is NFS exported for reading by the JumpStart installation server.

    See “Solaris NFS Environment” in System Administration Guide, Volume 3 or “Managing Network File Systems (Overview)” in System Administration Guide: Resource Management and Network Services for more information about automatic file sharing. See also the share(1M) and dfstab(4) man pages.
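
    To confirm that the image is shared, you can run the share(1M) command with no arguments on the installation server and verify that its output includes the directory that holds the CD-ROM image, for example /export/suncluster/.


    # share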

  12. Have available the following information:

    • The Ethernet address of each cluster node

    • The following completed installation worksheet

    Table 2–7 JumpStart Installation and Configuration Worksheet

    Component 

    Description/Example 

    Enter Answers Here 

    JumpStart Directory 

    What is the name of the JumpStart directory to use? 

     

    Cluster Name 

    What is the name of the cluster that you want to establish? 

     

    Cluster Nodes 

    What are the names of the cluster nodes that are planned for the initial cluster configuration? 

     

    DES Authentication 

    Do you need to use DES authentication? 

    No  |  Yes  

    Network Address for the Cluster Transport 

    Do you want to accept the default network address (172.16.0.0)?

    Yes   |  No  

    If no, supply your own network address: 

    _____ . _____.0.0

    Do you want to accept the default netmask (255.255.0.0)?

    Yes   |  No  

    If no, supply your own netmask: 

    255.255.___ . ___

    Point-to-Point Cables 

    Does this cluster use transport junctions? 

    Yes  |  No 

    Cluster-Transport Junctions 

    If used, what are the names of the two transport junctions? 

      Defaults: switch1 and switch2


    First

    Second

    Cluster-Transport Adapters and Cables 

    First node name: 

     

    Transport adapters: 

    First

      

    Second

      

    What does each transport adapter connect to (a transport junction or another adapter)?

      Junction defaults: switch1 and switch2


      

    For transport junctions, do you want to use the default port name? 

    Yes | No 

    Yes | No 

    If no, what is the name of the port that you want to use? 

      

    Do you want to use autodiscovery to list the available adapters for the other nodes? 

    If no, supply the following information for each additional node: 

    Yes  |  No 

    Specify for each additional node

    Node name: 

     

    Transport adapters: 

    First

      

    Second

      

    What does each transport adapter connect to (a transport junction or another adapter)?

      Junction defaults: switch1 and switch2


      

    For transport junctions, do you want to use the default port name? 

    Yes | No 

    Yes | No 

    If no, what is the name of the port that you want to use? 

      

    Global-Devices File System 

    (specify for each node)

    Do you want to use the default name of the global-devices file system (/globaldevices)?

    Yes  |   No 

    If no, do you want to use an already-existing file system? 

    Yes  |   No 

    What is the name of the file system? 

     

    Software Patch Installation 

    Do you want scinstall to install patches for you?

    Yes  |   No 

    If yes, what is the name of the patch directory? 

     

    Do you want to use a patch list? 

    Yes  |  No 

    See Planning the Solaris OS and Planning the Sun Cluster Environment for planning guidelines.


    Note –

    You cannot change the private-network address and netmask after scinstall processing has finished. If you need to use a different private-network address or netmask and the node is still in installation mode, follow the procedures in How to Uninstall Sun Cluster Software to Correct Installation Problems. Then repeat this procedure to reinstall and configure the node with the correct information.


  13. From the JumpStart installation server, start the scinstall(1M) utility.

    The path /export/suncluster/sc31/ is used here as an example of the installation directory that you created. In the CD-ROM path, replace arch with sparc or x86 and replace ver with 8 (for Solaris 8) or 9 (for Solaris 9).


    # cd /export/suncluster/sc31/Solaris_arch/Product/sun_cluster/ \
    Solaris_ver/Tools/
    # ./scinstall
    

  14. Follow these guidelines to use the interactive scinstall utility:

    • Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.

    • Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.

    • Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.

  15. From the Main Menu, choose the menu item, Configure a cluster to be JumpStarted from this installation server.

    This option is used to configure custom JumpStart finish scripts. JumpStart uses these finish scripts to install the Sun Cluster software.


     *** Main Menu ***
     
        Please select from one of the following (*) options:
     
          * 1) Install a cluster or cluster node
          * 2) Configure a cluster to be JumpStarted from this install server
            3) Add support for new data services to this cluster node
          * 4) Print release information for this cluster node
     
          * ?) Help with menu options
          * q) Quit
     
        Option:  2
    


    Note –

    If the JumpStart option does not have an asterisk in front, the option is disabled. This condition indicates that JumpStart setup is not complete or that the setup has an error. To correct this condition, do the following:

    1. Quit the scinstall utility.

    2. Repeat Step 5 through Step 10 to correct JumpStart setup.

    3. Restart the scinstall utility.


  16. Follow the menu prompts to supply your answers from the worksheet that you completed in Step 12.

    The scinstall command stores your configuration information and creates the following autoscinstall.class default class file in the jumpstart-dir/autoscinstall.d/3.1/ directory.


    install_type    initial_install
    system_type     standalone
    partitioning    explicit
    filesys         rootdisk.s0 free /
    filesys         rootdisk.s1 750  swap
    filesys         rootdisk.s3 512  /globaldevices
    filesys         rootdisk.s7 20
    cluster         SUNWCuser        add
    package         SUNWman          add

  17. Make adjustments to the default autoscinstall.class file to configure JumpStart to install the flash archive.

    1. Change the following entries in the autoscinstall.class file. In the last new entry in the table, archive represents the location of the flash archive file.

      Existing Entry                         New Entry

      install_type  initial_install          install_type      flash_install
      system_type   standalone               archive_location  archive

    2. Remove all entries that would install a specific package.


      cluster         SUNWCuser        add
      package         SUNWman          add
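
    After these changes, the class file resembles the following sketch. The NFS server name and archive file path in the archive_location entry are hypothetical placeholders; supply the actual location of the flash archive that you shared in Step 8.


    # The archive_location server name and path are hypothetical placeholders
    install_type      flash_install
    archive_location  nfs jumpstart-server:/flasharchive/sc31node.flar
    partitioning      explicit
    filesys           rootdisk.s0 free /
    filesys           rootdisk.s1 750  swap
    filesys           rootdisk.s3 512  /globaldevices
    filesys           rootdisk.s7 20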

  18. Set up Solaris patch directories.


    Note –

    If you specified a patch directory to the scinstall utility, patches that are located in Solaris patch directories are not installed.


    1. Create jumpstart-dir/autoscinstall.d/nodes/node/patches/ directories on the JumpStart installation server.

      Create one directory for each node in the cluster, where node is the name of a cluster node. Alternately, use this naming convention to create symbolic links to a shared patch directory.


      # mkdir jumpstart-dir/autoscinstall.d/nodes/node/patches/
      

    2. Place copies of any Solaris patches into each of these directories.

    3. Place copies of any hardware-related patches that you must install after Solaris software is installed into each of these directories.

  19. Set up files to contain the necessary hostname information locally on each node.

    1. On the JumpStart installation server, create files that are named jumpstart-dir/autoscinstall.d/nodes/node/archive/etc/inet/hosts.

      Create one file for each node, where node is the name of a cluster node. Alternately, use this naming convention to create symbolic links to a shared hosts file.

    2. Add the following entries into each file.

      • IP address and hostname of the NFS server that holds a copy of the Sun Cluster CD-ROM image. The NFS server could be the JumpStart installation server or another machine.

      • IP address and hostname of each node in the cluster.
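
      For example, a hosts file might contain entries such as the following. The IP addresses and the name jumpstart-server are hypothetical; the node names match the examples that are used elsewhere in this chapter.


      # Hypothetical addresses; jumpstart-server holds the Sun Cluster CD-ROM image
      192.168.1.10    jumpstart-server
      192.168.1.11    phys-schost-1
      192.168.1.12    phys-schost-2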

  20. If you are using a cluster administrative console, display a console screen for each node in the cluster.

    • If Cluster Control Panel (CCP) software is installed and configured on your administrative console, you can use the cconsole(1M) utility to display the individual console screens. The cconsole utility also opens a master window from which you can send your input to all individual console windows at the same time. Use the following command to start cconsole:


      # /opt/SUNWcluster/bin/cconsole clustername &
      

    • If you do not use the cconsole utility, connect to the consoles of each node individually.

  21. Shut down each node.


    # shutdown -g0 -y -i0
    
  22. Boot each node to start the JumpStart installation.

    • On SPARC based systems, do the following:


      ok boot net - install
      


      Note –

      Surround the dash (-) in the command with a space on each side.


    • On x86 based systems, do the following:

      1. When the BIOS information screen appears, press the Esc key.

        The Select Boot Device screen appears.

      2. On the Select Boot Device screen, choose the listed IBA that is connected to the same network as the JumpStart DHCP installation server.

        The lowest number to the right of the IBA boot choices corresponds to the lower Ethernet port number. The higher number to the right of the IBA boot choices corresponds to the higher Ethernet port number.

        The node reboots and the Device Configuration Assistant appears.

      3. On the Boot Solaris screen, choose Net.

      4. At the following prompt, choose Custom JumpStart and press Enter:


        Select the type of installation you want to perform:
        
                 1 Solaris Interactive
                 2 Custom JumpStart
        
        Enter the number of your choice followed by the <ENTER> key.
        
        If you enter anything else, or if you wait for 30 seconds,
        an interactive installation will be started.

      5. When prompted, answer the questions and follow the instructions on the screen.

    JumpStart installs the Solaris OS and Sun Cluster software on each node.


    Note –

    Unless you have installed your own /etc/inet/ntp.conf file, the scinstall command installs a default ntp.conf file for you. The default file is shipped with references to the maximum number of nodes. Therefore, the xntpd(1M) daemon might issue error messages regarding some of these references at boot time.

    You can safely ignore these messages. See How to Configure Network Time Protocol (NTP) for information on how to suppress these messages under otherwise normal cluster conditions.


    When the installation is successfully completed, each node is fully installed as a new cluster node. Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.

    You cannot change the private-network address and netmask after scinstall processing has finished. If you need to use a different private-network address or netmask and the node is still in installation mode, follow the procedures in How to Uninstall Sun Cluster Software to Correct Installation Problems. Then repeat this procedure to reinstall and configure the node with the correct information.

  23. If you are adding a new node to an existing cluster, create mount points on the new node for all existing cluster file systems.

    1. From another cluster node that is active, display the names of all cluster file systems.


      % mount | grep global | egrep -v node@ | awk '{print $1}'
      

    2. On the node that you added to the cluster, create a mount point for each cluster file system in the cluster.


      % mkdir -p mountpoint
      

      For example, if a file-system name that is returned by the mount command is /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the node that is being added to the cluster.


      Note –

      The mount points become active after you reboot the cluster in Step 26.


    3. If VERITAS Volume Manager (VxVM) is installed on any nodes that are already in the cluster, view the vxio number on each VxVM–installed node.


      # grep vxio /etc/name_to_major
      vxio NNN
      

      • Ensure that the same vxio number is used on each of the VxVM-installed nodes.

      • Ensure that the vxio number is available for use on each of the nodes that do not have VxVM installed.

      • If the vxio number is already in use on a node that does not have VxVM installed, free the number on that node. Change the /etc/name_to_major entry to use a different number.

  24. (Optional) To use dynamic reconfiguration on Sun Enterprise 10000 servers, add the following entry to the /etc/system file. Add this entry on each node in the cluster.


    set kernel_cage_enable=1

    This entry becomes effective after the next system reboot. See the Sun Cluster System Administration Guide for Solaris OS for procedures to perform dynamic reconfiguration tasks in a Sun Cluster configuration. See your server documentation for more information about dynamic reconfiguration.

  25. x86: Set the default boot file to kadb.


    # eeprom boot-file=kadb
    

    Setting this value enables you to reboot the node if you are unable to access a login prompt.

  26. If you performed a task that requires a cluster reboot, follow these steps to perform a reconfiguration reboot of the cluster.

    The following are some of the tasks that require a reboot:

    • Adding a new node to an existing cluster

    • Installing patches that require a node or cluster reboot

    • Making configuration changes that require a reboot to become active

    1. From one node, shut down the cluster.


      # scshutdown
      


      Note –

      Do not reboot the first-installed node of the cluster until after the cluster is shut down. Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established cluster that is still in installation mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum. The entire cluster then shuts down.

      Cluster nodes remain in installation mode until the first time that you run the scsetup(1M) command. You run this command during the procedure How to Perform Postinstallation Setup and Configure Quorum Devices.


    2. Reboot each node in the cluster.

      • On SPARC based systems, do the following:


        ok boot
        

      • On x86 based systems, do the following:


                             <<< Current Boot Parameters >>>
        Boot path: /pci@0,0/pci-ide@7,1/ata@1/cmdk@0,0:b
        Boot args:
        
        Type   b [file-name] [boot-flags] <ENTER>  to boot with options
        or     i <ENTER>                           to enter boot interpreter
        or     <ENTER>                             to boot with defaults
        
                         <<< timeout in 5 seconds >>>
        Select (b)oot or (i)nterpreter: b
        

    The scinstall utility installs and configures all cluster nodes and reboots the cluster. The cluster is established when all nodes have successfully booted into the cluster. Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.

  27. Install Sun StorEdge QFS file system software.

    Follow the procedures for initial installation in the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.

  28. SPARC: To install VERITAS File System, go to SPARC: How to Install VERITAS File System Software.

  29. Set up the name-service look-up order.

    Go to How to Configure the Name-Service Switch.

How to Install Sun Cluster Software on a Single-Node Cluster

Perform this task to install Sun Cluster software and establish the cluster on a single node by using the scinstall command. See the scinstall(1M) man page for details.


Note –

You cannot use SunPlex Installer or the interactive form of the scinstall utility to install Sun Cluster software on a single-node cluster.


The scinstall -iFo command establishes the following defaults during installation:

Some steps that are required for multinode cluster installations are not necessary for single-node cluster installations. When you install a single-node cluster, you do not need to perform the following steps:


Tip –

If you anticipate eventually adding a second node to your cluster, you can configure the transport interconnect during initial cluster installation. The transport interconnect is then available for later use. See the scinstall(1M) man page for details.

You can later expand a single-node cluster into a multinode cluster by following the appropriate procedures provided in How to Configure Sun Cluster Software on Additional Cluster Nodes (scinstall).


  1. Ensure that the Solaris OS is installed to support Sun Cluster software.

    If Solaris software is already installed on the node, you must ensure that the Solaris installation meets the requirements for Sun Cluster software and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.

  2. Become superuser on the cluster node to install.

  3. Install Sun Web Console packages.

    These packages are required by Sun Cluster software, even if you do not use Sun Web Console.

    1. Insert the Sun Cluster 3.1 9/04 CD-ROM in the CD-ROM drive.

    2. Change to the /cdrom/cdrom0/Solaris_arch/Product/sun_web_console/2.1/ directory, where arch is sparc or x86.

    3. Run the setup command.


      # ./setup
      

      The setup command installs all packages to support Sun Web Console.

  4. On the Sun Cluster 3.1 9/04 CD-ROM, change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86 and where ver is 8 (for Solaris 8) or 9 (for Solaris 9) .


    # cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/
    

  5. Install the Sun Cluster software and necessary patches by using the scinstall command.


    ./scinstall -iFo [-M patchdir=dirname]
    -i

    Specifies the install form of the scinstall command. The scinstall command installs Sun Cluster software and initializes the node as a new cluster.

    -F

    Establishes the node as the first node in a new cluster. All -F options can be used when installing a single-node cluster.

    -o

    Specifies that only one node is being installed for a single-node cluster. The -o option is only legal when used with both the -i and the -F forms of the command. When the -o option is used, cluster installation mode is preset to the disabled state.

    -M patchdir=dirname[,patchlistfile=filename]

    Specifies the path to patch information so that the specified patches can be installed by using the scinstall command. If you do not specify a patch-list file, the scinstall command installs all the patches in the directory dirname. This includes tarred, jarred, and zipped patches.

    The -M option is not required with the scinstall -iFo command. The -M option is shown in this procedure because the use of this option is the most efficient method of installing patches during a single-node cluster installation. However, you can use any method that you prefer to install patches.

  6. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


    # eject cdrom
    

  7. Reboot the node.

    This reboot after Sun Cluster software installation establishes the node as a single-node cluster.

  8. (Optional) Change the cluster name.

    A single-node cluster is created with the same name as the cluster node. If you prefer, you can change the cluster name. Use either the scsetup utility or the following scconf command:


    # /usr/cluster/bin/scconf -c -C cluster=newclustername
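
    For example, to rename the cluster to sc-cluster, an illustrative cluster name that also appears in the scinstall example later in this chapter:


    # /usr/cluster/bin/scconf -c -C cluster=sc-cluster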
    

  9. Verify the installation by using the scstat command.


    # /usr/cluster/bin/scstat -n
    

    The command output should list the cluster node with the status of Online. See the scstat(1M) man page for details.

  10. Ensure that cluster installation mode is disabled.


    # /usr/cluster/bin/scconf -pv | grep "install mode"
    
  11. (Optional) SPARC: To install VERITAS File System, go to SPARC: How to Install VERITAS File System Software.

  12. Set up the name-service look-up order.

    Go to How to Configure the Name-Service Switch.

Example—Installing Sun Cluster Software on a Single-Node Cluster

The following example shows how to use the scinstall and scstat commands to install and verify a single-node cluster. The example includes installation of all patches. See the scinstall(1M) and scstat(1M) man pages for details.


# scinstall -iFo -M patchdir=/var/cluster/patches/

Checking device to use for global devices file system ... done
** Installing SunCluster 3.1 framework **
...
Installing patches ... done

Initializing cluster name to "phys-schost-1" ... done
Initializing authentication options ... done

Setting the node ID for "phys-schost-1" ... done (id=1)

Checking for global devices global file system ... done
Updating vfstab ... done

Verifying that "cluster" is set for "hosts" in nsswitch.conf ... done
Adding the "cluster" switch to "hosts" in nsswitch.conf ... done

Verifying that "cluster" is set for "netmasks" in nsswitch.conf ... done
Adding the "cluster" switch to "netmasks" in nsswitch.conf ... done

Verifying that power management is NOT configured ... done

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done

Ensure network routing is disabled ... done

Please reboot this machine.

# reboot
# scstat -n
-- Cluster Nodes --

                    Node name           Status
                    ---------           ------
  Cluster node:     phys-schost-1       Online
# scconf -pv | grep "install mode"
Cluster install mode:                   disabled

How to Configure Sun Cluster Software on Additional Cluster Nodes (scinstall)

Perform this procedure to add a new node to an existing cluster.

  1. Ensure that all necessary hardware is installed.

  2. Ensure that the Solaris OS is installed to support Sun Cluster software.

    If Solaris software is already installed on the node, you must ensure that the Solaris installation meets the requirements for Sun Cluster software and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Sun Cluster software requirements.

  3. Ensure that Sun Cluster software packages are installed on the node.

    See How to Install Sun Cluster Software Packages.

  4. Complete the following configuration worksheet.

    Table 2–8 Added Node Configuration Worksheet

    Component 

    Description/Example 

    Enter Answers Here 

    Software Patch Installation 

    Do you want scinstall to install patches for you?

    Yes  |  No 

    If yes, what is the patch directory? 

     

    Do you want to use a patch list? 

    Yes  |  No 

    Sponsoring Node 

    What is the name of the sponsoring node?  

    Choose any node that is active in the cluster.

     

    Cluster Name 

    What is the name of the cluster that you want the node to join? 

     

    Check 

    Do you want to run the sccheck validation utility?

    Yes  |  No 

    Autodiscovery of Cluster Transport 

    Do you want to use autodiscovery to configure the cluster transport? 

    If no, supply the following additional information: 

    Yes  |  No 

    Point-to-Point Cables 

    Does the node that you are adding to the cluster make this a two-node cluster? 

    Yes  |  No 

    Does the cluster use transport junctions? 

    Yes  |  No 

    Cluster–Transport Junctions 

    If used, what are the names of the two transport junctions? 

      Defaults: switch1 and switch2


    First

      

    Second

      

    Cluster-Transport Adapters and Cables 

    What are the names of the two transport adapters? 

    First

      

    Second

      

    What does each transport adapter connect to (a transport junction or another adapter)?

      Junction defaults: switch1 and switch2


      

    For transport junctions, do you want to use the default port name? 

    Yes | No 

    Yes | No 

    If no, what is the name of the port that you want to use? 

      

    Global-Devices File System 

    What is the name of the global-devices file system? 

      Default: /globaldevices


     

    Automatic Reboot 

    Do you want scinstall to automatically reboot the node after installation?

    Yes  |  No 

    See Planning the Solaris OS and Planning the Sun Cluster Environment for planning guidelines.

  5. If you are adding this node to a single-node cluster, determine whether two cluster interconnects already exist.

    You must have at least two cables or two adapters configured before you can add a node.


    # scconf -p | grep cable
    # scconf -p | grep adapter
    
    • If the output shows configuration information for two cables or for two adapters, proceed to Step 6.

    • If the output shows no configuration information for either cables or adapters, or shows configuration information for only one cable or adapter, configure new cluster interconnects.

    1. On the existing cluster node, start the scsetup(1M) utility.


      # scsetup
      

    2. Choose the menu item, Cluster interconnect.

    3. Choose the menu item, Add a transport cable.

      Follow the instructions to specify the name of the node to add to the cluster, the name of a transport adapter, and whether to use a transport junction.

    4. If necessary, repeat Step c to configure a second cluster interconnect.

      When finished, quit the scsetup utility.

    5. Verify that the cluster now has two cluster interconnects configured.


      # scconf -p | grep cable
      # scconf -p | grep adapter
      

      The command output should show configuration information for at least two cluster interconnects.

  6. If you are adding this node to an existing cluster, add the new node to the cluster authorized–nodes list.

    1. On any active cluster member, start the scsetup(1M) utility.


      # scsetup
      

      The Main Menu is displayed.

    2. Choose the menu item, New nodes.

    3. Choose the menu item, Specify the name of a machine which may add itself.

    4. Follow the prompts to add the node's name to the list of recognized machines.

      The scsetup utility prints the message Command completed successfully if the task completes without error.

    5. Quit the scsetup utility.

  7. Become superuser on the cluster node to configure.

  8. Install Sun Web Console packages.

    These packages are required by Sun Cluster software, even if you do not use Sun Web Console.

    1. Insert the Sun Cluster 3.1 9/04 CD-ROM in the CD-ROM drive.

    2. Change to the /cdrom/cdrom0/Solaris_arch/Product/sun_web_console/2.1/ directory, where arch is sparc or x86.

    3. Run the setup command.


      # ./setup
      

      The setup command installs all packages to support Sun Web Console.

  9. Install additional packages if you intend to use any of the following features.

    • Remote Shared Memory Application Programming Interface (RSMAPI)

    • SCI-PCI adapters for the interconnect transport

    • RSMRDT drivers


    Note –

    Use of the RSMRDT driver is restricted to clusters that run an Oracle9i release 2 SCI configuration with RSM enabled. Refer to Oracle9i release 2 user documentation for detailed installation and configuration instructions.


    1. Determine which packages you must install.

      The following table lists the Sun Cluster 3.1 9/04 packages that each feature requires and the order in which you must install each group of packages. The scinstall utility does not automatically install these packages.

      Feature                  Additional Sun Cluster 3.1 9/04 Packages to Install

      RSMAPI                   SUNWscrif
      SCI-PCI adapters         SUNWsci SUNWscid SUNWscidx
      RSMRDT drivers           SUNWscrdt

    2. Ensure that any dependency Solaris packages are already installed.

      See Step 8 in How to Install Solaris Software.

    3. On the Sun Cluster 3.1 9/04 CD-ROM, change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/ directory, where arch is sparc or x86, and where ver is 8 (for Solaris 8) or 9 (for Solaris 9) .


      # cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
      

    4. Install the additional packages.


      # pkgadd -d . packages
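
      For example, to add only the RSMAPI package that is listed in the preceding table, you might type the following command. Substitute the package names for the features that you use.


      # pkgadd -d . SUNWscrif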
      

    5. If you are adding a node to a single-node cluster, repeat these steps to add the same packages to the original cluster node.

  10. On the Sun Cluster 3.1 9/04 CD-ROM, change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/ directory, where arch is sparc or x86 and where ver is 8 (for Solaris 8) or 9 (for Solaris 9) .


    # cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Tools/
    

  11. Start the scinstall utility.


    # /usr/cluster/bin/scinstall
    

  12. Follow these guidelines to use the interactive scinstall utility:

    • Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.

    • Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.

    • Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.

  13. From the Main Menu, choose the menu item, Install a cluster or cluster node.


      *** Main Menu ***
    
        Please select from one of the following (*) options:
    
          * 1) Install a cluster or cluster node
            2) Configure a cluster to be JumpStarted from this install server
            3) Add support for new data services to this cluster node
          * 4) Print release information for this cluster node
    
          * ?) Help with menu options
          * q) Quit
    
        Option:  1
    

  14. From the Install Menu, choose the menu item, Add this machine as a node in an existing cluster.

  15. Follow the menu prompts to supply your answers from the worksheet that you completed in Step 4.

    The scinstall utility configures the node and boots the node into the cluster.

  16. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


    # eject cdrom
    

  17. Repeat this procedure on any other node to add to the cluster until all additional nodes are fully configured.

  18. From an active cluster member, prevent any other nodes from joining the cluster.


    # /usr/cluster/bin/scconf -a -T node=.
    
    -a

    Add

    -T

    Specifies authentication options

    node=.

    Specifies the node name of dot (.) to add to the authentication list, to prevent any other node from adding itself to the cluster

    Alternately, you can use the scsetup(1M) utility. See “How to Add a Cluster Node to the Authorized Node List” in “Adding and Removing a Cluster Node” in Sun Cluster System Administration Guide for Solaris OS for procedures.

  19. Update the quorum vote count.

    When you increase or decrease the number of node attachments to a quorum device, the cluster does not automatically recalculate the quorum vote count. This step reestablishes the correct quorum vote.

    Use the scsetup utility to remove each quorum device and then add it back into the configuration. Do this for one quorum device at a time.

    If the cluster has only one quorum device, configure a second quorum device before you remove and re-add the original quorum device. Then remove the second quorum device to return the cluster to its original configuration.
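
    As an alternative to the scsetup menus, the following sketch shows equivalent scconf(1M) commands for a hypothetical quorum device d2. Remove and then add back one quorum device at a time.


    # scconf -r -q globaldev=d2
    # scconf -a -q globaldev=d2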

  20. Install Sun StorEdge QFS file system software.

    Follow the procedures for initial installation in the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.

  21. (Optional) SPARC: To install VERITAS File System, go to SPARC: How to Install VERITAS File System Software.

  22. Set up the name-service look-up order.

    Go to How to Configure the Name-Service Switch.

Example – Configuring Sun Cluster Software on an Additional Node

The following example shows the scinstall command executed and the messages that the utility logs as scinstall completes configuration tasks on the node phys-schost-3. The sponsoring node is phys-schost-1.


 >>> Confirmation <<<
  
    Your responses indicate the following options to scinstall:
  
      scinstall -ik \
           -C sc-cluster \
           -N phys-schost-1 \
           -A trtype=dlpi,name=hme1 -A trtype=dlpi,name=hme3 \
           -m endpoint=:hme1,endpoint=switch1 \
           -m endpoint=:hme3,endpoint=switch2
  
    Are these the options you want to use (yes/no) [yes]?
  
    Do you want to continue with the install (yes/no) [yes]?
  
Checking device to use for global devices file system ... done
  
Adding node "phys-schost-3" to the cluster configuration ... done
Adding adapter "hme1" to the cluster configuration ... done
Adding adapter "hme3" to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Adding cable to the cluster configuration ... done
  
Copying the config from "phys-schost-1" ... done
Setting the node ID for "phys-schost-3" ... done (id=3)
 
Verifying the major number for the "did" driver with "phys-schost-1" ...done
  
Checking for global devices global file system ... done
Updating vfstab ... done
  
Verifying that NTP is configured ... done
Installing a default NTP configuration ... done
Please complete the NTP configuration after scinstall has finished.
  
Verifying that "cluster" is set for "hosts" in nsswitch.conf ... done
Adding the "cluster" switch to "hosts" in nsswitch.conf ... done
  
Verifying that "cluster" is set for "netmasks" in nsswitch.conf ... done
Adding the "cluster" switch to "netmasks" in nsswitch.conf ... done
  
Verifying that power management is NOT configured ... done
Unconfiguring power management ... done 
/etc/power.conf has been renamed to /etc/power.conf.61501001054 
Power management is incompatible with the HA goals of the cluster.
 Please do not attempt to re-configure power management.
  
Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ...done
Ensure network routing is disabled ... done
Network routing has been disabled on this node by creating /etc/notrouter. 
Having a cluster node act as a router is not supported by Sun Cluster. 
Please do not re-enable network routing.
  
Log file - /var/cluster/logs/install/scinstall.log.9853
  
Rebooting ...

SPARC: How to Install VERITAS File System Software

Perform this procedure on each node of the cluster.

  1. Follow the procedures in your VxFS installation documentation to install VxFS software on each node of the cluster.

  2. Install any Sun Cluster patches that are required to support VxFS.

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

  3. In the /etc/system file on each node, set the following values.


    set rpcmod:svc_default_stksize=0x8000
    set lwp_default_stksize=0x6000

    • Sun Cluster software requires a minimum rpcmod:svc_default_stksize setting of 0x8000. Because VxFS installation sets the value of the rpcmod:svc_default_stksize variable to 0x4000, you must manually set the value to 0x8000 after VxFS installation is complete.

    • You must set the lwp_default_stksize variable in the /etc/system file to override the VxFS default value of 0x4000.

  4. Set up the name-service look-up order.

    Go to How to Configure the Name-Service Switch.

How to Configure the Name-Service Switch

Perform this task on each node in the cluster.

  1. Become superuser on the cluster node.

  2. Edit the /etc/nsswitch.conf file.

    1. Verify that cluster is the first source look-up for the hosts and netmasks database entries.

      This order is necessary for Sun Cluster software to function properly. The scinstall(1M) command adds cluster to these entries during installation.

    2. (Optional) To increase availability to data services if the naming service becomes unavailable, change the look-up order of the following entries:

      • For the hosts and netmasks database entries, follow cluster with files.

      • For Sun Cluster HA for NFS, also insert [SUCCESS=return] after cluster files and before name services.


        hosts:      cluster files [SUCCESS=return] nis

        This look-up order ensures that, if the node resolves a name locally, the node does not contact the listed name service(s). Instead, the node returns success immediately.

      • For all other database entries, place files first in the look-up order.

      • If the [NOTFOUND=return] criterion becomes the last item of an entry after you modify the lookup order, the criterion is no longer necessary. You can either delete the [NOTFOUND=return] criterion from the entry or leave the criterion in the entry. A [NOTFOUND=return] criterion at the end of an entry is ignored.

    3. Make any other changes that are required by specific data services.

      See each manual for the data services that you installed.

    The following example shows partial contents of an /etc/nsswitch.conf file. The look-up order for the hosts and netmasks database entries is first cluster, then files. The look-up order for other entries begins with files. The [NOTFOUND=return] criterion is removed from the entries.


    # vi /etc/nsswitch.conf
    …
    passwd:     files nis
    group:      files nis
    …
    hosts:      cluster files nis
    …
    netmasks:   cluster files nis
    …

    See the nsswitch.conf(4) man page for more information about nsswitch.conf file entries.

  3. Set up your root user's environment.

    Go to How to Set Up the Root Environment.

How to Set Up the Root Environment


Note –

In a Sun Cluster configuration, user initialization files for the various shells must verify that they are run from an interactive shell before they attempt to output to the terminal. Otherwise, unexpected behavior or interference with data services might occur. See “Customizing a User's Work Environment” in System Administration Guide, Volume 1 (Solaris 8) or “Customizing a User's Work Environment” in System Administration Guide: Basic Administration (Solaris 9) for more information.


Perform this procedure on each node in the cluster.

  1. Become superuser on a cluster node.

  2. Modify PATH and MANPATH entries in the .cshrc or .profile file.

    1. Set the PATH to include /usr/sbin/ and /usr/cluster/bin/.

    2. Set the MANPATH to include /usr/cluster/man/.

    See your volume manager documentation and other application documentation for additional file paths to set.
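
    For example, a minimal sketch of these entries in a Bourne or Korn shell .profile file follows. If you use the C shell, set the equivalent path and MANPATH variables in the .cshrc file instead.


    # Minimal PATH and MANPATH settings for Sun Cluster administration
    PATH=$PATH:/usr/sbin:/usr/cluster/bin
    MANPATH=$MANPATH:/usr/cluster/man
    export PATH MANPATH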

  3. (Optional) For ease of administration, set the same root password on each node, if you have not already done so.

  4. Install Sun Cluster 3.1 9/04 data-service software packages.

How to Install Data-Service Software Packages (installer)

To install data services from the Sun Cluster 3.1 9/04 release, you can use the installer program to install the packages. To install data services from the Sun Cluster 3.1 release or earlier, follow the procedures in How to Install Data-Service Software Packages (scinstall).

You can run the installer program with a command-line interface (CLI) or with a graphical user interface (GUI). The content and sequence of instructions in the CLI and the GUI are similar. For more information about the installer program, see the installer(1M) man page.

Perform this procedure on each node in the cluster on which you want to run a data service.

  1. Become superuser on the cluster node.

  2. (Optional) If you intend to use the installer program with a GUI, ensure that the DISPLAY environment variable is set.
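
    For example, in a Bourne or Korn shell, where admin-console is a hypothetical name for the machine on which you want the GUI to appear:


    # DISPLAY=admin-console:0.0
    # export DISPLAY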

  3. Load the Sun Cluster 3.1 9/04 Agents CD-ROM into the CD-ROM drive.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0/ directory.

  4. Change to the directory where the CD-ROM is mounted.


    # cd /cdrom/cdrom0/
    

  5. Start the installer program.


    # ./installer
    
  6. When you are prompted, select the type of installation.

    See the Sun Cluster Release Notes for a listing of the locales that are available for each data service.

    • To install all data services on the CD-ROM, select Typical.

    • To install only a subset of the data services on the CD-ROM, select Custom.

  7. When you are prompted, select the locale to install.

    • To install only the C locale, select Typical.

    • To install other locales, select Custom.

  8. Follow instructions on the screen to install the data-service packages on the node.

    After the installation is finished, the installer program provides an installation summary. This summary enables you to view logs that the program created during the installation. These logs are located in the /var/sadm/install/logs/ directory.

  9. Quit the installer program.

  10. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


    # eject cdrom
    

  11. Install any Sun Cluster data-service patches.

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

    You do not have to reboot after you install Sun Cluster data-service patches unless a reboot is specified by the patch special instructions. If a patch instruction requires that you reboot, perform the following steps:

    1. From one node, shut down the cluster by using the scshutdown(1M) command.

    2. Reboot each node in the cluster.


    Note –

    Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established multiple-node cluster which is still in installation mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum. The entire cluster then shuts down.

    Cluster nodes remain in installation mode until you run the scsetup(1M) command, during the procedure How to Perform Postinstallation Setup and Configure Quorum Devices.


  12. Determine your next step.

How to Install Data-Service Software Packages (scinstall)


Note –

You do not need to perform this procedure if you used SunPlex Installer to install Sun Cluster HA for NFS or Sun Cluster HA for Apache or both and if you do not intend to install any other data services. Instead, go to How to Perform Postinstallation Setup and Configure Quorum Devices.


Perform this task on each cluster node to install data services. You must use this procedure to install data services from the Sun Cluster 3.1 release or earlier. To install data services from the Sun Cluster 3.1 9/04 release, you can alternatively use the installer program to install the packages. See How to Install Data-Service Software Packages (installer).

  1. Become superuser on the cluster node.

  2. Load the Sun Cluster 3.1 9/04 Agents CD-ROM into the CD-ROM drive on the node.

    If the volume management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0/ directory.

  3. Change to the directory where the CD-ROM is mounted.


    # cd /cdrom/cdrom0/
    

  4. Start the scinstall(1M) utility.


    # scinstall
    

  5. Follow these guidelines to use the interactive scinstall utility:

    • Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.

    • Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.

    • Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.

  6. From the Main Menu, choose the menu item, Add support for new data services to this cluster node.

  7. Follow the prompts to select the data services to install.

    You must install the same set of data-service packages on each node. This requirement applies even if a node is not expected to host resources for an installed data service.

  8. After the data services are installed, quit the scinstall utility.

  9. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


    # eject cdrom
    

  10. Install any Sun Cluster data-service patches.

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

    You do not have to reboot after you install Sun Cluster data-service patches unless a reboot is specified by the patch special instructions. If a patch instruction requires that you reboot, perform the following steps:

    1. From one node, shut down the cluster by using the scshutdown(1M) command.

    2. Reboot each node in the cluster.


    Note –

    Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established multiple-node cluster which is still in installation mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum. This inability to obtain quorum causes the entire cluster to shut down.

    Cluster nodes remain in installation mode until you run the scsetup(1M) command, during the procedure How to Perform Postinstallation Setup and Configure Quorum Devices.


  11. Determine your next step.

How to Perform Postinstallation Setup and Configure Quorum Devices


Note –

You do not need to configure quorum devices in the following circumstances:

Instead, proceed to How to Verify the Quorum Configuration and Installation Mode.


Perform this procedure one time only, after the cluster is fully formed. Use this procedure to assign quorum votes and then to remove the cluster from installation mode.

  1. From one node, verify that all nodes have joined the cluster.

    Run the scstat(1M) command to display a list of the cluster nodes. You do not need to be logged in as superuser to run this command.


    % scstat -n
    

    Output resembles the following.


    -- Cluster Nodes --
                               Node name      Status
                               ---------      ------
      Cluster node:            phys-schost-1  Online
      Cluster node:            phys-schost-2  Online

  2. On each node, verify device connectivity to the cluster nodes.

    Run the scdidadm(1M) command to display a list of all the devices that the system checks. You do not need to be logged in as superuser to run this command.


    % scdidadm -L
    

    The list on each node should be the same. Output resembles the following:


    1       phys-schost-1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
    2       phys-schost-1:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
    2       phys-schost-2:/dev/rdsk/c1t1d0 /dev/did/rdsk/d2
    3       phys-schost-1:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
    3       phys-schost-2:/dev/rdsk/c1t2d0 /dev/did/rdsk/d3
    …

  3. If you are adding a new node to an existing cluster, determine whether you need to update the quorum configuration to accommodate your cluster's new configuration.

    If this is a new cluster, proceed to Step 4.

    1. See “Quorum Devices” in Sun Cluster Overview for Solaris OS and Sun Cluster Concepts Guide for Solaris OS for information about quorum.

    2. If you need to change the quorum configuration, follow procedures in “Administering Quorum” in Sun Cluster System Administration Guide for Solaris OS.

    3. When the modified quorum configuration is satisfactory, go to How to Verify the Quorum Configuration and Installation Mode.

  4. Determine the global device-ID name of each shared disk that you are configuring as a quorum device.


    Note –

    Any shared disk that you choose must be qualified for use as a quorum device. See Quorum Devices for further information about choosing quorum devices.


    Use the scdidadm output from Step 2 to identify the device–ID name of each shared disk that you are configuring as a quorum device. For example, the output in Step 2 shows that global device d2 is shared by phys-schost-1 and phys-schost-2. You use this information in Step 7.

  5. Become superuser on one node of the cluster.

  6. Start the scsetup(1M) utility.


    # scsetup
    

    The Initial Cluster Setup screen is displayed.


    Note –

    If the Main Menu is displayed instead, initial cluster setup was already successfully performed. Skip to Step 9.


  7. Answer the prompt Do you want to add any quorum disks?.

    • If your cluster is a two-node cluster, you must configure at least one shared quorum device. Type Yes and follow the prompts to configure one or more quorum devices.

    • If your cluster has three or more nodes, quorum device configuration is optional. Type No if you do not want to configure additional quorum devices or type Yes to configure more quorum devices.


    Tip –

    If you later increase or decrease the number of node attachments to a quorum device, the quorum vote count is not automatically recalculated. You can reestablish the correct quorum vote by removing each quorum device and then adding it back into the configuration, one quorum device at a time.

    For a two-node cluster, temporarily add a new quorum device before you remove and add back the original quorum device. Then remove the temporary quorum device.

    See the procedure “How to Modify a Quorum Device Node List” in “Administering Quorum” in Sun Cluster System Administration Guide for Solaris OS.
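
    For illustration only, and assuming the shared device d2 that is identified in the scdidadm output in Step 2, the remove-and-add sequence could use scconf(1M) commands similar to the following. Treat this as a sketch and follow the referenced procedure for the authoritative steps.


    # scconf -r -q globaldev=d2
    # scconf -a -q globaldev=d2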


  8. At the prompt Is it okay to reset "installmode"?, type Yes.

    After the scsetup utility sets the quorum configurations and vote counts for the cluster, the message Cluster initialization is complete is displayed. The utility returns you to the Main Menu.


    Tip –

    If the quorum setup process is interrupted or fails to be completed successfully, rerun scsetup.


  9. Quit the scsetup utility.

  10. Verify the quorum configuration and that installation mode is disabled.

    Go to How to Verify the Quorum Configuration and Installation Mode.

How to Verify the Quorum Configuration and Installation Mode

Perform this procedure to verify that quorum configuration completed successfully and that cluster installation mode is disabled.

  1. From any node, verify the device and node quorum configurations.


    % scstat -q
    

  2. From any node, verify that cluster installation mode is disabled.

    You do not need to be superuser to run this command.


    % scconf -p | grep "install mode"
    Cluster install mode:                disabled

    Cluster installation is complete. You are now ready to install volume management software and to configure the cluster.

Task Map: Configuring the Cluster

The following table lists the tasks to perform to configure your cluster. Before you start these tasks, ensure that you completed the software installation and cluster-establishment tasks that are listed in Table 2–1.

Table 2–9 Task Map: Configuring the Cluster

Task 

Instructions 

Create and mount cluster file systems. 

How to Create Cluster File Systems

Configure IP Network Multipathing groups. 

How to Configure Internet Protocol (IP) Network Multipathing Groups

(Optional) Change a node's private hostname.

How to Change Private Hostnames

Create or modify the NTP configuration file. 

How to Configure Network Time Protocol (NTP)

(Optional) SPARC: Install the Sun Cluster module to Sun Management Center software.

SPARC: Installing the Sun Cluster Module for Sun Management Center

Sun Management Center documentation 

Install third-party applications and configure the applications, data services, and resource groups. 

Sun Cluster Data Services Planning and Administration Guide for Solaris OS

Third-party application documentation 

Configuring the Cluster

This section provides information and procedures to configure the software that you installed on the cluster.

How to Create Cluster File Systems

Perform this procedure to create a cluster file system. Unlike a local file system, a cluster file system is accessible from any node in the cluster. If you used SunPlex Installer to install data services, SunPlex Installer might have already created one or more cluster file systems.


Caution – Caution –

Any data on the disks is destroyed when you create a file system. Be sure that you specify the correct disk device name. If you specify the wrong device name, you might erase data that you did not intend to delete.


Perform this procedure for each cluster file system that you want to create.

  1. Ensure that volume-manager software is installed and configured.

    For volume-manager installation procedures, see Installing and Configuring Solstice DiskSuite or Solaris Volume Manager Software or SPARC: Installing and Configuring VxVM Software.

  2. Become superuser on any node in the cluster.


    Tip –

    For faster file-system creation, become superuser on the current primary of the global device for which you create a file system.


  3. Create a file system.

    • For a UFS file system, use the newfs(1M) command.


      # newfs raw-disk-device
      

      The following table shows examples of names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.

      Volume Manager 

      Sample Disk Device Name 

      Description 

      Solstice DiskSuite or Solaris Volume Manager 

      /dev/md/nfs/rdsk/d1

      Raw disk device d1 within the nfs disk set

      SPARC: VERITAS Volume Manager 

      /dev/vx/rdsk/oradg/vol01

      Raw disk device vol01 within the oradg disk group

      None 

      /dev/global/rdsk/d1s3

      Raw disk device d1s3

    • For a Sun StorEdge QFS file system, follow the procedures for defining the configuration in the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.

    • SPARC: For a VERITAS File System (VxFS) file system, follow the procedures that are provided in your VxFS documentation.

  4. On each node in the cluster, create a mount-point directory for the cluster file system.

    A mount point is required on each node, even if the cluster file system is not accessed on that node.


    Tip –

    For ease of administration, create the mount point in the /global/device-group/ directory. This location enables you to easily distinguish cluster file systems, which are globally available, from local file systems.



    # mkdir -p /global/device-group/mountpoint/
    
    device-group

    Name of the directory that corresponds to the name of the device group that contains the device

    mountpoint

    Name of the directory on which to mount the cluster file system

  5. On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.

    See the vfstab(4) man page for details.

    1. In each entry, specify the required mount options for the type of file system that you use. See Table 2–10, Table 2–11, or Table 2–12 for the list of required mount options.


      Note –

      Do not use the logging mount option for Solstice DiskSuite trans metadevices or Solaris Volume Manager transactional volumes. Trans metadevices and transactional volumes provide their own logging.

      In addition, Solaris Volume Manager transactional-volume logging (formerly Solstice DiskSuite trans-metadevice logging) is scheduled to be removed from the Solaris OS in an upcoming Solaris release. Solaris UFS logging provides the same capabilities but superior performance, as well as lower system administration requirements and overhead.


      Table 2–10 Mount Options for UFS Cluster File Systems

      Mount Option 

      Description 

      global

      Required. This option makes the file system globally visible to all nodes in the cluster.

      logging

      Required. This option enables logging.

      forcedirectio

      Required for cluster file systems that will host Oracle Real Application Clusters RDBMS data files, log files, and control files.


      Note –

      Oracle Real Application Clusters is supported for use only in SPARC based clusters.


      onerror=panic

      Required. You do not have to explicitly specify the onerror=panic mount option in the /etc/vfstab file. This mount option is already the default value if no other onerror mount option is specified.


      Note –

      Only the onerror=panic mount option is supported by Sun Cluster software. Do not use the onerror=umount or onerror=lock mount options. These mount options are not supported on cluster file systems for the following reasons:

      • Use of the onerror=umount or onerror=lock mount option might cause the cluster file system to lock or become inaccessible. This condition might occur if the cluster file system experiences file corruption.

      • The onerror=umount or onerror=lock mount option might cause the cluster file system to become unmountable. This condition might thereby cause applications that use the cluster file system to hang or prevent the applications from being killed.

      A node might require rebooting to recover from these states.


      syncdir

      Optional. If you specify syncdir, you are guaranteed POSIX-compliant file system behavior for the write() system call. If a write() succeeds, then this mount option ensures that sufficient space exists on the disk.

      If you do not specify syncdir, the same behavior occurs that is seen with UFS file systems. When you do not specify syncdir, performance of writes that allocate disk blocks, such as when appending data to a file, can significantly improve. However, in some cases, without syncdir you would not discover an out-of-space condition (ENOSPC) until you close a file.

      You see ENOSPC on close only during a very short time after a failover. With syncdir, as with POSIX behavior, the out-of-space condition would be discovered before the close.

      See the mount_ufs(1M) man page for more information about UFS mount options.

      Table 2–11 SPARC: Mount Parameters for Sun StorEdge QFS Shared File Systems

      Mount Parameter 

      Description 

      shared

      Required. This option specifies that this is a shared file system, which is therefore globally visible to all nodes in the cluster.


      Caution – Caution –

      Ensure that settings in the /etc/vfstab file do not conflict with settings in the /etc/opt/SUNWsamfs/samfs.cmd file. Settings in the /etc/vfstab file override settings in the /etc/opt/SUNWsamfs/samfs.cmd file.


      Certain data services such as Sun Cluster Support for Oracle Real Application Clusters have additional requirements and guidelines for QFS mount parameters. See your data service manual for any additional requirements.

      See the mount_samfs(1M) man page for more information about QFS mount parameters.


      Note –

      Logging is not enabled by an /etc/vfstab mount parameter. To enable logging, follow procedures in the Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.


      Table 2–12 SPARC: Mount Options for VxFS Cluster File Systems

      Mount Option 

      Description 

      global

      Required. This option makes the file system globally visible to all nodes in the cluster.

      log

      Required. This option enables logging.

      See the VxFS mount_vxfs man page and “Administering Cluster File Systems Overview” in Sun Cluster System Administration Guide for Solaris OS for more information about VxFS mount options.

    2. To automatically mount the cluster file system, set the mount at boot field to yes.

    3. Ensure that, for each cluster file system, the information in its /etc/vfstab entry is identical on each node.

    4. Ensure that the entries in each node's /etc/vfstab file list devices in the same order.

    5. Check the boot order dependencies of the file systems.

      For example, consider the scenario where phys-schost-1 mounts disk device d0 on /global/oracle/, and phys-schost-2 mounts disk device d1 on /global/oracle/logs/. With this configuration, phys-schost-2 can boot and mount /global/oracle/logs/ only after phys-schost-1 boots and mounts /global/oracle/.
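
      As a sketch of that scenario, assuming illustrative Solstice DiskSuite metadevices d0 and d1 in an oracle disk set, the /etc/vfstab entries on each node might resemble the following. The nested mount point /global/oracle/logs/ can be mounted only after /global/oracle/ is mounted.


      /dev/md/oracle/dsk/d0 /dev/md/oracle/rdsk/d0 /global/oracle      ufs 2 yes global,logging
      /dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/logs ufs 2 yes global,logging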

  6. On any node in the cluster, run the sccheck(1M) utility.

    The sccheck utility verifies that the mount points exist. The utility also verifies that /etc/vfstab file entries are correct on all nodes of the cluster.


    # sccheck
    

    If no errors occur, nothing is returned.

  7. Mount the cluster file system.


    # mount /global/device-group/mountpoint/
    

    • For UFS and QFS, mount the cluster file system from any node in the cluster.

    • SPARC: For VxFS, mount the cluster file system from the current master of device-group to ensure that the file system mounts successfully. In addition, unmount a VxFS file system from the current master of device-group to ensure that the file system unmounts successfully.


      Note –

      To manage a VxFS cluster file system in a Sun Cluster environment, run administrative commands only from the primary node on which the VxFS cluster file system is mounted.


  8. On each node of the cluster, verify that the cluster file system is mounted.

    You can use either the df(1M) or mount(1M) command to list mounted file systems.
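
    For example, if your mount points follow the /global/device-group/ convention that is recommended in Step 4, you might filter the output as follows:


    # mount | grep /global/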

  9. Configure IP Network Multipathing groups.

    Go to How to Configure Internet Protocol (IP) Network Multipathing Groups.

Example – Creating a Cluster File System

The following example creates a UFS cluster file system on the Solstice DiskSuite metadevice /dev/md/oracle/rdsk/d1.


# newfs /dev/md/oracle/rdsk/d1
…
 
(on each node)
# mkdir -p /global/oracle/d1
# vi /etc/vfstab
#device           device        mount   FS      fsck    mount   mount
#to mount         to fsck       point   type    pass    at boot options
#
/dev/md/oracle/dsk/d1 /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs 2 yes global,logging
(save and exit)
 
(on one node)
# sccheck
# mount /global/oracle/d1
# mount
…
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles
on Sun Oct 3 08:56:16 2000

How to Configure Internet Protocol (IP) Network Multipathing Groups

Perform this task on each node of the cluster. If you used SunPlex Installer to install Sun Cluster HA for Apache or Sun Cluster HA for NFS, SunPlex Installer configured IP Network Multipathing groups for the public-network adapters that those data services use. You must configure IP Network Multipathing groups for the remaining public-network adapters.


Note –

All public-network adapters must belong to an IP Network Multipathing group.


  1. Have available your completed Public Networks Worksheet.

  2. Configure IP Network Multipathing groups.

    Perform procedures for IPv4 addresses in “Deploying Network Multipathing” in IP Network Multipathing Administration Guide (Solaris 8) or “Administering Network Multipathing (Task)” in System Administration Guide: IP Services (Solaris 9).

    Follow these additional requirements to configure IP Network Multipathing groups in a Sun Cluster configuration (a configuration sketch follows the list):

    • Each public network adapter must belong to a multipathing group.

    • For multipathing groups that contain two or more adapters, you must configure a test IP address for each adapter in the group. If a multipathing group contains only one adapter, you do not need to configure a test IP address.

    • Test IP addresses for all adapters in the same multipathing group must belong to a single IP subnet.

    • Test IP addresses must not be used by normal applications because the test IP addresses are not highly available.

    • In the /etc/default/mpathd file, the value of TRACK_INTERFACES_ONLY_WITH_GROUPS must be yes.

    • The name of a multipathing group has no requirements or restrictions.
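
    The following is a minimal configuration sketch for one node, assuming a hypothetical multipathing group named sc_ipmp0 that contains the adapters qfe0 and qfe1, test IP addresses that resolve as phys-schost-1-test1 and phys-schost-1-test2, and a data address that resolves as phys-schost-1. Adapt the adapter names, group name, and addresses to your Public Networks Worksheet.


    (file /etc/hostname.qfe0)
    phys-schost-1 netmask + broadcast + group sc_ipmp0 up \
    addif phys-schost-1-test1 deprecated -failover netmask + broadcast + up
     
    (file /etc/hostname.qfe1)
    phys-schost-1-test2 netmask + broadcast + deprecated group sc_ipmp0 -failover standby up
     
    (verify the in.mpathd setting)
    # grep TRACK_INTERFACES_ONLY_WITH_GROUPS /etc/default/mpathd
    TRACK_INTERFACES_ONLY_WITH_GROUPS=yes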

  3. If you want to change any private hostnames, go to How to Change Private Hostnames.

  4. If you did not install your own /etc/inet/ntp.conf file before you installed Sun Cluster software, install or create the NTP configuration file.

    Go to How to Configure Network Time Protocol (NTP).

  5. If you are using Sun Cluster on a SPARC based system and you want to use Sun Management Center to monitor the cluster, install the Sun Cluster module for Sun Management Center.

    Go to SPARC: Installing the Sun Cluster Module for Sun Management Center.

  6. Install third-party applications, register resource types, set up resource groups, and configure data services.

    Follow procedures in the Sun Cluster Data Services Planning and Administration Guide for Solaris OS as well as in the documentation that is supplied with your application software.

How to Change Private Hostnames

Perform this task if you do not want to use the default private hostnames, clusternodenodeid-priv (where nodeid is the node's ID number), that are assigned during Sun Cluster software installation.


Note –

Do not perform this procedure after applications and data services have been configured and have been started. Otherwise, an application or data service might continue to use the old private hostname after the hostname is renamed, which would cause hostname conflicts. If any applications or data services are running, stop them before you perform this procedure.


Perform this procedure on one active node of the cluster.

  1. Become superuser on a node in the cluster.

  2. Start the scsetup(1M) utility.


    # scsetup
    

  3. From the Main Menu, choose the menu item, Private hostnames.

  4. From the Private Hostname Menu, choose the menu item, Change a private hostname.

  5. Follow the prompts to change the private hostname.

    Repeat for each private hostname to change.

  6. Verify the new private hostnames.


    # scconf -pv | grep "private hostname"
    (phys-schost-1) Node private hostname:      phys-schost-1-priv
    (phys-schost-3) Node private hostname:      phys-schost-3-priv
    (phys-schost-2) Node private hostname:      phys-schost-2-priv

  7. If you did not install your own /etc/inet/ntp.conf file before you installed Sun Cluster software, install or create the NTP configuration file.

    Go to How to Configure Network Time Protocol (NTP).

  8. (Optional) SPARC: Configure Sun Management Center to monitor the cluster.

    Go to SPARC: Installing the Sun Cluster Module for Sun Management Center.

  9. Install third-party applications, register resource types, set up resource groups, and configure data services.

    See the documentation that is supplied with the application software and the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

How to Configure Network Time Protocol (NTP)

Perform this task to create or modify the NTP configuration file after you install Sun Cluster software. You must also modify the NTP configuration file when you add a node to an existing cluster or when you change the private hostname of a node in the cluster. If you added a node to a single-node cluster, you must ensure that the NTP configuration file that you use is copied to the original cluster node.

The primary requirement when you configure NTP, or any time synchronization facility within the cluster, is that all cluster nodes must be synchronized to the same time. Consider accuracy of time on individual nodes to be of secondary importance to the synchronization of time among nodes. You are free to configure NTP as best meets your individual needs if this basic requirement for synchronization is met.

See the Sun Cluster Concepts Guide for Solaris OS for further information about cluster time. See the /etc/inet/ntp.cluster template file for additional guidelines on how to configure NTP for a Sun Cluster configuration.

  1. If you installed your own /etc/inet/ntp.conf file before you installed Sun Cluster software, you do not need to modify your ntp.conf file.

    Skip to Step 8.

  2. Become superuser on a cluster node.

  3. If you have your own file, copy your file to each node of the cluster.

  4. If you do not have your own /etc/inet/ntp.conf file to install, use the /etc/inet/ntp.conf.cluster file as your NTP configuration file.


    Note –

    Do not rename the ntp.conf.cluster file as ntp.conf.


    If the /etc/inet/ntp.conf.cluster file does not exist on the node, you might have an /etc/inet/ntp.conf file from an earlier installation of Sun Cluster software. Sun Cluster software creates the /etc/inet/ntp.conf.cluster file as the NTP configuration file only if an /etc/inet/ntp.conf file is not already present on the node. In that case, perform the following edits on that ntp.conf file instead.

    1. Use your preferred text editor to open the /etc/inet/ntp.conf.cluster file on one node of the cluster for editing.

    2. Ensure that an entry exists for the private hostname of each cluster node.

      If you changed any node's private hostname, ensure that the NTP configuration file contains the new private hostname.
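
      As an illustration, in a three-node cluster whose nodes use the default private hostnames, the entries might resemble the following. The exact peer list, and whether you use the prefer keyword, depends on your configuration; see the ntp.cluster template for guidance.


      peer clusternode1-priv prefer
      peer clusternode2-priv
      peer clusternode3-priv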

    3. Remove any unused private hostnames.

      The ntp.conf.cluster file might contain nonexistent private hostnames. When a node is rebooted, the system generates error messages as the node attempts to contact those nonexistent private hostnames.

    4. If necessary, make other modifications to meet your NTP requirements.

  5. Copy the NTP configuration file to all nodes in the cluster.

    The contents of the NTP configuration file must be identical on all cluster nodes.
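
    For example, from the node on which you edited the file, you might copy it to the other cluster nodes (the node names here are illustrative; use whichever file-transfer method you prefer):


    # rcp /etc/inet/ntp.conf.cluster phys-schost-2:/etc/inet/
    # rcp /etc/inet/ntp.conf.cluster phys-schost-3:/etc/inet/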

  6. Stop the NTP daemon on each node.

    Wait for the stop command to complete successfully on each node before you proceed to Step 7.


    # /etc/init.d/xntpd stop
    

  7. Restart the NTP daemon on each node.

    • If you use the ntp.conf.cluster file, run the following command:


      # /etc/init.d/xntpd.cluster start
      

      The xntpd.cluster startup script first looks for the /etc/inet/ntp.conf file. If that file exists, the script exits immediately without starting the NTP daemon. If the ntp.conf file does not exist but the ntp.conf.cluster file does exist, the script starts the NTP daemon. In this case, the script uses the ntp.conf.cluster file as the NTP configuration file.

    • If you use the ntp.conf file, run the following command:


      # /etc/init.d/xntpd start
      
  8. (Optional) SPARC: Configure Sun Management Center to monitor the cluster.

    Go to SPARC: Installing the Sun Cluster Module for Sun Management Center.

  9. Install third-party applications, register resource types, set up resource groups, and configure data services.

    See the documentation that is supplied with the application software and the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

SPARC: Task Map: Installing the Sun Cluster Module for Sun Management Center

The Sun Cluster module for Sun Management Center enables you to use Sun Management Center to monitor the cluster. The following table lists the tasks to perform to install the Sun Cluster–module software for Sun Management Center.

Table 2–13 Task Map: Installing the Sun Cluster Module for Sun Management Center

Task 

Instructions 

Install Sun Management Center server, help-server, agent, and console packages. 

Sun Management Center documentation 

SPARC: Installation Requirements for Sun Cluster Monitoring

Install Sun Cluster–module packages. 

SPARC: How to Install the Sun Cluster Module for Sun Management Center

Start Sun Management Center server, console, and agent processes. 

SPARC: How to Start Sun Management Center

Add each cluster node as a Sun Management Center agent host object. 

SPARC: How to Add a Cluster Node as a Sun Management Center Agent Host Object

Load the Sun Cluster module to begin to monitor the cluster. 

SPARC: How to Load the Sun Cluster Module

SPARC: Installing the Sun Cluster Module for Sun Management Center

This section provides information and procedures to install the Sun Cluster module to Sun Management Center software.

SPARC: Installation Requirements for Sun Cluster Monitoring

The Sun Cluster module for Sun Management Center is used to monitor a Sun Cluster configuration. Before you install the Sun Cluster–module packages, ensure that the Sun Management Center server, help-server, agent, and console packages are installed on the appropriate machines, as described in your Sun Management Center documentation.

SPARC: How to Install the Sun Cluster Module for Sun Management Center

Perform this procedure to install the Sun Cluster–module server and help–server packages.


Note –

The Sun Cluster–module agent packages, SUNWscsal and SUNWscsam, are already added to cluster nodes during Sun Cluster software installation.


  1. Ensure that all Sun Management Center core packages are installed on the appropriate machines.

    This step includes installing Sun Management Center agent packages on each cluster node. See your Sun Management Center documentation for installation instructions.
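
    To confirm that the Sun Cluster–module agent packages that are named in the preceding note are already in place on a cluster node, you can run a quick check such as the following:


    # pkginfo SUNWscsal SUNWscsam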

  2. On the server machine, install the Sun Cluster–module server package SUNWscssv.

    1. Become superuser.

    2. Insert the Sun Cluster 3.1 9/04 CD-ROM into the CD-ROM drive. If the volume management daemon vold(1M) is running and is configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0/ directory.

    3. Change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/ directory, where arch is sparc or x86, and where ver is 8 (for Solaris 8) or 9 (for Solaris 9).


      # cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
      

    4. Install the Sun Cluster–module server package.


      # pkgadd -d . SUNWscssv
      

    5. Change to a directory that does not reside on the CD-ROM and eject the CD-ROM.


      # eject cdrom
      

  3. On the help-server machine, install the Sun Cluster–module help–server package SUNWscshl.

    Use the same procedure as in the previous step.
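
    Assuming the same CD-ROM directory layout as in the previous step, the commands on the help-server machine would resemble the following:


    # cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
    # pkgadd -d . SUNWscshl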

  4. Install any Sun Cluster–module patches.

    See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.

  5. Start Sun Management Center.

    Go to SPARC: How to Start Sun Management Center.

SPARC: How to Start Sun Management Center

Perform this procedure to start the Sun Management Center server, agent, and console processes.

  1. As superuser, on the Sun Management Center server machine, start the Sun Management Center server process.


    # /opt/SUNWsymon/sbin/es-start -S
    

  2. As superuser, on each Sun Management Center agent machine (cluster node), start the Sun Management Center agent process.


    # /opt/SUNWsymon/sbin/es-start -a
    

  3. On each Sun Management Center agent machine (cluster node), ensure that the scsymon_srv daemon is running.


    # ps -ef | grep scsymon_srv
    

    If any cluster node is not already running the scsymon_srv daemon, start the daemon on that node.


    # /usr/cluster/lib/scsymon/scsymon_srv
    

  4. On the Sun Management Center console machine (administrative console), start the Sun Management Center console.

    You do not need to be superuser to start the console process.


    % /opt/SUNWsymon/sbin/es-start -c
    

  5. Type your login name, password, and server hostname, then click Login.

  6. Add cluster nodes as monitored host objects.

    Go to SPARC: How to Add a Cluster Node as a Sun Management Center Agent Host Object.

SPARC: How to Add a Cluster Node as a Sun Management Center Agent Host Object

Perform this procedure to create a Sun Management Center agent host object for a cluster node.


Note –

You need only one cluster node host object to use Sun Cluster–module monitoring and configuration functions for the entire cluster. However, if that cluster node becomes unavailable, connection to the cluster through that host object also becomes unavailable. Then you need another cluster-node host object to reconnect to the cluster.


  1. From the Sun Management Center main window, select a domain from the Sun Management Center Administrative Domains pull-down list.

    This domain contains the Sun Management Center agent host object that you create. During Sun Management Center software installation, a Default Domain was automatically created for you. You can use this domain, select another existing domain, or create a new domain.

    See your Sun Management Center documentation for information about how to create Sun Management Center domains.

  2. Choose Edit⇒Create an Object from the pull-down menu.

  3. Click the Node tab.

  4. From the Monitor Via pull-down list, select Sun Management Center Agent - Host.

  5. Fill in the name of the cluster node, for example, phys-schost-1, in the Node Label and Hostname text fields.

    Leave the IP text field blank. The Description text field is optional.

  6. In the Port text field, type the port number that you chose when you installed the Sun Management Center agent machine.

  7. Click OK.

    A Sun Management Center agent host object is created in the domain.

  8. Load the Sun Cluster module.

    Go to SPARC: How to Load the Sun Cluster Module.

SPARC: How to Load the Sun Cluster Module

Perform this procedure to start cluster monitoring.

  1. In the Sun Management Center main window, right–click the icon of a cluster node.

    The pull-down menu is displayed.

  2. Choose Load Module.

    The Load Module window lists each available Sun Management Center module and whether the module is currently loaded.

  3. Choose Sun Cluster: Not Loaded and click OK.

    The Module Loader window shows the current parameter information for the selected module.

  4. Click OK.

    After a few moments, the module is loaded. A Sun Cluster icon is then displayed in the Details window.

  5. In the Details window under the Operating System category, expand the Sun Cluster subtree in either of the following ways:

    • In the tree hierarchy on the left side of the window, place the cursor over the Sun Cluster–module icon and single-click the left mouse button.

    • In the topology view on the right side of the window, place the cursor over the Sun Cluster–module icon and double-click the left mouse button.

  6. See the Sun Cluster–module online help for information about how to use Sun Cluster–module features.

    • To view online help for a specific Sun Cluster–module item, place the cursor over the item. Then click the right mouse button and select Help from the pop-up menu.

    • To access the home page for the Sun Cluster–module online help, place the cursor over the Cluster Info icon. Then click the right mouse button and select Help from the pop-up menu.

    • To directly access the home page for the Sun Cluster–module online help, click the Sun Management Center Help button to launch the help browser. Then go to the following URL:

      file:/opt/SUNWsymon/lib/locale/C/help/main.top.html


    Note –

    The Help button in the Sun Management Center browser accesses online help for Sun Management Center, not the topics specific to the Sun Cluster module.


    See Sun Management Center online help and your Sun Management Center documentation for information about how to use Sun Management Center.

  7. Install third-party applications, register resource types, set up resource groups, and configure data services.

    See the documentation that is supplied with the application software and the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Uninstalling the Software

This section provides the following procedures to uninstall or remove Sun Cluster software:

• How to Uninstall Sun Cluster Software to Correct Installation Problems

• How to Uninstall the SUNWscrdt Package

• How to Unload the RSMRDT Driver Manually

How to Uninstall Sun Cluster Software to Correct Installation Problems

Perform this procedure if the installed node cannot join the cluster or if you need to correct configuration information. For example, perform this procedure to reconfigure the transport adapters or the private-network address.


Note –

If the node has already joined the cluster and is no longer in installation mode (see Step 2 of How to Verify the Quorum Configuration and Installation Mode), do not perform this procedure. Instead, go to “How to Uninstall Sun Cluster Software From a Cluster Node” in “Adding and Removing a Cluster Node” in Sun Cluster System Administration Guide for Solaris OS.


  1. Attempt to reinstall the node.

    You can correct certain failed installations by repeating Sun Cluster software installation on the node. If you have already tried to reinstall the node without success, proceed to Step 2 to uninstall Sun Cluster software from the node.

  2. Become superuser on an active cluster member other than the node that you are uninstalling.

  3. From an active cluster member, add the node that you intend to uninstall to the cluster node-authentication list.

    Skip this step if you are uninstalling a single-node cluster.


    # /usr/cluster/bin/scconf -a -T node=nodename
    
    -a

    Add

    -T

    Specifies authentication options

    node=nodename

    Specifies the name of the node to add to the authentication list
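
    For example, to authorize a node named phys-schost-2 (an illustrative name) to rejoin the cluster:


    # /usr/cluster/bin/scconf -a -T node=phys-schost-2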

    Alternately, you can use the scsetup(1M) utility. See “How to Add a Cluster Node to the Authorized Node List” in “Adding and Removing a Cluster Node” in Sun Cluster System Administration Guide for Solaris OS for procedures.

  4. Become superuser on the node that you intend to uninstall.

  5. Shut down the node that you intend to uninstall.


    # shutdown -g0 -y -i0
    
  6. Reboot the node into noncluster mode.

    • On SPARC based systems, do the following:


      ok boot -x
      

    • On x86 based systems, do the following:


                          <<< Current Boot Parameters >>>
      Boot path: /pci@0,0/pci-ide@7,1/ata@1/cmdk@0,0:b
      Boot args:
      
      Type   b [file-name] [boot-flags] <ENTER>  to boot with options
      or     i <ENTER>                           to enter boot interpreter
      or     <ENTER>                             to boot with defaults
      
                       <<< timeout in 5 seconds >>>
      Select (b)oot or (i)nterpreter: b -x
      

  7. Change to a directory, such as the root (/) directory, that does not contain any files that are delivered by the Sun Cluster packages.


    # cd /
    

  8. Uninstall Sun Cluster software from the node.


    # /usr/cluster/bin/scinstall -r
    

    See the scinstall(1M) man page for more information.

  9. Reinstall and reconfigure Sun Cluster software on the node.

    Refer to Table 2–1 for the list of all installation tasks and the order in which to perform the tasks.

How to Uninstall the SUNWscrdt Package

Perform this procedure on each node in the cluster.

  1. Verify that no applications are using the RSMRDT driver before performing this procedure.

  2. Become superuser on the node from which you want to uninstall the SUNWscrdt package.

  3. Uninstall the SUNWscrdt package.


    # pkgrm SUNWscrdt
    

How to Unload the RSMRDT Driver Manually

If the driver remains loaded in memory after completing How to Uninstall the SUNWscrdt Package, perform this procedure to unload the driver manually.

  1. Start the adb utility.


    # adb -kw
    
  2. Set the kernel variable clifrsmrdt_modunload_ok to 1.


    physmem NNNN 
    clifrsmrdt_modunload_ok/W 1
    
  3. Exit the adb utility by pressing Control-D.

  4. Find the clif_rsmrdt and rsmrdt module IDs.


    # modinfo | grep rdt
    

  5. Unload the clif_rsmrdt module.

    You must unload the clif_rsmrdt module before you unload the rsmrdt module.


    # modunload -i clif_rsmrdt_id
    


    Tip –

    If the modunload command fails, applications are probably still using the driver. Terminate the applications before you run modunload again.


    clif_rsmrdt_id

    Specifies the numeric ID for the module being unloaded.

  6. Unload the rsmrdt module.


    # modunload -i rsmrdt_id
    

    rsmrdt_id

    Specifies the numeric ID for the module being unloaded.

  7. Verify that the module was successfully unloaded.


    # modinfo | grep rdt
    

Example—Unloading the RSMRDT Driver

The following example shows the console output after the RSMRDT driver is manually unloaded.


# adb -kw
physmem fc54
clifrsmrdt_modunload_ok/W 1
clifrsmrdt_modunload_ok: 0x0 = 0x1
^D
# modinfo | grep rsm
 88 f064a5cb 974 - 1 rsmops (RSMOPS module 1.1)
 93 f08e07d4 b95 - 1 clif_rsmrdt (CLUSTER-RSMRDT Interface module)
 94 f0d3d000 13db0 194 1 rsmrdt (Reliable Datagram Transport dri)
# modunload -i 93
# modunload -i 94
# modinfo | grep rsm
 88 f064a5cb 974 - 1 rsmops (RSMOPS module 1.1)
#