Sun Cluster 3.0 Installation Guide

Installing the Software

Before you begin, read Chapter 1, Planning the Sun Cluster Configuration and the Sun Cluster 3.0 Release Notes for information that will help you plan your cluster configuration and prepare your installation strategy.

The following table lists the tasks you perform to install the software.

Table 2-1 Task Map: Installing the Software

Task 

For Instructions, Go To ... 

Plan the layout of your cluster configuration. 

Chapter 1, Planning the Sun Cluster Configuration and "Configuration Worksheets and Examples" in Sun Cluster 3.0 Release Notes

(Optional) Install the Cluster Control Panel (CCP) software on the administrative console.

"How to Install Cluster Control Panel Software on the Administrative Console"

Install the Solaris operating environment and Sun Cluster software using one of two methods.

Method 1 - Install Solaris software, then install the Sun Cluster software by using the scinstall utility.

"How to Install the Solaris Operating Environment" and "How to Install Sun Cluster Software and Establish New Cluster Nodes"

Method 2 - Install Solaris software and Sun Cluster software in one operation by using the scinstall utility custom JumpStart option.

"How to Use JumpStart to Install the Solaris Operating Environment and Establish New Cluster Nodes"

Configure the name service look-up order. 

"How to Configure the Name Service Switch"

Install volume manager software.

Install Solstice DiskSuite software. 

"How to Install Solstice DiskSuite Software" and Solstice DiskSuite documentation

Install VERITAS Volume Manager software.  

"How to Install VERITAS Volume Manager Software" and VERITAS Volume Manager documentation

Set up directory paths. 

"How to Set Up the Root User's Environment"

Install data service software packages. 

"How to Install Data Service Software Packages"

Configure the cluster. 

"Configuring the Cluster"

How to Install Cluster Control Panel Software on the Administrative Console

This procedure describes how to install the Cluster Control Panel (CCP) software on the administrative console. The CCP provides a launchpad for the cconsole(1M), ctelnet(1M), and crlogin(1M) tools. Each of these tools provides a multiple-window connection to a set of nodes, plus a common window that sends input to all nodes at one time.

You can use any desktop machine running the Solaris 8 operating environment as an administrative console. You can also use the administrative console as a Sun Management Center console or server, and as an AnswerBook server. Refer to Sun Management Center documentation for information on installing Sun Management Center software. Refer to Sun Cluster 3.0 Release Notes for information on installing an AnswerBook server.


Note -

You are not required to use an administrative console. If you do not use an administrative console, perform administrative tasks from one designated node in the cluster.


  1. Ensure that the Solaris 8 operating environment and any Solaris patches are installed on the administrative console.

    All platforms require Solaris 8 with at least the End User System Support software group.

  2. If you are installing from the CD-ROM, insert the Sun Cluster 3.0 CD-ROM into the CD-ROM drive of the administrative console.

    If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_0 directory.

  3. Change to the /cdrom_image/suncluster_3_0/SunCluster_3.0/Packages directory.


    # cd /cdrom_image/suncluster_3_0/SunCluster_3.0/Packages
    
  4. Install the SUNWccon package.


    # pkgadd -d . SUNWccon
    
  5. (Optional) Install the SUNWscman package.


    # pkgadd -d . SUNWscman
    

    Installing the SUNWscman package on the administrative console enables you to view Sun Cluster man pages from the administrative console prior to installing Sun Cluster software on the cluster nodes.

  6. If you installed from a CD-ROM, eject the CD-ROM.

  7. Create an /etc/clusters file.

    Add your cluster name and the physical node name of each cluster node to the file.


    # vi /etc/clusters
    clustername node1 node2
    

    See the /opt/SUNWcluster/bin/clusters(4) man page for details.

  8. Create an /etc/serialports file.

    Add the physical node name of each cluster node, the terminal concentrator (TC) or System Service Processor (SSP) name, and the serial port numbers to the file.


    Note -

    Use the telnet(1) port numbers, not the physical port numbers, for the serial port numbers in the /etc/serialports file. Determine the serial port number by adding 5000 to the physical port number. For example, if a physical port number is 6, the serial port number should be 5006.



    # vi /etc/serialports
    node1 TC_hostname 500n
    node2 TC_hostname 500n
    

    See the /opt/SUNWcluster/bin/serialports(4) man page for details and special considerations for the Sun Enterprise E10000 server.

  9. For convenience, add the /opt/SUNWcluster/bin directory to the PATH and the /opt/SUNWcluster/man directory to the MANPATH on the administrative console.

    If you installed the SUNWscman package, also add the /usr/cluster/man directory to the MANPATH.
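
    For example, if the root user on the administrative console uses a Bourne-style shell, you might add lines such as the following to the /.profile file. This is an example only; adjust it for your shell and any existing PATH and MANPATH settings, and omit /usr/cluster/man if you did not install the SUNWscman package.


    # vi /.profile
    PATH=$PATH:/opt/SUNWcluster/bin
    MANPATH=$MANPATH:/opt/SUNWcluster/man:/usr/cluster/man
    export PATH MANPATH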

  10. Start the CCP utility.


    # /opt/SUNWcluster/bin/ccp clustername
    

    Refer to the procedure "How to Remotely Log In to Sun Cluster" in Sun Cluster 3.0 System Administration Guide and the /opt/SUNWcluster/bin/ccp(1M) man page for information about using the CCP.

Where to Go From Here

To install Solaris software, go to "How to Install the Solaris Operating Environment". To use the scinstall custom JumpStart option to install Solaris and Sun Cluster software, go to "How to Use JumpStart to Install the Solaris Operating Environment and Establish New Cluster Nodes".

How to Install the Solaris Operating Environment

If you are not using the scinstall(1M) custom JumpStart installation method to install software, perform this task on each node in the cluster.

  1. Ensure that the hardware setup is complete and connections are verified before installing Solaris software.

    Refer to Sun Cluster 3.0 Hardware Guide and your server and storage device documentation for details.

  2. On each node of the cluster, determine whether the local-mac-address variable is correctly set to false.


    # /usr/sbin/eeprom local-mac-address?
    
    • If the command returns local-mac-address=false, the variable setting is correct. Proceed to Step 3.

    • If the command returns local-mac-address=true, change the setting to false.


      # /usr/sbin/eeprom local-mac-address?=false
      

      The new setting becomes effective at the next system reboot.

  3. Have available your completed "Local File System Layout Worksheet" from Sun Cluster 3.0 Release Notes.

  4. Update naming services.

    Add address-to-name mappings for all public hostnames and logical addresses to any naming services (such as NIS, NIS+, or DNS) used by clients for access to cluster services. See "IP Addresses" for planning guidelines.

    You also add these addresses to the local /etc/inet/hosts file on each node during the procedure "How to Configure the Name Service Switch".
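
    For example, the entries for a two-node cluster with one logical hostname might look like the following. The node names and addresses shown are examples only.


    192.168.100.101   phys-schost-1
    192.168.100.102   phys-schost-2
    192.168.100.201   schost-lh-1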

  5. If you are using a cluster administrative console, display a console screen for each node in the cluster.

    If the Cluster Control Panel is installed and configured on your administrative console, you can use the cconsole(1M) utility to display the individual console screens. Otherwise, you must connect to the consoles of each node individually.

    To save time, you can install the Solaris operating environment on each node at the same time. Use the cconsole utility to install all nodes at once.

  6. Are you installing a new node to an existing cluster?

    • If no, proceed to Step 7.

    • If yes, perform the following steps to create a mount point on the new node for each cluster file system in the cluster.

    1. From another, active node of the cluster, display the names of all cluster file systems.


      % mount | grep global | egrep -v node@ | awk '{print $1}'
      
    2. On the node you are adding to the cluster, create a mount point for each cluster file system in the cluster.


      % mkdir -p mountpoint
      

      For example, if a file system name returned by the mount command was /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the node being added to the cluster.

  7. Install the Solaris operating environment as instructed in the Solaris installation documentation.


    Note -

    You must install all nodes in a cluster with the same version of the Solaris operating environment.


    You can use any method normally used to install the Solaris operating environment on new nodes that are being added to a clustered environment. These methods include the Solaris interactive installation program, Solaris JumpStart, and Solaris Web Start.

    During installation, do the following.

    • Install at least the End User System Support software group. You might need to install other Solaris software packages which are not part of the End User System Support software group, for example, the Apache HTTP server packages. Third-party software, such as Oracle, might also require additional Solaris packages. Refer to third-party documentation for any Solaris software requirements.


      Note -

      Sun Enterprise E10000 servers require the Entire Distribution + OEM software group.


    • Create a file system of at least 100 MBytes with its mount point set as /globaldevices, as well as any file-system partitions needed to support your volume manager software. Refer to "System Disk Partitions" for partitioning guidelines to support Sun Cluster software.


      Note -

      The /globaldevices file system is required for Sun Cluster software installation to succeed.


    • Answer no when asked if you want automatic power-saving shutdown. You must disable automatic shutdown in Sun Cluster configurations. Refer to the pmconfig(1M) and power.conf(4) man pages for more information.

    • For ease of administration, set the same root password on each node.


    Note -

    The Solaris interface groups feature is disabled by default during Solaris software installation. Interface groups are not supported in a Sun Cluster configuration and should not be enabled. Refer to the ifconfig(1M) man page for more information about Solaris interface groups.


  8. Install any Solaris software patches.

    Refer to Sun Cluster 3.0 Release Notes for the location of patches and installation instructions.

  9. Install any hardware-related patches and download any needed firmware contained in the hardware patches.

    Refer to Sun Cluster 3.0 Release Notes for the location of patches and installation instructions.

Where to Go From Here

To install Sun Cluster software on your cluster nodes, go to "How to Install Sun Cluster Software and Establish New Cluster Nodes".

How to Install Sun Cluster Software and Establish New Cluster Nodes

After installing the Solaris operating environment, perform this task on each node of the cluster.


Note -

If you used the scinstall(1M) custom JumpStart method to install software, the Sun Cluster software is already installed. Proceed to "How to Configure the Name Service Switch".


  1. Have available the following completed configuration planning worksheets from Sun Cluster 3.0 Release Notes.

    • "Cluster and Node Names Worksheet"

    • "Cluster Interconnect Worksheet"

    See Chapter 1, Planning the Sun Cluster Configuration for planning guidelines.

  2. Become superuser on the cluster node.

  3. If you are installing from the CD-ROM, insert the Sun Cluster 3.0 CD-ROM into the CD-ROM drive of the node you want to install and configure.

    If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_0 directory.

  4. Change to the /cdrom_image/suncluster_3_0/SunCluster_3.0/Tools directory.


    # cd /cdrom_image/suncluster_3_0/SunCluster_3.0/Tools
    
  5. Start the scinstall(1M) utility.


    # ./scinstall
    

    Follow these guidelines while using the interactive scinstall utility.

    • Interactive scinstall enables you to type ahead. Therefore, do not press Return more than once if the next menu screen does not appear immediately.

    • Unless otherwise noted, pressing Control-D returns you either to the start of a series of related questions or to the Main Menu.

    • Your session answers are stored as defaults for the next time you run this menu option.

    • Until the node has successfully booted in cluster mode, you can rerun scinstall and change the configuration information as needed. However, if bad configuration data for the node has been pushed over to the established portion of the cluster, you might first need to remove the bad information. To do this, log in to one of the active cluster nodes, then use the scsetup(1M) utility to remove the bad adapter, junction, or cable information.

  6. To install the first node and establish the new cluster, type 1 (Establish a new cluster).

    Follow the prompts to install Sun Cluster software, using the information from your configuration planning worksheets. You will be asked for the following information.

    • Cluster name

    • Names of the other nodes that will become part of this cluster

    • Node authentication

    • Private network address and netmask--You cannot change the private network address after the cluster has successfully formed

    • Cluster interconnect (transport adapters and transport junctions)--You can configure no more than two adapters by using the scinstall command, but you can configure more adapters later by using the scsetup utility

    • Global devices file-system name

    • Automatic reboot--Do not choose automatic reboot if you have Sun Cluster software patches to install

    When you finish answering the prompts, the scinstall command generated from your input is displayed for confirmation. If you choose not to accept the command, the scinstall utility returns you to the Main Menu. From there you can rerun menu option 1 and provide different answers. Your previous entries are displayed as the defaults.


    Note -

    Unless you have installed your own /etc/inet/ntp.conf file, the scinstall command installs a default ntp.conf file for you. Because the default file is shipped with references to eight nodes, the xntpd(1M) daemon might issue error messages regarding some of these references at boot time. You can safely ignore these messages. See "How to Update Network Time Protocol (NTP)" for information on how to suppress these messages under otherwise normal cluster conditions.


  7. To install the second node of the cluster, type 2 (Add this machine as a node).

    You can start this step while the first node is still being installed.

    Follow the prompts to install Sun Cluster software, using the information from your configuration planning worksheets. You will be asked for the following information.

    • Name of an existing cluster node, referred to as the sponsor node

    • Cluster name

    • Cluster interconnect (transport adapters and transport junctions)

    • Global devices file-system name

    • Automatic reboot--Do not choose automatic reboot if you have Sun Cluster software patches to install

    When you finish answering the prompts, the scinstall command generated from your input is displayed for confirmation. If you choose not to accept the command, the scinstall utility returns you to the Main Menu. From there you can rerun menu option 2 and provide different answers. Your previous answers are displayed as the defaults.

    If you choose to continue installation and the sponsor node is not yet established, scinstall waits for the sponsor node to become available.

  8. Repeat Step 7 on each additional node until all nodes are fully configured.

    You do not need to wait for the second node to complete installation before beginning installation on additional nodes.

  9. Install any Sun Cluster software patches.

    Refer to Sun Cluster 3.0 Release Notes for the location of patches and installation instructions.

  10. If you installed Sun Cluster software patches, shut down the cluster, then reboot each node in the cluster.

    Before rebooting the first node of the cluster, shut down the cluster by using the scshutdown command. Until the cluster nodes are removed from install mode, only the first node, which establishes the cluster (the sponsor node), has a quorum vote. In an established cluster which is still in install mode, if the cluster is not shut down before the first node is rebooted, the remaining cluster nodes cannot obtain quorum and the entire cluster shuts down.

    Cluster nodes remain in install mode until the first time you run the scsetup(1M) command, during the procedure "How to Perform Post-Installation Setup".
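
    For example, you might shut down the cluster from one node, then boot each node from its ok prompt, by using commands such as the following. See the scshutdown(1M) man page for details about its options.


    # scshutdown -y -g 0
    ok boot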

Example--Installing Sun Cluster Software

The following example shows the progress messages displayed as scinstall installation tasks are completed on the node phys-schost-1, which is the first node to be installed in the cluster.


** Installing SunCluster 3.0 **
        SUNWscr.....done.
        SUNWscdev...done.
        SUNWscu.....done.
        SUNWscman...done.
        SUNWscsal...done.
        SUNWscsam...done.
        SUNWscrsmop.done.
        SUNWsci.....done.
        SUNWscid....done.
        SUNWscidx...done.
        SUNWscvm....done.
        SUNWmdm.....done.
 
Initializing cluster name to "sccluster" ... done
Initializing authentication options ... done
Initializing configuration for adapter "hme2" ... done
Initializing configuration for adapter "hme4" ... done
Initializing configuration for junction "switch1" ... done
Initializing configuration for junction "switch2" ... done
Initializing configuration for cable ... done
Initializing configuration for cable ... done
Setting the node ID for "phys-schost-1" ... done (id=1)
 
Checking for global devices global file system ... done
Checking device to use for global devices file system ... done
Updating vfstab ... done
 
Verifying that NTP is configured ... done
Installing a default NTP configuration ... done
Please complete the NTP configuration after scinstall has finished.
 
Verifying that "cluster" is set for "hosts" in nsswitch.conf ... done
Adding the "cluster" switch to "hosts" in nsswitch.conf ... done
 
Verifying that "cluster" is set for "netmasks" in nsswitch.conf ... done
Adding the "cluster" switch to "netmasks" in nsswitch.conf ... done
 
Verifying that power management is NOT configured ... done
Unconfiguring power management ... done
/etc/power.conf has been renamed to /etc/power.conf.060199105132
Power management is incompatible with the HA goals of the cluster.
Please do not attempt to re-configure power management.
 
Ensure routing is disabled ... done
Network routing has been disabled on this node by creating /etc/notrouter.
Having a cluster node act as a router is not supported by Sun Cluster.
Please do not re-enable network routing.
 
Log file - /var/cluster/logs/install/scinstall.log.276
 
Rebooting ... 

Where to Go From Here

To set up the name service look-up order, go to "How to Configure the Name Service Switch".

How to Use JumpStart to Install the Solaris Operating Environment and Establish New Cluster Nodes

Perform this procedure to use the custom JumpStart installation method. This method installs the Solaris operating environment and Sun Cluster software on all cluster nodes in a single operation.

  1. Ensure that the hardware setup is complete and connections are verified before installing Solaris software.

    Refer to Sun Cluster 3.0 Hardware Guide and your server and storage device documentation for details on setting up the hardware.

  2. On each node of the cluster, determine whether the local-mac-address variable is correctly set to false.


    # /usr/sbin/eeprom local-mac-address?
    
    • If the command returns local-mac-address=false, the variable setting is correct. Proceed to Step 3.

    • If the command returns local-mac-address=true, change the setting to false.


      # /usr/sbin/eeprom local-mac-address?=false
      

      The new setting becomes effective at the next system reboot.

  3. Have available the following information.

    • The Ethernet address of each cluster node

    • The following completed configuration planning worksheets from Sun Cluster 3.0 Release Notes.

      • "Local File System Layout Worksheet"

      • "Cluster and Node Names Worksheet"

      • "Cluster Interconnect Worksheet"

    See Chapter 1, Planning the Sun Cluster Configuration for planning guidelines.

  4. Update naming services.

    Add address-to-name mappings for all public hostnames and logical addresses, as well as the IP address and hostname of the JumpStart server, to any naming services (such as NIS, NIS+, or DNS) used by clients for access to cluster services. See "IP Addresses" for planning guidelines. You also add these addresses to the local /etc/inet/hosts file on each node during the procedure "How to Configure the Name Service Switch".


    Note -

    If you do not use a name service, create jumpstart-dir/autoscinstall.d/nodes/nodename/archive/etc/inet/hosts files on the JumpStart install server, one file for each node of the cluster, where nodename is the name of a node of the cluster. Add the address-to-name mappings there.


  5. As superuser, set up the JumpStart install server for Solaris operating environment installation.

    Refer to the setup_install_server(1M) and add_install_client(1M) man pages and Solaris Advanced Installation Guide for instructions on setting up a JumpStart install server.

    When setting up the install server, ensure that the following requirements are met.

    • The install server is on the same subnet as the cluster nodes, but is not itself a cluster node.

    • The install server installs the release of the Solaris operating environment required by the Sun Cluster software.

    • A custom JumpStart directory exists for JumpStart installation of Sun Cluster. This jumpstart-dir directory must contain a copy of the check(1M) utility and be NFS exported for reading by the cluster nodes (see the example after this list).

    • Each new cluster node is configured as a custom JumpStart install client using the custom JumpStart directory set up for Sun Cluster installation.
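
    For example, to export a custom JumpStart directory for reading, you might add an entry such as the following to the /etc/dfs/dfstab file on the install server, then run the shareall(1M) command. The directory name /export/jumpstart is an example only.


    # vi /etc/dfs/dfstab
    share -F nfs -o ro,anon=0 /export/jumpstart
    # shareall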

  6. (Optional) Create a directory on the JumpStart install server to hold your copies of the Sun Cluster and Sun Cluster data services CD-ROMs.

    In the following example, the /export/suncluster directory is created for this purpose.


    # mkdir -m 755 /export/suncluster
    
  7. Copy the Sun Cluster CD-ROM to the JumpStart install server.

    1. Insert the Sun Cluster 3.0 CD-ROM into the CD-ROM drive on the JumpStart install server.

      If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_0 directory.

    2. Change to the /cdrom_image/suncluster_3_0/SunCluster_3.0/Tools directory.


      # cd /cdrom_image/suncluster_3_0/SunCluster_3.0/Tools
      

    3. Copy the CD-ROM to a new directory on the JumpStart install server.

      The scinstall command creates the new install directory as it copies the CD-ROM files. The install directory name /export/suncluster/sc30 is used here as an example.


      # ./scinstall -a /export/suncluster/sc30
      

    4. Eject the CD-ROM.


      # cd /
      # eject cdrom
      
    5. Ensure that the Sun Cluster 3.0 CD-ROM image on the JumpStart install server is NFS exported for reading by the cluster nodes.

      Refer to NFS Administration Guide and the share(1M) and dfstab(4) man pages for more information about automatic file sharing.
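
      For example, you might add an entry such as the following to the /etc/dfs/dfstab file on the install server, then run the shareall(1M) command. The path shown matches the example install directory used in this procedure.


      # vi /etc/dfs/dfstab
      share -F nfs -o ro,anon=0 /export/suncluster/sc30
      # shareall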

  8. From the JumpStart install server, start the scinstall(1M) utility.

    The path /export/suncluster/sc30 is used here as an example of the install directory you created.


    # cd /export/suncluster/sc30/SunCluster_3.0/Tools
    # ./scinstall
    

    Follow these guidelines while using the interactive scinstall utility.

    • Interactive scinstall enables you to type ahead. Therefore, do not press Return more than once if the next menu screen does not appear immediately.

    • Unless otherwise noted, pressing Control-D returns you either to the start of a series of related questions or to the Main Menu.

    • Your session answers are stored as defaults for the next time you run this menu option.

  9. To choose JumpStart installation, type 3 (Configure a cluster to be JumpStarted from this install server).


    Note -

    If option 3 is not preceded by an asterisk, the option is disabled because JumpStart setup is not complete or contains an error. Exit the scinstall utility, correct the JumpStart setup, then restart the scinstall utility.


    Follow the prompts to specify Sun Cluster configuration information.

    • JumpStart directory name

    • Cluster name

    • Cluster node names

    • Node authentication

    • Private network address and netmask--You cannot change the private network address after the cluster has successfully formed

    • Cluster interconnect (transport adapters and transport junctions)--You can configure no more than two adapters by using the scinstall command, but you can configure additional adapters later by using the scsetup utility

    • Global devices file-system name

    • Automatic reboot--Do not choose automatic reboot if you have Sun Cluster software patches to install

    When finished, the scinstall commands generated from your input are displayed for confirmation. If you choose not to accept one of them, the scinstall utility returns you to the Main Menu. From there you can rerun menu option 3 and provide different answers. Your previous entries are displayed as the defaults.

  10. If necessary, make adjustments to the default class file, or profile, created by scinstall.

    The scinstall command creates the following autoscinstall.class default class file in the jumpstart-dir/autoscinstall.d/3.0 directory.


    install_type    initial_install
    system_type     standalone
    partitioning    explicit
    filesys         rootdisk.s0 free /
    filesys         rootdisk.s1 750 swap
    filesys         rootdisk.s3 100  /globaldevices
    filesys         rootdisk.s7 10
    cluster         SUNWCuser       add
    package         SUNWman         add


    Note -

    The default class file installs the End User System Support software group (SUNWCuser) of Solaris software. For Sun Enterprise E10000 servers, you must install the Entire Distribution + OEM software group. Also, some third-party software, such as Oracle, might require additional Solaris packages. Refer to third-party documentation for any Solaris software requirements.


    You can change the profile in one of the following ways.

    • Edit the autoscinstall.class file directly. These changes are applied to all nodes in all clusters that use this custom JumpStart directory.

    • Update the rules file to point to other profiles, then run the check utility to validate the rules file.

    As long as minimum file-system allocation requirements are met, no restrictions are imposed on changes to the Solaris operating environment install profile. Refer to "System Disk Partitions" for partitioning guidelines and requirements to support Sun Cluster 3.0 software.
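
    For example, a rules file entry that points one node at your own profile, followed by a validation run of the check utility, might look like the following. The profile name myprofile.class and the finish script name myfinish are placeholders only; retain whatever begin and finish scripts your JumpStart setup requires.


    # cd jumpstart-dir
    # vi rules
    hostname phys-schost-1 - myprofile.class myfinish
    # ./check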

  11. Are you installing a new node to an existing cluster?

    • If no, proceed to Step 12.

    • If yes, perform the following steps to create a mount point on the new node for each cluster file system in the cluster.

    1. From another, active node of the cluster, display the names of all cluster file systems.


      % mount | grep global | egrep -v node@ | awk '{print $1}'
      
    2. On the node you are adding to the cluster, create a mount point for each cluster file system in the cluster.


      % mkdir -p mountpoint
      

      For example, if a file system name returned by the mount command is /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the node being added to the cluster.

  12. Set up Solaris patch directories.

    1. Create jumpstart-dir/autoscinstall.d/nodes/nodename/patches directories on the JumpStart install server, one directory for each node in the cluster, where nodename is the name of a cluster node.


      # mkdir jumpstart-dir/autoscinstall.d/nodes/nodename/patches
      
    2. Place copies of any Solaris patches into each of these directories. Also place copies of any hardware-related patches that must be installed after Solaris software is installed into each of these directories.
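
      For example, you might copy downloaded patches into the directory for the node phys-schost-1 as follows. The source directory /var/tmp/patches is an example only.


      # cp -r /var/tmp/patches/* jumpstart-dir/autoscinstall.d/nodes/phys-schost-1/patches/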

  13. If you do not use a name service, set up files to contain the necessary hostname information.

    1. On the JumpStart install server, create files named jumpstart-dir/autoscinstall.d/nodes/nodename/archive/etc/inet/hosts.

      Create one file for each node, where nodename is the name of a cluster node.

    2. Add the following entries into each file.

      • IP address and hostname of the NFS server that holds a copy of the Sun Cluster CD-ROM image. This could be the JumpStart install server or another machine.

      • IP address and hostname of each node in the cluster.
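
      For example, a file for a two-node cluster might contain entries such as the following. All names and addresses shown are examples only.


      192.168.100.10    install-server
      192.168.100.101   phys-schost-1
      192.168.100.102   phys-schost-2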

  14. (Optional) Add your own post-installation finish script.

    You can add your own finish script, which is run after the standard finish script installed by the scinstall command.

    1. Name your finish script finish.

    2. Copy your finish script to the jumpstart-dir/autoscinstall.d/nodes/nodename directory, one directory for each node in the cluster.
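
      For example, a minimal finish script might add one extra software package to each newly installed node. The package name EXTRApkg and the package directory are placeholders only; during JumpStart finish processing, the installed system is mounted at /a.


      #!/bin/sh
      # Example only: install one additional package onto the newly installed
      # system, which is mounted at /a during JumpStart finish processing.
      pkgadd -n -R /a -d /jumpstart/extra-packages EXTRApkg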

  15. If you are using an administrative console, display a console screen for each node in the cluster.

    If cconsole(1M) is installed and configured on your administrative console, you can use it to display the individual console screens. Otherwise, you must connect to the consoles of each node individually.

  16. From the ok PROM prompt on the console of each node, type the boot net - install command to begin the network JumpStart installation of each node.


    Note -

    The dash (-) in the command must be surrounded by a space on each side.



    ok boot net - install
    

    Note -

    Unless you have installed your own ntp.conf file in the /etc/inet directory, the scinstall command installs a default ntp.conf file for you. Because the default file is shipped with references to eight nodes, the xntpd(1M) daemon might issue error messages regarding some of these references at boot time. You can safely ignore these messages. See "How to Update Network Time Protocol (NTP)" for information on how to suppress these messages under otherwise normal cluster conditions.


    When the installation is successfully completed, each node is fully installed as a new cluster node.


    Note -

    The Solaris interface groups feature is disabled by default during Solaris software installation. Interface groups are not supported in a Sun Cluster configuration and should not be enabled. Refer to the ifconfig(1M) man page for more information about Solaris interface groups.


  17. Install any Sun Cluster software patches.

    Refer to Sun Cluster 3.0 Release Notes for the location of patches and installation instructions.

  18. If you installed Sun Cluster software patches, shut down the cluster, then reboot each node in the cluster.

    Before rebooting the first node of the cluster, shut down the cluster by using the scshutdown command. Until the cluster nodes are removed from install mode, only the first node, which establishes the cluster (the sponsor node), has a quorum vote. In an established cluster which is still in install mode, if the cluster is not shut down before the first node is rebooted, the remaining cluster nodes cannot obtain quorum and the entire cluster shuts down.

    Cluster nodes remain in install mode until the first time you run the scsetup(1M) command, during the procedure "How to Perform Post-Installation Setup".

Where to Go From Here

To set up the name service look-up order, go to "How to Configure the Name Service Switch".

How to Configure the Name Service Switch

Perform this task on each node in the cluster.

  1. Become superuser on the cluster node.

  2. Edit the /etc/nsswitch.conf file.

    1. Verify that cluster is the first source look-up for the hosts and netmasks database entries.

      This order is necessary for Sun Cluster software to function properly. The scinstall(1M) command adds cluster to these entries during installation.

    2. (Optional) For the hosts and netmasks database entries, follow cluster with files.

    3. (Optional) For all other database entries, place files first in look-up order.


    Note -

    Performing Step b and Step c can increase the availability of data services if the naming service becomes unavailable.


    The following example shows partial contents of an /etc/nsswitch.conf file. The look-up order for the hosts and netmasks database entries is first cluster, then files. The look-up order for other entries begins with files.


    # vi /etc/nsswitch.conf
    ...
    passwd:     files nis
    group:      files nis
    ...
    hosts:      cluster files nis
    ...
    netmasks:   cluster files nis
    ...

  3. Update the /etc/inet/hosts file with all public hostnames and logical addresses for the cluster.

Where to Go From Here

To install Solstice DiskSuite volume manager software, go to "How to Install Solstice DiskSuite Software". To install VERITAS Volume Manager volume manager software, go to "How to Install VERITAS Volume Manager Software".

How to Install Solstice DiskSuite Software

Perform this task on each node in the cluster.

  1. Become superuser on the cluster node.

  2. If you are installing from the CD-ROM, insert the Solaris 8 Software 2 of 2 CD-ROM into the CD-ROM drive on the node.


    Note -

    Solstice DiskSuite software packages are now located on the Solaris 8 software CD-ROM.


    This step assumes that the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices.

  3. Install the Solstice DiskSuite software packages.


    Note -

    If you have Solstice DiskSuite software patches to install, do not reboot after installing the Solstice DiskSuite software.


    Install software packages in the order shown in the following example.


    # cd /cdrom_image/sol_8_sparc_2/Solaris_8/EA/products/DiskSuite_4.2.1/sparc/Packages
    # pkgadd -d . SUNWmdr SUNWmdu [SUNWmdx] optional-pkgs
    

    The SUNWmdr and SUNWmdu packages are required for all Solstice DiskSuite installations. The SUNWmdx package is also required for the 64-bit Solstice DiskSuite installation. Refer to your Solstice DiskSuite installation documentation for information about optional software packages.

  4. If you installed from a CD-ROM, eject the CD-ROM.

  5. If not already installed, install any Solstice DiskSuite patches.

    Refer to Sun Cluster 3.0 Release Notes for the location of patches and installation instructions.

  6. Manually populate the global device namespace for Solstice DiskSuite by running the /usr/cluster/bin/scgdevs command.
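
    # /usr/cluster/bin/scgdevs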

  7. If you installed Solstice DiskSuite software patches, shut down the cluster, then reboot each node in the cluster.

    Before rebooting the first node of the cluster, shut down the cluster by using the scshutdown command. Until the cluster nodes are removed from install mode, only the first node, which establishes the cluster (the sponsor node), has a quorum vote. In an established cluster which is still in install mode, if the cluster is not shut down before the first node is rebooted, the remaining cluster nodes cannot obtain quorum and the entire cluster shuts down.

    Cluster nodes remain in install mode until the first time you run the scsetup(1M) command, during the procedure "How to Perform Post-Installation Setup".

Refer to your Solstice DiskSuite installation documentation for complete information about installing Solstice DiskSuite software.

Where to Go From Here

To set up your root user's environment, go to "How to Set Up the Root User's Environment".

How to Install VERITAS Volume Manager Software

Perform this task on each node in the cluster.

  1. Become superuser on the cluster node.

  2. Disable Dynamic Multipathing (DMP).


    # mkdir /dev/vx
    # ln -s /dev/dsk /dev/vx/dmp
    # ln -s /dev/rdsk /dev/vx/rdmp
    
  3. Insert the VxVM CD-ROM into the CD-ROM drive on the node.

  4. Install the VxVM software packages.


    Note -

    If you have VxVM software patches to install, do not reboot after installing the VxVM software.



    # cd /cdrom_image/volume_manager_3_0_4_solaris/pkgs
    # pkgadd -d . VRTSvxvm VRTSvmdev VRTSvmman
    

    List VRTSvxvm first in the pkgadd(1M) command and VRTSvmdev second. Refer to your VxVM installation documentation for descriptions of the other VxVM software packages.


    Note -

    The VRTSvxvm and VRTSvmdev packages are required for all VxVM installations.


  5. Eject the CD-ROM.

  6. Install any VxVM patches.

    Refer to Sun Cluster 3.0 Release Notes for the location of patches and installation instructions.

  7. If you installed VxVM software patches, shut down the cluster, then reboot each node in the cluster.

    Before rebooting the first node of the cluster, shut down the cluster by using the scshutdown command. Until the cluster nodes are removed from install mode, only the first node, which establishes the cluster (the sponsor node), has a quorum vote. In an established cluster which is still in install mode, if the cluster is not shut down before the first node is rebooted, the remaining cluster nodes cannot obtain quorum and the entire cluster shuts down.

    Cluster nodes remain in install mode until the first time you run the scsetup(1M) command, during the procedure "How to Perform Post-Installation Setup".

Refer to your VxVM installation documentation for complete information about installing VxVM software.

Where to Go From Here

To set up your root user's environment, go to "How to Set Up the Root User's Environment".

How to Set Up the Root User's Environment

Perform these tasks on each node in the cluster.

  1. Become superuser on the cluster node.

  2. Set the PATH to include /usr/sbin and /usr/cluster/bin.

    For VERITAS Volume Manager, also set your PATH to include /etc/vx/bin. If you installed the VRTSvmsa package, also add /opt/VRTSvmsa/bin to your PATH.

  3. Set the MANPATH to include /usr/cluster/man. Also include the volume manager-specific paths.

    • For Solstice DiskSuite software, set your MANPATH to include /usr/share/man.

    • For VERITAS Volume Manager, set your MANPATH to include /opt/VRTSvxvm/man. If you installed the VRTSvmsa package, also add /opt/VRTSvmsa/man to your MANPATH.
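
    For example, if the root user uses a Bourne-style shell, you might add entries such as the following to the /.profile file on each node. This is an example only; adjust it for your shell and volume manager, and include the VERITAS paths only if the corresponding packages are installed.


    # vi /.profile
    PATH=$PATH:/usr/sbin:/usr/cluster/bin:/etc/vx/bin:/opt/VRTSvmsa/bin
    MANPATH=$MANPATH:/usr/cluster/man:/opt/VRTSvxvm/man:/opt/VRTSvmsa/man
    export PATH MANPATH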

  4. (Optional) For ease of administration, set the same root password on each node, if you have not already done so.

Where to Go From Here

To install data service software packages, go to "How to Install Data Service Software Packages".

How to Install Data Service Software Packages

Perform this task on each cluster node.


Note -

You must install the same set of data service packages on each node, even if a node is not expected to host resources for an installed data service.


  1. Become superuser on the cluster node.

  2. If you are installing from the CD-ROM, insert the Data Services CD-ROM into the CD-ROM drive on the node.

  3. Start the scinstall(1M) utility.


    # scinstall
    

    Follow these guidelines while using the interactive scinstall utility.

    • Interactive scinstall enables you to type ahead. Therefore, do not press Return more than once if the next menu screen does not appear immediately.

    • Unless otherwise noted, pressing Control-D returns you either to the start of a series of related questions or to the Main Menu.

  4. To add data services, type 4 (Add support for a new data service to this cluster node).

    Follow the prompts to select all data services you want to install.

  5. If you installed from a CD-ROM, eject the CD-ROM.

  6. Install any Sun Cluster data service patches.

    Refer to Sun Cluster 3.0 Release Notes for the location of patches and installation instructions.


    Note -

    You do not have to reboot after installing Sun Cluster data service patches, unless specified by the patch special instructions. If a patch instruction requires that you reboot, before rebooting the first node of the cluster, shut down the cluster by using the scshutdown command. Until the cluster nodes are removed from install mode, only the first node, which establishes the cluster (the sponsor node), has a quorum vote. In an established cluster which is still in install mode, if the cluster is not shut down before the first node is rebooted, the remaining cluster nodes cannot obtain quorum and the entire cluster shuts down. Cluster nodes remain in install mode until the first time you run the scsetup(1M) command, during the procedure "How to Perform Post-Installation Setup".


Where to Go From Here

For post-installation setup and configuration tasks, see "Configuring the Cluster".