Sun Cluster Software Installation Guide for Solaris OS

How to Install Cluster Control Panel Software on an Administrative Console


Note –

You are not required to use an administrative console. If you do not use an administrative console, perform administrative tasks from one designated node in the cluster.


This procedure describes how to install the Cluster Control Panel (CCP) software on an administrative console. The CCP provides a launchpad for the cconsole(1M), ctelnet(1M), and crlogin(1M) tools. Each of these tools provides a multiple-window connection to a set of nodes, as well as a common window. You can use the common window to send input to all nodes at one time.

You can use any desktop machine that runs the Solaris 8 or Solaris 9 operating environment as an administrative console. You can also use the administrative console as a documentation server. If you are using Sun Cluster on a SPARC based system, you can use the administrative console as a Sun Management Center console or server as well. See Sun Management Center documentation for information about how to install Sun Management Center software. See the Sun Cluster Release Notes for Solaris OS for additional information about how to install Sun Cluster documentation.

  1. Become superuser on the administrative console.

  2. Ensure that a supported version of the Solaris operating environment and any Solaris patches are installed on the administrative console.

    All platforms require at least the End User Solaris Software Group.
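As a quick first check, uname(1) reports the operating environment release from a shell on the administrative console. This sketch only inspects the release string; it does not verify installed patches or the End User Solaris Software Group:

```shell
#!/bin/sh
# Solaris 8 reports release 5.8 and Solaris 9 reports release 5.9.
release=`uname -r`
case $release in
    5.8|5.9) echo "release $release is a supported Solaris version" ;;
    *)       echo "release $release: verify Sun Cluster support manually" ;;
esac
```

Patch levels and installed software groups still must be confirmed against the Sun Cluster release documentation.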

  3. Insert the Sun Java Enterprise System 2004Q2 2 of 2 CD-ROM into the CD-ROM drive of the administrative console.

    If the volume management daemon vold(1M) is running and configured to manage CD-ROM devices, the daemon automatically mounts the CD-ROM on the /cdrom/cdrom0 directory.

  4. From the /cdrom/cdrom0 directory, change to the Solaris_arch/Product/sun_cluster/Solaris_ver/Packages directory, where arch is sparc or x86, and where ver is 8 (for Solaris 8) or 9 (for Solaris 9).


    # cd Solaris_arch/Product/sun_cluster/Solaris_ver/Packages
    

  5. Install the SUNWccon package.


    # pkgadd -d . SUNWccon
    

  6. (Optional) Install the SUNWscman package.


    # pkgadd -d . SUNWscman
    

    When you install the SUNWscman package on the administrative console, you can view Sun Cluster man pages from the administrative console before you install Sun Cluster software on the cluster nodes.

  7. (Optional) Install the Sun Cluster documentation packages.

    If you do not install the documentation on your administrative console, you can still view HTML or PDF documentation directly from the CD-ROM.

    1. Start the pkgadd utility in interactive mode.


      # pkgadd -d .
      

    2. Select the Documentation Navigation for Solaris 9 package, if it has not already been installed on the administrative console.

    3. Select the Sun Cluster documentation packages to install.

      The following documentation collections are available in both HTML and PDF format:

      • Sun Cluster 3.1 4/04 Software Collection for Solaris OS (SPARC Platform Edition)

      • Sun Cluster 3.1 4/04 Software Collection for Solaris OS (x86 Platform Edition)

      • Sun Cluster 3.x Hardware Collection for Solaris OS (SPARC Platform Edition)

      • Sun Cluster 3.x Hardware Collection for Solaris OS (x86 Platform Edition)

      • Sun Cluster 3.1 4/04 Reference Collection for Solaris OS

    4. Follow onscreen instructions to continue package installation.

  8. Unload the Sun Java Enterprise System 2004Q2 2 of 2 CD-ROM from the CD-ROM drive.

    1. To ensure that the CD-ROM is not being used, change to a directory that does not reside on the CD-ROM.

    2. Eject the CD-ROM.


      # eject cdrom
      
  9. Create an /etc/clusters file on the administrative console.

    Add your cluster name and the physical node name of each cluster node to the file.


    # vi /etc/clusters
    clustername node1 node2
    

    See the /opt/SUNWcluster/bin/clusters(4) man page for details.

  10. Create an /etc/serialports file.

    Add an entry for each node in the cluster to the file. Specify the physical node name, the hostname of the console-access device, and the port number. Examples of a console-access device are a terminal concentrator (TC), a System Service Processor (SSP), and a Sun Fire system controller.


    # vi /etc/serialports
    node1 ca-dev-hostname port
    node2 ca-dev-hostname port
    
    node1, node2

    Physical names of the cluster nodes

    ca-dev-hostname

    Hostname of the console-access device

    port

    Serial port number

    Note the following special instructions when you create an /etc/serialports file:

    • For a Sun Fire 15000 system controller, use telnet(1) port number 23 for the serial port number of each entry.

    • For all other console-access devices, use the telnet serial port number, not the physical port number. To determine the telnet serial port number, add 5000 to the physical port number. For example, if a physical port number is 6, the telnet serial port number is 5006.

    • For Sun Enterprise 10000 servers, also see the /opt/SUNWcluster/bin/serialports(4) man page for details and special considerations.
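    The add-5000 rule for terminal concentrators can be sketched in shell. Here physical_port is a placeholder for your console-access device's actual port number:

```shell
#!/bin/sh
# For a terminal concentrator, the telnet serial port number is the
# physical port number plus 5000 (e.g., physical port 6 -> 5006).
physical_port=6
telnet_port=$((physical_port + 5000))
echo "physical port $physical_port -> telnet serial port $telnet_port"
```

    The computed number is what belongs in the port column of /etc/serialports; remember that a Sun Fire 15000 system controller instead always uses telnet port 23.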

  11. (Optional) For convenience, set the directory paths on the administrative console.

    • Add the /opt/SUNWcluster/bin directory to the PATH.

    • Add the /opt/SUNWcluster/man directory to the MANPATH.

    • If you installed the SUNWscman package, also add the /usr/cluster/man directory to the MANPATH.
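    For a Bourne-style shell, the path additions in this step might look like the following sketch; adjust the syntax for csh-style shells, and omit /usr/cluster/man if you did not install the SUNWscman package:

```shell
#!/bin/sh
# Add the CCP tools to the command search path.
PATH=$PATH:/opt/SUNWcluster/bin
export PATH

# Add the CCP man pages and, if SUNWscman is installed,
# the Sun Cluster man pages.
MANPATH=$MANPATH:/opt/SUNWcluster/man:/usr/cluster/man
export MANPATH
```

    Place these lines in the shell's initialization file, such as .profile, so that the settings persist across login sessions.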

  12. Start the CCP utility.


    # /opt/SUNWcluster/bin/ccp &
    

    Click the cconsole, crlogin, or ctelnet button in the CCP window to launch that tool. Alternatively, you can start any of these tools directly. For example, to start ctelnet, type the following command:


    # /opt/SUNWcluster/bin/ctelnet &
    

    See the procedure “How to Remotely Log In to Sun Cluster” in “Beginning to Administer the Cluster” in Sun Cluster System Administration Guide for Solaris OS for additional information about how to use the CCP utility. Also see the ccp(1M) man page.

  13. Is the Solaris operating environment already installed on each cluster node to meet Sun Cluster software requirements?