This procedure describes how to set up and use the scinstall(1M) custom JumpStart installation method. This method installs both Solaris and Sun Cluster software on all cluster nodes in a single operation and establishes the cluster. You can also use this procedure to add new nodes to an existing cluster.
Ensure that the hardware setup is complete and that connections are verified before you install Solaris software.
See the Sun Cluster 3.1 Hardware Administration Collection and your server and storage device documentation for details on how to set up the hardware.
Ensure that your cluster configuration planning is complete.
See How to Prepare for Cluster Software Installation for requirements and guidelines.
Have available the following information:
The Ethernet address of each cluster node
The following completed configuration planning worksheets:
See Planning the Solaris Operating Environment and Planning the Sun Cluster Environment for planning guidelines.
Do you use a naming service?
If no, proceed to Step 5. You set up the necessary hostname information in Step 30.
If yes, add the following information to any naming services that clients use to access cluster services:
Address-to-name mappings for all public hostnames and logical addresses
The IP address and hostname of the JumpStart server
See IP Addresses for planning guidelines. See your Solaris system-administrator documentation for information about using Solaris naming services.
Are you adding a new node to an existing cluster?
If no, proceed to Step 6.
If yes, run scsetup(1M) from another cluster node that is active and add the new node's name to the list of authorized cluster nodes. For more information, see “How to Add a Cluster Node to the Authorized Node List” in “Adding and Removing a Cluster Node” in Sun Cluster 3.1 10/03 System Administration Guide.
As superuser, set up the JumpStart installation server for Solaris operating-environment installation.
See “Preparing Custom JumpStart Installations” in Solaris 8 Advanced Installation Guide or “Preparing Custom JumpStart Installations (Tasks)” in Solaris 9 Installation Guide for instructions on how to set up a JumpStart installation server. See also the setup_install_server(1M) and add_install_client(1M) man pages.
When you set up the installation server, ensure that the following requirements are met.
The installation server is on the same subnet as the cluster nodes but is not itself a cluster node.
The installation server installs the release of the Solaris operating environment required by the Sun Cluster software.
A custom JumpStart directory exists for JumpStart installation of Sun Cluster. This jumpstart-dir directory must contain a copy of the check(1M) utility and be NFS exported so that the JumpStart install clients can read it.
Each new cluster node is configured as a custom JumpStart install client that uses the custom JumpStart directory set up for Sun Cluster installation.
Create a directory on the JumpStart installation server to hold your copy of the Sun Cluster 3.1 10/03 CD-ROM. Skip this step if a directory already exists.
In the following example, the /export/suncluster directory is created for this purpose.
# mkdir -m 755 /export/suncluster
Copy the Sun Cluster CD-ROM to the JumpStart installation server.
Insert the Sun Cluster 3.1 10/03 CD-ROM into the CD-ROM drive on the JumpStart installation server.
If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_1_u1 directory.
Change to the /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_ver/Tools directory, where ver is 8 (for Solaris 8) or 9 (for Solaris 9).
The following example uses the path to the Solaris 8 version of Sun Cluster software.
# cd /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_8/Tools
Copy the CD-ROM to a new directory on the JumpStart installation server.
The scinstall command creates the new installation directory when the command copies the CD-ROM files. The installation directory name /export/suncluster/sc31 is used here as an example.
# ./scinstall -a /export/suncluster/sc31
Eject the CD-ROM.
# cd /
# eject cdrom
Ensure that the Sun Cluster 3.1 10/03 CD-ROM image on the JumpStart installation server is NFS exported so that the JumpStart install clients can read it.
See “Solaris NFS Environment” in System Administration Guide, Volume 3 or “Managing Network File Systems (Overview)” in System Administration Guide: Resource Management and Network Services for more information about automatic file sharing. See also the share(1M) and dfstab(4) man pages.
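As one hedged sketch of the export, an /etc/dfs/dfstab entry on the installation server might look like the following; the path and description are examples only, and the entry takes effect after you run shareall(1M):

```
# /etc/dfs/dfstab on the JumpStart installation server (example path)
# Export the Sun Cluster CD-ROM image read-only for the install clients.
share -F nfs -o ro -d "Sun Cluster 3.1 CD-ROM image" /export/suncluster/sc31
```

After adding the entry, run the shareall command (or share the path directly) and verify the export with the share command.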
From the JumpStart installation server, start the scinstall(1M) utility.
The path /export/suncluster/sc31 is used here as an example of the installation directory that you created.
# cd /export/suncluster/sc31/SunCluster_3.1/Sol_ver/Tools
# ./scinstall
In the CD-ROM path, replace ver with 8 (for Solaris 8) or 9 (for Solaris 9).
Follow these guidelines to use the interactive scinstall utility.
Interactive scinstall enables you to type ahead. Therefore, do not press Return more than once if the next menu screen does not appear immediately.
Unless otherwise noted, you can press Control-D to return either to the start of a series of related questions or to the Main Menu.
From the Main Menu, type 2 (Configure a cluster to be JumpStarted from this installation server).
This option is used to configure custom JumpStart finish scripts. JumpStart uses these finish scripts to install the Sun Cluster software.
 *** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Install a cluster or cluster node
      * 2) Configure a cluster to be JumpStarted from this install server
        3) Add support for new data services to this cluster node
      * 4) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  2

 *** Custom JumpStart ***
…
    Do you want to continue (yes/no) [yes]?
If option 2 does not have an asterisk in front, the option is disabled. This condition indicates that JumpStart setup is not complete or the setup has an error. Exit the scinstall utility, repeat Step 6 through Step 8 to correct JumpStart setup, then restart the scinstall utility.
Specify the JumpStart directory name.
The JumpStart directory name /export/suncluster/sc31 is used here as an example.
>>> Custom JumpStart Directory <<<
…
    What is your JumpStart directory name?  /export/suncluster/sc31
Specify the name of the cluster.
>>> Cluster Name <<<
…
    What is the name of the cluster you want to establish?  clustername
Specify the names of all cluster nodes.
>>> Cluster Nodes <<<
…
    Please list the names of all cluster nodes planned for the initial
    cluster configuration. You must enter at least two nodes. List one
    node name per line. When finished, type Control-D:

    Node name:  node1
    Node name:  node2
    Node name (Ctrl-D to finish):  <Control-D>

    This is the complete list of nodes:
…
    Is it correct (yes/no) [yes]?
Specify whether to use Data Encryption Standard (DES) authentication.
DES authentication provides an additional level of security at installation time. DES authentication enables the sponsoring node to authenticate nodes that attempt to contact the sponsoring node to update the cluster configuration.
If you choose to use DES authentication for additional security, you must configure all necessary encryption keys before any node can join the cluster. See the keyserv(1M) and publickey(4) man pages for details.
>>> Authenticating Requests to Add Nodes <<<
…
    Do you need to use DES authentication (yes/no) [no]?
Specify the private network address and netmask.
>>> Network Address for the Cluster Transport <<<
…
    Is it okay to accept the default network address (yes/no) [yes]?
    Is it okay to accept the default netmask (yes/no) [yes]?
You cannot change the private network address after the cluster is successfully formed.
Specify whether the cluster uses transport junctions.
If this is a two-node cluster, specify whether you intend to use transport junctions.
>>> Point-to-Point Cables <<<
…
    Does this two-node cluster use transport junctions (yes/no) [yes]?
You can specify that the cluster uses transport junctions, regardless of whether the nodes are directly connected to each other. If you specify that the cluster uses transport junctions, you can more easily add new nodes to the cluster in the future.
If this cluster has three or more nodes, you must use transport junctions. Press Return to continue to the next screen.
>>> Point-to-Point Cables <<<
…
    Since this is not a two-node cluster, you will be asked to configure
    two transport junctions.

    Hit ENTER to continue:
Does this cluster use transport junctions?
If no, proceed to Step 18.
If yes, specify names for the transport junctions. You can use the default names switchN or create your own names.
>>> Cluster Transport Junctions <<<
…
    What is the name of the first junction in the cluster [switch1]?
    What is the name of the second junction in the cluster [switch2]?
Specify the first cluster-interconnect transport adapter of the first node.
>>> Cluster Transport Adapters and Cables <<<
…
    For node "node1",
        What is the name of the first cluster transport adapter?  adapter
Specify the connection endpoint of the first adapter.
If the cluster does not use transport junctions, specify the name of the adapter on the second node to which this adapter connects.
…
    Name of adapter on "node2" to which "adapter" is connected?  adapter
If the cluster uses transport junctions, specify the name of the first transport junction and its port.
…
    For node "node1",
        Name of the junction to which "adapter" is connected?  switch
…
    For node "node1",
        Use the default port name for the "adapter" connection (yes/no) [yes]?
If your configuration uses SCI-PCI adapters, do not accept the default when you are prompted for the adapter connection (the port name). Instead, provide the number of the port (0, 1, 2, or 3) on the SCI Dolphin switch itself to which the node is physically cabled. The following example shows the prompts and responses for declining the default port name and specifying switch port 0.
…
    Use the default port name for the "adapter" connection (yes/no) [yes]?  n
    What is the name of the port you want to use?  0
Specify the second cluster-interconnect transport adapter of the first node.
…
    For node "node1",
        What is the name of the second cluster transport adapter?  adapter
Specify the connection endpoint of the second adapter.
If the cluster does not use transport junctions, specify the name of the adapter on the second node to which this adapter connects.
…
    Name of adapter on "node2" to which "adapter" is connected?  adapter
If the cluster uses transport junctions, specify the name of the second transport junction and its port.
…
    For node "node1",
        Name of the junction to which "adapter" is connected?  switch
…
    For node "node1",
        Use the default port name for the "adapter" connection (yes/no) [yes]?
If your configuration uses SCI-PCI adapters, do not accept the default when you are prompted for the adapter connection (the port name). Instead, provide the number of the port (0, 1, 2, or 3) on the SCI Dolphin switch itself to which the node is physically cabled. The following example shows the prompts and responses for declining the default port name and specifying switch port 0.
…
    Use the default port name for the "adapter" connection (yes/no) [yes]?  n
    What is the name of the port you want to use?  0
Does this cluster use transport junctions?
Specify the global-devices file-system name for each cluster node.
>>> Global Devices File System <<<
…
    The default is to use /globaldevices.

    For node "node1",
        Is it okay to use this default (yes/no) [yes]?
    For node "node2",
        Is it okay to use this default (yes/no) [yes]?
Confirm that the scinstall utility should install patches.
See “Patches and Required Firmware Levels” in Sun Cluster 3.1 Release Notes for the location of patches and installation instructions.
If you specify a patch directory for the scinstall command, then the patches in the Solaris patch directories that you set up in Step 29 are not installed.
>>> Software Patch Installation <<<
…
    Do you want scinstall to install patches for you (yes/no) [yes]?  y
    What is the name of the patch directory?  /export/suncluster/sc31/patches
    Do you want scinstall to use a patch list file (yes/no) [no]?  n
…
Accept or decline the generated scinstall commands.
The scinstall command that is generated from your input is displayed for confirmation.
>>> Confirmation <<<

    Your responses indicate the following options to scinstall:
-----------------------------------------
    For node "node1",
        scinstall -c jumpstart-dir -h node1 \
…
    Are these the options you want to use (yes/no) [yes]?
-----------------------------------------
    For node "node2",
        scinstall -c jumpstart-dir -h node2 \
…
    Are these the options you want to use (yes/no) [yes]?
-----------------------------------------
    Do you want to continue with JumpStart set up (yes/no) [yes]?
If you do not accept the generated commands, the scinstall utility returns you to the Main Menu. You can then rerun menu option 2 and provide different answers. Your previous answers display as the defaults.
If necessary, make adjustments to the default class file, or profile, created by scinstall.
The scinstall command creates the following autoscinstall.class default class file in the jumpstart-dir/autoscinstall.d/3.1 directory.
install_type    initial_install
system_type     standalone
partitioning    explicit
filesys         rootdisk.s0 free /
filesys         rootdisk.s1 750 swap
filesys         rootdisk.s3 512 /globaldevices
filesys         rootdisk.s7 20
cluster         SUNWCuser add
package         SUNWman add
The default class file installs the End User System Support software group (SUNWCuser) of Solaris software. If your configuration has additional Solaris software requirements, change the class file accordingly. See Solaris Software Group Considerations for more information.
You can change the profile in one of the following ways:
Edit the autoscinstall.class file directly. These changes are applied to all nodes in all clusters that use this custom JumpStart directory.
Update the rules file to point to other profiles, then run the check utility to validate the rules file.
If the Solaris operating-environment install profile meets minimum Sun Cluster file-system allocation requirements, no restrictions are placed on other changes to the install profile. See System Disk Partitions for partitioning guidelines and requirements to support Sun Cluster 3.1 software. For more information about JumpStart profiles, see “Preparing Custom JumpStart Installations” in Solaris 8 Advanced Installation Guide or “Preparing Custom JumpStart Installations (Tasks)” in Solaris 9 Installation Guide.
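As a hedged illustration of the rules-file approach, a custom JumpStart rules entry uses the fields rule_keyword, rule_value, begin script, profile, and finish script, with a dash for fields that are not used. The node and profile names below are hypothetical:

```
# jumpstart-dir/rules (example entries; node and profile names are hypothetical)
hostname node1    -    node1.profile    -
hostname node2    -    node2.profile    -
```

After editing, validate the rules file from the custom JumpStart directory with the check utility, for example:

```
# cd jumpstart-dir
# ./check
```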
Do you intend to use the Remote Shared Memory Application Programming Interface (RSMAPI) or use SCI-PCI adapters for the interconnect transport?
If no, proceed to Step 28.
If yes and you install the End User System Support software group, add the following entries to the default class file as described in Step 26.
package SUNWrsm add
package SUNWrsmx add
package SUNWrsmo add
package SUNWrsmox add
In addition, you must create or modify a postinstallation finish script in Step 32 to install the Sun Cluster packages that support the RSMAPI and SCI-PCI adapters.
If you install a higher software group than End User System Support, the RSMAPI software packages are automatically installed with the Solaris software. You then do not need to add the packages to the class file.
Do you intend to use SunPlex Manager?
If no, proceed to Step 29.
If yes and you install the End User System Support software group, add the following entries to the default class file as described in Step 26.
package SUNWapchr add
package SUNWapchu add
These Apache software packages are required for SunPlex Manager. However, if you install a higher software group than End User System Support, the Apache software packages are installed with the Solaris software. You then do not need to add the packages to the class file.
Set up Solaris patch directories.
If you specify a patch directory for the scinstall command in Step 24, patches in Solaris patch directories are not installed.
Create jumpstart-dir/autoscinstall.d/nodes/node/patches directories on the JumpStart installation server.
Create one directory for each node in the cluster, where node is the name of a cluster node. Alternately, use this naming convention to create symbolic links to a shared patch directory.
# mkdir jumpstart-dir/autoscinstall.d/nodes/node/patches
Place copies of any Solaris patches into each of these directories.
Also place copies of any hardware-related patches that must be installed after Solaris software is installed into each of these directories.
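The per-node patch directories, or symbolic links to one shared patch directory, can be created with a short loop. This is a minimal sketch: the JSDIR default and the node names are example values, not the names your cluster uses.

```shell
#!/bin/sh
# Create one patches directory per cluster node under the custom JumpStart
# directory. JSDIR defaults to a demo location; on a real install server you
# would point it at your jumpstart-dir.
JSDIR=${JSDIR:-/tmp/jumpstart-demo}

for node in node1 node2; do
    mkdir -p "$JSDIR/autoscinstall.d/nodes/$node/patches"
done

# Alternately, keep one shared patch directory and link each node to it:
# mkdir -p "$JSDIR/patches.shared"
# for node in node1 node2; do
#     ln -s ../../../patches.shared "$JSDIR/autoscinstall.d/nodes/$node/patches"
# done
```

Copy the Solaris and hardware-related patches into each directory (or into the shared directory) after it is created.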
Set up files to contain the necessary hostname information locally on each node.
On the JumpStart installation server, create files that are named jumpstart-dir/autoscinstall.d/nodes/node/archive/etc/inet/hosts.
Create one file for each node, where node is the name of a cluster node. Alternately, use this naming convention to create symbolic links to a shared hosts file.
Add the following entries into each file.
IP address and hostname of the NFS server that holds a copy of the Sun Cluster CD-ROM image. The NFS server could be the JumpStart installation server or another machine.
IP address and hostname of each node in the cluster.
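The per-node hosts files can be generated in one pass. In this sketch every IP address and host name is a placeholder; substitute the addresses of your NFS server and cluster nodes.

```shell
#!/bin/sh
# Generate a minimal archive/etc/inet/hosts file for each node.
# All addresses and host names below are examples only.
JSDIR=${JSDIR:-/tmp/jumpstart-demo}

for node in node1 node2; do
    dir="$JSDIR/autoscinstall.d/nodes/$node/archive/etc/inet"
    mkdir -p "$dir"
    cat > "$dir/hosts" <<'EOF'
# NFS server that holds the Sun Cluster CD-ROM image
192.168.1.10   installserver
# Cluster nodes
192.168.1.11   node1
192.168.1.12   node2
EOF
done
```

To share one hosts file instead, create it once and replace the cat with a symbolic link, as the naming-convention note above describes.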
Do you intend to use the Remote Shared Memory Application Programming Interface (RSMAPI) or use SCI-PCI adapters for the interconnect transport?
If no, proceed to Step 32 if you intend to add your own postinstallation finish script. Otherwise, skip to Step 33.
If yes, follow instructions in Step 32 to set up a postinstallation finish script to install the following additional packages. Install the appropriate packages from the /cdrom/suncluster_3_1_u1/SunCluster_3.1/Sol_ver/Packages directory of the Sun Cluster 3.1 10/03 CD-ROM in the order that is given in the following table.
In the CD-ROM path, replace ver with 8 (for Solaris 8) or 9 (for Solaris 9).
Feature              Additional Sun Cluster 3.1 10/03 Packages to Install
RSMAPI               SUNWscrif
SCI-PCI adapters     SUNWsci SUNWscid SUNWscidx
(Optional) Add your own postinstallation finish script.
If you intend to use the Remote Shared Memory Application Programming Interface (RSMAPI) or use SCI-PCI adapters for the interconnect transport, you must modify the finish script to install the Sun Cluster SUNWscrif software package. This package is not automatically installed by scinstall.
You can add your own finish script, which is run after the standard finish script installed by the scinstall command. See “Preparing Custom JumpStart Installations” in Solaris 8 Advanced Installation Guide or “Preparing Custom JumpStart Installations (Tasks)” in Solaris 9 Installation Guide for information about creating a JumpStart finish script.
If you are using a cluster administrative console, display a console screen for each node in the cluster.
If Cluster Control Panel (CCP) software is installed and configured on your administrative console, you can use the cconsole(1M) utility to display the individual console screens. The cconsole utility also opens a master window from which you can send your input to all individual console windows at the same time. Use the following command to start cconsole:
# /opt/SUNWcluster/bin/cconsole clustername &
If you do not use the cconsole utility, connect to the consoles of each node individually.
From the ok PROM prompt on the console of each node, type the boot net - install command to begin the network JumpStart installation of each node.
ok boot net - install
Surround the dash (-) in the command with a space on each side.
Sun Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.
Unless you have installed your own /etc/inet/ntp.conf file, the scinstall command installs a default ntp.conf file for you. The default file is shipped with references to the maximum number of nodes. Therefore, the xntpd(1M) daemon might issue error messages regarding some of these references at boot time. You can safely ignore these messages. See How to Configure Network Time Protocol (NTP) for information on how to suppress these messages under otherwise normal cluster conditions.
When the installation is successfully completed, each node is fully installed as a new cluster node.
Are you adding a new node to an existing cluster?
If no, proceed to Step 36.
If yes, create mount points on the new node for all existing cluster file systems.
From another cluster node that is active, display the names of all cluster file systems.
% mount | grep global | egrep -v node@ | awk '{print $1}'
On the node that you added to the cluster, create a mount point for each cluster file system in the cluster.
% mkdir -p mountpoint
For example, if a file-system name that is returned by the mount command is /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the node that is being added to the cluster.
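The two steps can be combined into one loop on the new node. This sketch is self-contained, so the cluster file-system names are simulated with printf and the mount points are created under a demo root; on the real node you would feed the loop from the mount pipeline shown above and drop the ROOT prefix.

```shell
#!/bin/sh
# Create a mount point on the new node for every existing cluster file system.
# On a live cluster, replace the printf with:
#   mount | grep global | egrep -v node@ | awk '{print $1}'
# run on an active node. ROOT is a demo prefix so the sketch runs anywhere.
ROOT=${ROOT:-/tmp/newnode-demo}

printf '%s\n' /global/dg-schost-1 /global/dg-schost-2 |
while read -r fs; do
    mkdir -p "$ROOT$fs"     # on the real node: mkdir -p "$fs"
done
```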
The mount points become active after you reboot the cluster in Step 37.
Is VERITAS Volume Manager (VxVM) installed on any nodes that are already in the cluster?
If no, proceed to Step 36.
If yes, ensure that the same vxio number is used on the VxVM-installed nodes. Also ensure that the vxio number is available for use on each of the nodes that do not have VxVM installed.
# grep vxio /etc/name_to_major
vxio NNN
If the vxio number is already in use on a node that does not have VxVM installed, free the number on that node. Change the /etc/name_to_major entry to use a different number.
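A simple way to compare the vxio major numbers is to gather each node's /etc/name_to_major and diff the vxio entries. This sketch simulates the two gathered files with demo input so it runs stand-alone; the file paths and the major number 270 are examples only.

```shell
#!/bin/sh
# Compare the vxio major number recorded in two copies of /etc/name_to_major.
# FILE1/FILE2 stand in for files copied from each node; paths and the value
# 270 are demo assumptions, not real cluster data.
FILE1=${FILE1:-/tmp/vxio-demo/node1_name_to_major}
FILE2=${FILE2:-/tmp/vxio-demo/node2_name_to_major}

# Create demo input so the sketch is self-contained.
mkdir -p /tmp/vxio-demo
printf 'vxio 270\n' > "$FILE1"
printf 'vxio 270\n' > "$FILE2"

n1=$(awk '$1 == "vxio" {print $2}' "$FILE1")
n2=$(awk '$1 == "vxio" {print $2}' "$FILE2")

if [ "$n1" = "$n2" ]; then
    echo "vxio major number $n1 matches on both nodes"
else
    echo "vxio mismatch: $n1 vs $n2 -- edit /etc/name_to_major on the non-VxVM node" >&2
fi
```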
Do you intend to use dynamic reconfiguration on Sun Enterprise 10000 servers?
If no, proceed to Step 37.
If yes, on each node add the following entry to the /etc/system file.
set kernel_cage_enable=1
This entry becomes effective after the next system reboot. See the Sun Cluster 3.1 10/03 System Administration Guide for procedures to perform dynamic reconfiguration tasks in a Sun Cluster configuration. See your server documentation for more information about dynamic reconfiguration.
Did you add a new node to an existing cluster or install Sun Cluster software patches that require you to reboot the entire cluster, or both?
If no, reboot each node individually as needed: reboot a node if any patch that you installed, or any other change that you made, requires a reboot to take effect. Then proceed to Step 38.
If yes, perform a reconfiguration reboot of the cluster as instructed in the following steps.
From one node, shut down the cluster.
# scshutdown
Do not reboot the first-installed node of the cluster until after the cluster is shut down.
Reboot each node in the cluster.
ok boot
Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established cluster that is still in installation mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum. The entire cluster then shuts down. Cluster nodes remain in installation mode until the first time you run the scsetup(1M) command, during the procedure How to Perform Postinstallation Setup.
Set up the name-service look-up order.