If you do not use the scinstall custom JumpStart installation method to install software, perform this procedure to install the Solaris OS on each node in the global cluster. See How to Install Solaris and Sun Cluster Software (JumpStart) for more information about JumpStart installation of a cluster.
To speed installation, you can install the Solaris OS on each node at the same time.
If your nodes are already installed with the Solaris OS but do not meet Sun Cluster installation requirements, you might need to reinstall the Solaris software. Follow the steps in this procedure to ensure subsequent successful installation of Sun Cluster software. See Planning the Solaris OS for information about required root-disk partitioning and other Sun Cluster installation requirements.
Perform the following tasks:
Ensure that the hardware setup is complete and that connections are verified before you install Solaris software. See the Sun Cluster Hardware Administration Collection and your server and storage device documentation for details.
Ensure that your cluster configuration planning is complete. See How to Prepare for Cluster Software Installation for requirements and guidelines.
Complete the Local File System Layout Worksheet.
If you use a naming service, add address-to-name mappings for all public hostnames and logical addresses to any naming services that clients use for access to cluster services. See Public-Network IP Addresses for planning guidelines. See your Solaris system-administrator documentation for information about using Solaris naming services.
If you are using a cluster administrative console, display a console screen for each node in the cluster.
If Cluster Control Panel (CCP) software is installed and configured on your administrative console, use the cconsole(1M) utility to display the individual console screens.
As superuser, use the following command to start the cconsole utility:
adminconsole# /opt/SUNWcluster/bin/cconsole clustername &
The cconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.
If you do not use the cconsole utility, connect to the consoles of each node individually.
Install the Solaris OS as instructed in your Solaris installation documentation.
You must install all nodes in a cluster with the same version of the Solaris OS.
You can use any method that is normally used to install Solaris software. During Solaris software installation, perform the following steps:
Install at least the End User Solaris Software Group.
To avoid the need to manually install Solaris software packages, install the Entire Solaris Software Group Plus OEM Support.
See Solaris Software Group Considerations for information about additional Solaris software requirements.
Choose Manual Layout to set up the file systems.
Create a file system of at least 512 Mbytes for use by the global-device subsystem.
Sun Cluster software requires a global-devices file system for installation to succeed.
Specify that slice 7 is at least 20 Mbytes in size.
Create any other file-system partitions that you need, as described in System Disk Partitions.
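As an illustration only, a root-disk layout that satisfies these minimums might look like the following. The slice assignments, sizes, and the /globaldevices mount-point name are assumptions for this sketch; use the values from your completed Local File System Layout Worksheet.

```
slice 0   /                 remaining space
slice 1   swap              750 Mbytes (example)
slice 6   /globaldevices    512 Mbytes (global-device subsystem)
slice 7   (volume manager)   20 Mbytes
```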
For ease of administration, set the same root password on each node.
If you will use role-based access control (RBAC) instead of superuser to access the cluster nodes, set up an RBAC role that provides authorization for all Sun Cluster commands.
This series of installation procedures requires the following Sun Cluster RBAC authorizations if the user is not superuser:
solaris.cluster.modify
solaris.cluster.admin
solaris.cluster.read
See Role-Based Access Control (Overview) in System Administration Guide: Security Services for more information about using RBAC roles. See the Sun Cluster man pages for the RBAC authorization that each Sun Cluster subcommand requires.
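As a sketch, a rights profile that grants the three authorizations could be defined and assigned to a role with entries like the following. The profile name, role name, and direct file edits are illustrative assumptions; see the RBAC documentation referenced above for the supported procedure on your Solaris release.

```
# /etc/security/prof_attr (single line)
Sun Cluster Admin:::Administer Sun Cluster:auths=solaris.cluster.modify,solaris.cluster.admin,solaris.cluster.read

# /etc/user_attr entry that assigns the profile to a role
scadmin::::type=role;profiles=Sun Cluster Admin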
If you are adding a node to an existing cluster, add mount points for cluster file systems to the new node.
From the active cluster node, display the names of all cluster file systems.
phys-schost-1# mount | grep global | egrep -v node@ | awk '{print $1}'
On the new node, create a mount point for each cluster file system in the cluster.
phys-schost-new# mkdir -p mountpoint
For example, if the mount command returned the file-system name /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the new node you are adding to the cluster.
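The two steps above can be combined into a small loop. This is a sketch only: the file /tmp/cluster-fs-list stands in for the captured output of the mount command on the active node, and the /tmp/newnode prefix is used purely so the example is self-contained. On the real new node you would create the mount points directly under /.

```shell
# Hypothetical capture of:
#   mount | grep global | egrep -v node@ | awk '{print $1}'
cat > /tmp/cluster-fs-list <<'EOF'
/global/dg-schost-1
/global/dg-schost-2
EOF

# Create a matching mount point for each cluster file system.
# The /tmp/newnode prefix is illustrative; omit it on a real node.
while read -r fs; do
  mkdir -p "/tmp/newnode${fs}"
done < /tmp/cluster-fs-list

ls -d /tmp/newnode/global/dg-schost-1 /tmp/newnode/global/dg-schost-2
```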
If you are adding a node and VxVM is installed on any node in the cluster, perform the following tasks.
Ensure that the same vxio number is used on the VxVM-installed nodes.
phys-schost# grep vxio /etc/name_to_major
vxio NNN
Ensure that the vxio number is available for use on each of the nodes that do not have VxVM installed.
If the vxio number is already in use on a node that does not have VxVM installed, change the /etc/name_to_major entry to use a different number.
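The availability check can be sketched as follows. The file /tmp/name_to_major, its driver entries, and the value 327 are assumptions standing in for /etc/name_to_major on a non-VxVM node and for the NNN value reported on the VxVM-installed nodes.

```shell
VXIO_NUM=327    # the vxio major number found on the VxVM-installed nodes

# Stand-in for /etc/name_to_major on a node without VxVM installed.
cat > /tmp/name_to_major <<'EOF'
clone 11
sad 110
othermod 327
EOF

# awk exits nonzero if any driver already holds that major number.
if awk -v n="$VXIO_NUM" '$2 == n {exit 1}' /tmp/name_to_major; then
  echo "major number $VXIO_NUM is free"
else
  echo "major number $VXIO_NUM is in use; edit the conflicting entry"
fi
```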
If you installed the End User Solaris Software Group and you want to use any of the following Sun Cluster features, install additional Solaris software packages to support these features.
Remote Shared Memory Application Programming Interface (RSMAPI)
RSMRDT drivers
SPARC: SCI-PCI adapters
SPARC: For the Solaris 9 OS, use the following command:
phys-schost# pkgadd -d . SUNWrsm SUNWrsmc SUNWrsmo SUNWrsmox
For the Solaris 10 OS, use the following command:
phys-schost# pkgadd -G -d . SUNWrsm SUNWrsmo
You must add these packages only to the global zone. The -G option adds packages to the current zone only. This option also specifies that the packages are not propagated to any existing non-global zone or to any non-global zone that is created later.
Install any required Solaris OS patches and hardware-related firmware and patches, including those for storage-array support. Also download any needed firmware that is contained in the hardware patches.
See Patches and Required Firmware Levels in Sun Cluster Release Notes for the location of patches and installation instructions.
x86: Set the default boot file.
Setting this value enables you to reboot the node if you are unable to access a login prompt.
Update the /etc/inet/hosts or /etc/inet/ipnodes file on each node with all public IP addresses that are used in the cluster.
Perform this step regardless of whether you are using a naming service. The ipnodes file can contain both IPv4 and IPv6 addresses. See Public-Network IP Addresses for a listing of Sun Cluster components whose IP addresses you must add.
During establishment of a new cluster or new cluster node, the scinstall utility automatically adds the public IP address of each node that is being configured to the /etc/inet/hosts file. Adding these IP addresses to the /etc/inet/ipnodes file is optional.
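As a sketch, entries of the following form are added for each node and each logical address. All addresses and hostnames here are illustrative assumptions, and /tmp/hosts.example stands in for /etc/inet/hosts so the example is self-contained.

```shell
HOSTS=/tmp/hosts.example    # stand-in for /etc/inet/hosts

# Append one address-to-name mapping per public IP address.
# Addresses and names below are assumptions for this sketch.
cat >> "$HOSTS" <<'EOF'
192.168.1.101   phys-schost-1
192.168.1.102   phys-schost-2
192.168.1.110   schost-lh
EOF

grep phys-schost "$HOSTS"
```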
If you will use ce adapters for the cluster interconnect, add the following entry to the /etc/system file.
set ce:ce_taskq_disable=1
This entry becomes effective after the next system reboot.
(Optional) On Sun Enterprise 10000 servers, configure the /etc/system file to use dynamic reconfiguration.
Add the following entry to the /etc/system file on each node of the cluster:
set kernel_cage_enable=1
This entry becomes effective after the next system reboot. See your server documentation for more information about dynamic reconfiguration.
(Optional) Configure public-network adapters in IPMP groups.
If you do not want to use the multiple-adapter IPMP groups that the scinstall utility configures during cluster creation, configure custom IPMP groups as you would in a stand-alone system. See Chapter 31, Administering IPMP (Tasks), in System Administration Guide: IP Services for details.
During cluster creation, the scinstall utility configures each set of public-network adapters that use the same subnet and are not already configured in an IPMP group into a single multiple-adapter IPMP group. The scinstall utility ignores any existing IPMP groups.
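As an illustration, a custom two-adapter IPMP group on a stand-alone-style configuration could be expressed with /etc/hostname.adapter entries like the following. The adapter names (ce0, ce1), group name, and hostname are assumptions for this sketch; see the IPMP chapter referenced above for probe-based configurations with test addresses.

```
# /etc/hostname.ce0
phys-schost-1 netmask + broadcast + group ipmp0 up

# /etc/hostname.ce1
group ipmp0 up
```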
If your server supports the mirroring of internal hard drives and you want to configure internal disk mirroring, go to How to Configure Internal Disk Mirroring.
Otherwise, to use Solaris I/O multipathing software, go to How to Install Solaris I/O Multipathing Software.
Otherwise, to install VxFS, go to SPARC: How to Install Veritas File System Software.
Otherwise, install the Sun Cluster software packages. Go to How to Install Sun Cluster Framework and Data-Service Software Packages.
See the Sun Cluster System Administration Guide for Solaris OS for procedures to perform dynamic reconfiguration tasks in a Sun Cluster configuration.