This chapter provides new installation information that has been added to the Sun Cluster 3.0 5/02 update release. This information supplements the Sun Cluster 3.0 12/01 Software Installation Guide. For new data services installation information, see Chapter 5, Data Services.
This chapter contains new information for the following topics.
The following information applies to this update release and all subsequent updates.
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
For Solaris 9, create a 20-Mbyte partition for volume manager use on a slice at the end of the disk (slice 7) to accommodate use by Solaris Volume Manager software.
The following information applies to this update release and all subsequent updates.
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
VxFS mount requirement - Globally mount and unmount a VxFS file system from the primary node (the node that masters the disk on which the VxFS file system resides) to ensure that the operation succeeds. A VxFS file system mount or unmount operation that is performed from a secondary node might fail.
The following corrections were introduced in the Sun Cluster 3.0 5/02 update release and apply to this update and all subsequent updates to Sun Cluster 3.0 software.
/kernel/drv/md.conf settings -
nmd - The maximum number of metadevices allowed per Solstice DiskSuite diskset is 1024. The maximum number of metadevices allowed per Solaris Volume Manager diskset is 8192.
md_nsets - The maximum number of disksets allowed per cluster is 31, not including the one for private disk management.
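These limits correspond to the nmd and md_nsets properties in the /kernel/drv/md.conf driver configuration file. The following excerpt is a sketch only; the property values shown are examples, and you should set them no higher than your configuration actually requires, because larger values increase boot time and memory use. All nodes of the cluster must have identical md.conf settings.

```
# Sketch of a /kernel/drv/md.conf entry (example values)
name="md" parent="pseudo" nmd=1024 md_nsets=32;
```

A change to this file takes effect at the next reconfiguration reboot.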
The following information applies to this update release and all subsequent updates.
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
CD-ROM path - Change all occurrences of the CD-ROM path to /cdrom/suncluster_3_0_u3. This applies to the Sun Cluster 3.0 5/02 CD-ROM for both Solaris 8 and Solaris 9.
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
If you do not use the scinstall(1M) custom JumpStartTM installation method to install software, perform this task to install the Solaris operating environment on each node in the cluster.
If your nodes are already installed with the Solaris operating environment, you must still reinstall the Solaris software as described in this procedure to ensure successful installation of Sun Cluster software.
Ensure that the hardware setup is complete and connections are verified before you install Solaris software.
See the Sun Cluster 3.0 12/01 Hardware Guide and your server and storage device documentation for details.
Ensure that your cluster configuration planning is complete.
See "How to Prepare for Cluster Software Installation" in the Sun Cluster 3.0 12/01 Software Installation Guide for requirements and guidelines.
Have available your completed "Local File System Layout Worksheet" from the Sun Cluster 3.0 Release Notes.
Are you using a naming service?
If no, go to Step 5. You will set up local hostname information in Step 16.
If yes, add address-to-name mappings for all public hostnames and logical addresses to any naming services (such as NIS or DNS) used by clients for access to cluster services. See "IP Addresses" in the Sun Cluster 3.0 12/01 Software Installation Guide for planning guidelines. See your Solaris system administrator documentation for information about using Solaris naming services.
If you are using a cluster administrative console, display a console screen for each node in the cluster.
If Cluster Control Panel (CCP) is installed and configured on your administrative console, you can use the cconsole(1M) utility to display the individual console screens. CCP also opens a master window from which you can send your input to all individual console windows at the same time.
If you do not use CCP, connect to the consoles of each node individually.
To save time, you can install the Solaris operating environment on each node at the same time.
On each node of the cluster, determine whether the local-mac-address variable is correctly set to false.
Sun Cluster software does not support the local-mac-address variable set to true.
Display the value of the local-mac-address variable.
If the node is preinstalled with Solaris software, as superuser run the following command.
# /usr/sbin/eeprom local-mac-address?
If the node is not yet installed with Solaris software, run the following command from the ok prompt.
ok printenv local-mac-address?
Does the command return local-mac-address?=false on each node?
If yes, the variable settings are correct. Go to Step 7.
If no, change the variable setting on any node that is not set to false.
If the node is preinstalled with Solaris software, as superuser run the following command.
# /usr/sbin/eeprom local-mac-address?=false
If the node is not yet installed with Solaris software, run the following command from the ok prompt.
ok setenv local-mac-address? false
Repeat Step a to verify any changes you made in Step b.
The new setting becomes effective at the next system reboot.
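The check in Step a can also be scripted. The following sketch parses the output of the eeprom command and decides whether Step b is still needed; here the command output is a hard-coded example string rather than a live call to eeprom.

```shell
# Sketch: parse `eeprom local-mac-address?` output and decide whether
# the variable still needs to be set to false.
# eeprom_output is a hard-coded example; on a node you would capture
# the real output of /usr/sbin/eeprom local-mac-address?
eeprom_output='local-mac-address?=true'

value=${eeprom_output#*=}    # text after the first '='
if [ "$value" = "false" ]; then
    needs_fix=no
else
    needs_fix=yes            # Step b: set the variable to false
fi
echo "$needs_fix"
```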
Install the Solaris operating environment as instructed in the Solaris installation documentation.
You must install all nodes in a cluster with the same version of the Solaris operating environment.
You can use any method normally used to install the Solaris operating environment to install the software on new nodes to be installed into a clustered environment. These methods include the Solaris interactive installation program, Solaris JumpStart, and Solaris Web Start.
During Solaris software installation, do the following.
Install at least the End User System Support software group.
If you intend to use the Remote Shared Memory Application Programming Interface (RSMAPI) or use PCI-SCI adapters for the interconnect transport, the required RSMAPI software packages (SUNWrsm, SUNWrsmx, SUNWrsmo, and SUNWrsmox) are included with the higher-level software groups. If you install the End User System Support software group, you must install the SUNWrsm* packages manually from the Solaris CD-ROM at Step 12.
If you intend to use SunPlex Manager, the required Apache software packages (SUNWapchr and SUNWapchu) are included with the higher-level software groups. If you install the End User System Support software group, you must install the SUNWapch* packages manually from the Solaris CD-ROM at Step 13.
See "Solaris Software Group Considerations" in the Sun Cluster 3.0 12/01 Software Installation Guide for information about additional Solaris software requirements.
Choose Manual Layout to set up the file systems.
Create a file system of at least 100 Mbytes for use by the global-devices subsystem. If you intend to use SunPlex Manager to install Sun Cluster software, you must create the file system with a mount point of /globaldevices. This mount point is the default used by scinstall.
A global-devices file system is required for Sun Cluster software installation to succeed.
If you intend to use SunPlex Manager to install Solstice DiskSuite software (Solaris 8), configure Solaris Volume Manager software (Solaris 9), or install Sun Cluster HA for NFS or Sun Cluster HA for Apache in addition to installing Sun Cluster software, create a file system on slice 7 with a mount point of /sds. For Solstice DiskSuite, make the slice at least 10 Mbytes. For Solaris Volume Manager, make the slice at least 20 Mbytes. Otherwise, create any file system partitions needed to support your volume manager software as described in "System Disk Partitions" in the Sun Cluster 3.0 12/01 Software Installation Guide.
Choose auto reboot.
Solaris software is installed and the node reboots before the next prompts display.
For ease of administration, set the same root password on each node.
Answer no when asked whether to enable automatic power-saving shutdown.
You must disable automatic shutdown in Sun Cluster configurations. See the pmconfig(1M) and power.conf(4) man pages for more information.
The Solaris interface groups feature is disabled by default during Solaris software installation. Interface groups are not supported in a Sun Cluster configuration and should not be enabled. See the ifconfig(1M) man page for more information about Solaris interface groups.
Are you installing a new node to an existing cluster?
Have you added the new node to the cluster's authorized-node list?
If yes, go to Step 10.
If no, run scsetup(1M) from another, active cluster node to add the new node's name to the list of authorized cluster nodes. See "How to Add a Cluster Node to the Authorized Node List" in the Sun Cluster 3.0 12/01 System Administration Guide for procedures.
Create a mount point on the new node for each cluster file system in the cluster.
From another, active node of the cluster, display the names of all cluster file systems.
% mount | grep global | egrep -v node@ | awk '{print $1}'
On the new node, create a mount point for each cluster file system in the cluster.
% mkdir -p mountpoint
For example, if the mount command returned the file system name /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the new node you are adding to the cluster.
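Steps a and b can be combined in a small loop. In this sketch the file-system names are hard-coded examples; on a real cluster they would come from the mount command shown above, and mkdir would run as superuser directly under /.

```shell
# Sketch: create a local mount point for each cluster file system.
# $base stands in for / so the example can run without root privileges.
base=$(mktemp -d)

for fs in /global/dg-schost-1 /global/dg-schost-2; do
    mkdir -p "${base}${fs}"
done

ls "${base}/global"
```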
Is VERITAS Volume Manager (VxVM) installed on any nodes that are already in the cluster?
If yes, ensure that the same vxio number is used on the VxVM-installed nodes and that the vxio number is available for use on each of the nodes that do not have VxVM installed.
# grep vxio /etc/name_to_major
vxio NNN
If the vxio number is already in use on a node that does not have VxVM installed, free the number on that node by changing the /etc/name_to_major entry to use a different number.
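A quick way to compare vxio entries across nodes is to extract the major number with awk and compare the results. The sketch below operates on a sample copy of /etc/name_to_major so it is self-contained; on the cluster you would run the extraction on each node and compare the numbers.

```shell
# Sketch: pull the vxio major number out of a name_to_major file.
# A sample file with example entries is created here; on a node you
# would read /etc/name_to_major itself.
sample=$(mktemp)
printf 'clone 11\nsad 111\nvxio 210\n' > "$sample"

vxio_major=$(awk '$1 == "vxio" {print $2}' "$sample")
echo "vxio major number: $vxio_major"
```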
If no, go to Step 12.
Do you intend to use the Remote Shared Memory Application Programming Interface (RSMAPI) or use PCI-SCI adapters for the interconnect transport?
If yes and you installed the End User System Support software group, install the SUNWrsm* packages from the Solaris CD-ROM.
# pkgadd -d . SUNWrsm SUNWrsmx SUNWrsmo SUNWrsmox
If no, or if you installed a higher-level software group, go to Step 13.
Do you intend to use SunPlex Manager?
If yes and you installed the End User System Support software group, install the SUNWapch* packages from the Solaris CD-ROM.
# pkgadd -d . SUNWapchr SUNWapchu
If no, or if you installed a higher-level software group, go to Step 14.
Apache software packages must already be installed before SunPlex Manager is installed.
Install any Solaris software patches.
See the Sun Cluster 3.0 5/02 Release Notes for the location of patches and installation instructions. If necessary, view the /etc/release file to see the exact version of Solaris software that is installed on a node.
Install any hardware-related patches and download any needed firmware contained in the hardware patches.
See the Sun Cluster 3.0 5/02 Release Notes for the location of patches and installation instructions.
Update the /etc/inet/hosts file on each node with all public hostnames and logical addresses for the cluster.
Perform this step regardless of whether you are using a naming service.
Do you intend to use dynamic reconfiguration?
To use dynamic reconfiguration in your cluster configuration, your servers must support dynamic reconfiguration with Sun Cluster software.
If yes, on each node add the following entry to the /etc/system file.
set kernel_cage_enable=1
This entry becomes effective after the next system reboot. See the Sun Cluster 3.0 12/01 System Administration Guide for procedures to perform dynamic reconfiguration tasks in a Sun Cluster configuration. See your server documentation for more information about dynamic reconfiguration.
If no, go to Step 18.
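The edit in the step above can be made idempotent, so that rerunning it never duplicates the line in /etc/system. The following sketch operates on a scratch file standing in for the real /etc/system.

```shell
# Sketch: append "set kernel_cage_enable=1" to /etc/system only if the
# entry is not already present. A scratch file stands in for /etc/system
# so the example is safe to run anywhere.
system_file=$(mktemp)

add_cage_entry() {
    grep -q '^set kernel_cage_enable=1$' "$1" ||
        echo 'set kernel_cage_enable=1' >> "$1"
}

add_cage_entry "$system_file"
add_cage_entry "$system_file"    # second call is a no-op

lines=$(grep -c '^set kernel_cage_enable=1$' "$system_file")
echo "$lines"
```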
Install Sun Cluster software on your cluster nodes.
To use SunPlex Manager, go to "Using SunPlex Manager to Install Sun Cluster Software" in the Sun Cluster 3.0 12/01 Software Installation Guide.
To use scinstall, go to "How to Install Sun Cluster Software on the First Cluster Node (scinstall) (5/02)".
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
After Step 1 - Perform the following step after Step 1 as the new Step 2. The original Step 2 becomes Step 3.
Do you intend to use SunPlex Manager?
If yes, ensure that the Apache software packages are installed on the node. If you installed the Solaris End User System Support software group, install the SUNWapch* packages from the Solaris CD-ROM.
# pkgadd -d . SUNWapchr SUNWapchu
The Apache software packages are automatically installed if you installed a higher-level Solaris software group.
If no, go to Step 3.
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
After Step 2 - Perform the following step after Step 2 as the new Step 3. The original Step 3 becomes Step 4.
Do you intend to use SunPlex Manager?
If yes, ensure that the Apache software packages are installed on the node. If you installed the Solaris End User System Support software group, install the SUNWapch* packages from the Solaris CD-ROM.
# pkgadd -d . SUNWapchr SUNWapchu
The Apache software packages are automatically installed if you installed a higher-level Solaris software group.
If no, go to Step 4.
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
Solaris Volume Manager - For Solaris 9, Solaris Volume Manager software is already installed as part of Solaris software installation. You can use SunPlex Manager to configure up to three metasets and associated metadevices, and to create and mount cluster file systems for each.
To use SunPlex Manager to install the Sun Cluster HA for NFS data service or the Sun Cluster HA for Apache scalable data service, you must also use SunPlex Manager to configure Solaris Volume Manager mirrored disksets.
The /sds partition required by SunPlex Manager must be at least 20 Mbytes to support Solaris Volume Manager.
Metaset names - The names of two of the three metaset names that SunPlex Manager creates have been changed.
The stripe-1 metaset is now named mirror-2.
The concat-1 metaset is now named mirror-3.
The following table lists each metaset name and cluster file system mount point created by SunPlex Manager, depending on the number of shared disks connected to the node.
Table 4-1 Metasets Installed by SunPlex Manager
| Shared Disks | Metaset Name | Cluster File System Mount Point | Purpose |
|---|---|---|---|
| First pair of shared disks | mirror-1 | /global/mirror-1 | Sun Cluster HA for NFS or Sun Cluster HA for Apache scalable data service, or both |
| Second pair of shared disks | mirror-2 | /global/mirror-2 | unused |
| Third pair of shared disks | mirror-3 | /global/mirror-3 | unused |
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
Step 3 - To install Apache software packages from the Solaris 9 Software CD-ROM, change to the /cdrom/cdrom0/Solaris_9/Product directory.
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
This procedure describes how to set up and use the scinstall(1M) custom JumpStart installation method. This method installs both Solaris and Sun Cluster software on all cluster nodes in a single operation and establishes the cluster. You can also use this procedure to add new nodes to an existing cluster.
Ensure that the hardware setup is complete and connections are verified before you install Solaris software.
See the Sun Cluster 3.0 12/01 Hardware Guide and your server and storage device documentation for details on how to set up the hardware.
Ensure that your cluster configuration planning is complete.
See "How to Prepare for Cluster Software Installation" in the Sun Cluster 3.0 12/01 Software Installation Guide for requirements and guidelines.
Have available the following information.
The Ethernet address of each cluster node
The following completed configuration planning worksheets from the Sun Cluster 3.0 5/02 Release Notes.
"Local File System Layout Worksheet"
"Cluster and Node Names Worksheet"
"Cluster Interconnect Worksheet"
See "Planning the Solaris Operating Environment" and "Planning the Sun Cluster Environment" in the Sun Cluster 3.0 12/01 Software Installation Guide for planning guidelines.
Are you using a naming service?
If no, go to Step 5. You will set up the necessary hostname information in Step 31.
If yes, add address-to-name mappings for all public hostnames and logical addresses, as well as the IP address and hostname of the JumpStart server, to any naming services (such as NIS or DNS) used by clients for access to cluster services. See "IP Addresses" in the Sun Cluster 3.0 12/01 Software Installation Guide for planning guidelines. See your Solaris system administrator documentation for information about using Solaris naming services.
Are you installing a new node to an existing cluster?
If yes, run scsetup(1M) from another, active cluster node to add the new node's name to the list of authorized cluster nodes. See "How to Add a Cluster Node to the Authorized Node List" in the Sun Cluster 3.0 12/01 System Administration Guide for procedures.
If no, go to Step 6.
As superuser, set up the JumpStart install server for Solaris operating environment installation.
See the setup_install_server(1M) and add_install_client(1M) man pages and the Solaris Advanced Installation Guide for instructions on how to set up a JumpStart install server.
When you set up the install server, ensure that the following requirements are met.
The install server is on the same subnet as the cluster nodes, but is not itself a cluster node.
The install server installs the release of the Solaris operating environment required by the Sun Cluster software.
A custom JumpStart directory exists for JumpStart installation of Sun Cluster. This jumpstart-dir directory must contain a copy of the check(1M) utility and be NFS exported for reading by the JumpStart install server.
Each new cluster node is configured as a custom JumpStart install client that uses the custom JumpStart directory set up for Sun Cluster installation.
Create a directory on the JumpStart install server to hold your copy of the Sun Cluster 3.0 5/02 CD-ROM, if one does not already exist.
In the following example, the /export/suncluster directory is created for this purpose.
# mkdir -m 755 /export/suncluster
Copy the Sun Cluster CD-ROM to the JumpStart install server.
Insert the Sun Cluster 3.0 5/02 CD-ROM into the CD-ROM drive on the JumpStart install server.
If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/suncluster_3_0_u3 directory.
Change to the /cdrom/suncluster_3_0_u3/SunCluster_3.0/Tools directory.
# cd /cdrom/suncluster_3_0_u3/SunCluster_3.0/Tools
Copy the CD-ROM to a new directory on the JumpStart install server.
The scinstall command creates the new installation directory as it copies the CD-ROM files. The installation directory name /export/suncluster/sc30 is used here as an example.
# ./scinstall -a /export/suncluster/sc30
Eject the CD-ROM.
# cd /
# eject cdrom
Ensure that the Sun Cluster 3.0 5/02 CD-ROM image on the JumpStart install server is NFS exported for reading by the JumpStart install server.
See the NFS Administration Guide and the share(1M) and dfstab(4) man pages for more information about automatic file sharing.
Are you installing a new node to an existing cluster?
Have you added the node to the cluster's authorized-node list?
If yes, go to Step 11.
If no, run scsetup(1M) from any existing cluster node to add the new node's name to the list of authorized cluster nodes. See "How to Add a Cluster Node to the Authorized Node List" in the Sun Cluster 3.0 12/01 System Administration Guide for procedures.
From the JumpStart install server, start the scinstall(1M) utility.
The path /export/suncluster/sc30 is used here as an example of the installation directory you created.
# cd /export/suncluster/sc30/SunCluster_3.0/Tools
# ./scinstall
Follow these guidelines to use the interactive scinstall utility.
Interactive scinstall enables you to type ahead. Therefore, do not press Return more than once if the next menu screen does not appear immediately.
Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu. If you press Control-D to abort the session after Sun Cluster software is installed, scinstall asks you whether you want it to de-install those packages.
Your session answers are stored as defaults for the next time you run this menu option. Default answers display between brackets ([ ]) at the end of the prompt.
From the Main Menu, type 3 (Configure a cluster to be JumpStarted from this install server).
This option is used to configure custom JumpStart finish scripts. JumpStart uses these finish scripts to install the Sun Cluster software.
*** Main Menu ***

    Please select from one of the following (*) options:

        1) Establish a new cluster using this machine as the first node
        2) Add this machine as a node in an established cluster
      * 3) Configure a cluster to be JumpStarted from this install server
        4) Add support for new data services to this cluster node
        5) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  3

*** Custom JumpStart ***
...
Do you want to continue (yes/no) [yes]?
If option 3 does not have an asterisk in front, the option is disabled because JumpStart setup is not complete or has an error. Exit the scinstall utility, repeat Step 6 through Step 8 to correct JumpStart setup, then restart the scinstall utility.
Specify the JumpStart directory name.
>>> Custom JumpStart Directory <<<
....
What is your JumpStart directory name?  jumpstart-dir
Specify the name of the cluster.
>>> Cluster Name <<<
...
What is the name of the cluster you want to establish?  clustername
Specify the names of all cluster nodes.
>>> Cluster Nodes <<<
...
Please list the names of all cluster nodes planned for the
initial cluster configuration. You must enter at least two
nodes. List one node name per line. When finished, type
Control-D:

    Node name:  node1
    Node name:  node2
    Node name (Ctrl-D to finish):  <Control-D>

This is the complete list of nodes:
...
Is it correct (yes/no) [yes]?
Specify whether to use data encryption standard (DES) authentication.
By default, Sun Cluster software permits a node to connect to the cluster only if the node is physically connected to the private interconnect and if the node name was specified in Step 15. However, the node actually communicates with the sponsoring node over the public network, since the private interconnect is not yet fully configured. DES authentication provides an additional level of security at installation time by enabling the sponsoring node to more reliably authenticate nodes that attempt to contact it to update the cluster configuration.
If you choose to use DES authentication for additional security, you must configure all necessary encryption keys before any node can join the cluster. See the keyserv(1M) and publickey(4) man pages for details.
>>> Authenticating Requests to Add Nodes <<<
...
Do you need to use DES authentication (yes/no) [no]?
Specify the private network address and netmask.
>>> Network Address for the Cluster Transport <<<
...
Is it okay to accept the default network address (yes/no) [yes]?
Is it okay to accept the default netmask (yes/no) [yes]?
You cannot change the private network address after the cluster is successfully formed.
Specify whether the cluster uses transport junctions.
If this is a two-node cluster, specify whether you intend to use transport junctions.
>>> Point-to-Point Cables <<<
...
Does this two-node cluster use transport junctions (yes/no) [yes]?
You can specify that the cluster uses transport junctions, regardless of whether the nodes are directly connected to each other. If you specify that the cluster uses transport junctions, you can more easily add new nodes to the cluster in the future.
If this cluster has three or more nodes, you must use transport junctions. Press Return to continue to the next screen.
>>> Point-to-Point Cables <<<
...
Since this is not a two-node cluster, you will be asked to
configure two transport junctions.

Hit ENTER to continue:
Does this cluster use transport junctions?
If yes, specify names for the transport junctions. You can use the default names switchN or create your own names.
>>> Cluster Transport Junctions <<<
...
What is the name of the first junction in the cluster [switch1]?
What is the name of the second junction in the cluster [switch2]?
If no, go to Step 20.
Specify the first cluster interconnect transport adapter of the first node.
>>> Cluster Transport Adapters and Cables <<<
...
For node "node1",
    What is the name of the first cluster transport adapter?  adapter
Specify the connection endpoint of the first adapter.
If the cluster does not use transport junctions, specify the name of the adapter on the second node to which this adapter connects.
...
Name of adapter on "node2" to which "adapter" is connected?  adapter
If the cluster uses transport junctions, specify the name of the first transport junction and its port.
...
For node "node1",
    Name of the junction to which "adapter" is connected?  switch
...
For node "node1",
    Use the default port name for the "adapter" connection (yes/no) [yes]?
If your configuration uses SCI adapters, do not accept the default when you are prompted for the adapter connection (the port name). Instead, provide the port name (0, 1, 2, or 3) found on the Dolphin switch itself, to which the node is physically cabled. The following example shows the prompts and responses for declining the default port name and specifying the Dolphin switch port name 0.
...
Use the default port name for the "adapter" connection (yes/no) [yes]?  no
What is the name of the port you want to use?  0
Specify the second cluster interconnect transport adapter of the first node.
...
For node "node1",
    What is the name of the second cluster transport adapter?  adapter
Specify the connection endpoint of the second adapter.
If the cluster does not use transport junctions, specify the name of the adapter on the second node to which this adapter connects.
...
Name of adapter on "node2" to which "adapter" is connected?  adapter
If the cluster uses transport junctions, specify the name of the second transport junction and its port.
...
For node "node1",
    Name of the junction to which "adapter" is connected?  switch
...
For node "node1",
    Use the default port name for the "adapter" connection (yes/no) [yes]?
If your configuration uses SCI adapters, do not accept the default when you are prompted for the adapter connection (the port name). Instead, provide the port name (0, 1, 2, or 3) found on the Dolphin switch itself, to which the node is physically cabled. The following example shows the prompts and responses for declining the default port name and specifying the Dolphin switch port name 0.
...
Use the default port name for the "adapter" connection (yes/no) [yes]?  no
What is the name of the port you want to use?  0
Does this cluster use transport junctions?
Specify the global devices file system name for each cluster node.
>>> Global Devices File System <<<
...
The default is to use /globaldevices.

For node "node1",
    Is it okay to use this default (yes/no) [yes]?
For node "node2",
    Is it okay to use this default (yes/no) [yes]?
Accept or decline the generated scinstall commands.
The scinstall command generated from your input is displayed for confirmation.
>>> Confirmation <<<

Your responses indicate the following options to scinstall:
-----------------------------------------
For node "node1",
    scinstall -c jumpstart-dir -h node1 \
...
Are these the options you want to use (yes/no) [yes]?
-----------------------------------------
For node "node2",
    scinstall -c jumpstart-dir -h node2 \
...
Are these the options you want to use (yes/no) [yes]?
-----------------------------------------
Do you want to continue with JumpStart set up (yes/no) [yes]?
If you do not accept the generated commands, the scinstall utility returns you to the Main Menu. From there you can rerun menu option 3 and provide different answers. Your previous answers display as the defaults.
If necessary, make adjustments to the default class file, or profile, created by scinstall.
The scinstall command creates the following autoscinstall.class default class file in the jumpstart-dir/autoscinstall.d/3.0 directory.
install_type    initial_install
system_type     standalone
partitioning    explicit
filesys         rootdisk.s0 free /
filesys         rootdisk.s1 750 swap
filesys         rootdisk.s3 100 /globaldevices
filesys         rootdisk.s7 10
cluster         SUNWCuser add
package         SUNWman add
The default class file installs the End User System Support software group (SUNWCuser) of Solaris software. If your configuration has additional Solaris software requirements, change the class file accordingly. See "Solaris Software Group Considerations" in the Sun Cluster 3.0 12/01 Software Installation Guide for more information.
You can change the profile in one of the following ways.
Edit the autoscinstall.class file directly. These changes are applied to all nodes in all clusters that use this custom JumpStart directory.
Update the rules file to point to other profiles, then run the check utility to validate the rules file.
As long as the Solaris operating environment install profile meets minimum Sun Cluster file system allocation requirements, there are no restrictions on other changes to the install profile. See "System Disk Partitions" in the Sun Cluster 3.0 12/01 Software Installation Guide for partitioning guidelines and requirements to support Sun Cluster 3.0 software. For more information about JumpStart profiles, see the Solaris 8 Advanced Installation Guide or the Solaris 9 Advanced Installation Guide.
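For reference, a custom JumpStart rules entry has the general form shown below. The node name, profile name, and finish-script field are examples only; consult the Solaris Advanced Installation Guide for the full rules syntax, and run the check utility after any edit.

```
# Sketch of a jumpstart-dir/rules entry (names are examples)
# rule_keyword  rule_value  begin  profile              finish
hostname        node1       -      autoscinstall.class  -
```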
Do you intend to use the Remote Shared Memory Application Programming Interface (RSMAPI) or use PCI-SCI adapters for the interconnect transport?
If yes and you install the End User System Support software group, add the following entries to the default class file as described in Step 27.
package SUNWrsm add
package SUNWrsmx add
package SUNWrsmo add
package SUNWrsmox add
In addition, you must create or modify a post-installation finish script at Step 33 to install the Sun Cluster packages to support the RSMAPI and PCI-SCI adapters.
If you install a higher software group than End User System Support, the SUNWrsm* packages are installed with the Solaris software and do not need to be added to the class file.
If no, go to Step 29.
Do you intend to use SunPlex Manager?
If yes and you install the End User System Support software group, add the following entries to the default class file as described in Step 27.
package SUNWapchr add
package SUNWapchu add
If you install a higher software group than End User System Support, the SUNWapch* packages are installed with the Solaris software and do not need to be added to the class file.
If no, go to Step 30.
Set up Solaris patch directories.
Create jumpstart-dir/autoscinstall.d/nodes/node/patches directories on the JumpStart install server.
Create one directory for each node in the cluster, where node is the name of a cluster node. Alternately, use this naming convention to create symbolic links to a shared patch directory.
# mkdir jumpstart-dir/autoscinstall.d/nodes/node/patches
Place copies of any Solaris patches into each of these directories.
Also place copies of any hardware-related patches that must be installed after Solaris software is installed into each of these directories.
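The per-node patch directories described above can be sketched as follows. This is an illustrative layout only: the jumpstart-dir path and node names are assumptions, and the example uses symbolic links to a single shared patch directory, as the procedure permits.

```shell
# Create one patches directory per cluster node under the JumpStart
# install server's jumpstart-dir, sharing a common patch directory
# through symbolic links. Paths and node names are illustrative.
JS=/tmp/jumpstart-demo
mkdir -p "$JS/autoscinstall.d/nodes" "$JS/shared-patches"
for node in phys-schost-1 phys-schost-2; do
  mkdir -p "$JS/autoscinstall.d/nodes/$node"
  # Link this node's patches directory to the shared patch directory.
  ln -s "$JS/shared-patches" "$JS/autoscinstall.d/nodes/$node/patches"
done
```

With this layout, patches copied once into the shared directory are picked up for every node.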
Set up files to contain the necessary hostname information locally on each node.
On the JumpStart install server, create files named jumpstart-dir/autoscinstall.d/nodes/node/archive/etc/inet/hosts.
Create one file for each node, where node is the name of a cluster node. Alternately, use this naming convention to create symbolic links to a shared hosts file.
Add the following entries into each file.
IP address and hostname of the NFS server that holds a copy of the Sun Cluster CD-ROM image. This could be the JumpStart install server or another machine.
IP address and hostname of each node in the cluster.
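A hosts file that satisfies the two entry requirements above might look like the following sketch. All addresses, hostnames, and the jumpstart-dir path are hypothetical.

```shell
# Hypothetical archive/etc/inet/hosts file for node phys-schost-1.
# It lists the NFS server that holds the Sun Cluster CD-ROM image
# plus every node in the cluster.
D=/tmp/jumpstart-demo/autoscinstall.d/nodes/phys-schost-1/archive/etc/inet
mkdir -p "$D"
cat > "$D/hosts" <<'EOF'
192.168.1.5    nfs-server      # holds the Sun Cluster CD-ROM image
192.168.1.11   phys-schost-1   # cluster node
192.168.1.12   phys-schost-2   # cluster node
EOF
```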
Do you intend to use the Remote Shared Memory Application Programming Interface (RSMAPI) or use PCI-SCI adapters for the interconnect transport?
If yes, follow instructions in Step 33 to set up a post-installation finish script to install the following additional packages. Install the appropriate packages from the /cdrom/suncluster_3_0_u3/SunCluster_3.0/Packages directory of the Sun Cluster 3.0 5/02 CD-ROM in the order given in the following table.
Table 4-2 Sun Cluster 3.0 Packages to Support the RSMAPI and PCI-SCI Adapters
Feature             Additional Sun Cluster 3.0 Packages to Install
RSMAPI              SUNWscrif
PCI-SCI adapters    SUNWsci SUNWscid SUNWscidx
If no, go to Step 33 if you intend to add your own post-installation finish script. Otherwise, skip to Step 34.
(Optional) Add your own post-installation finish script.
If you intend to use the Remote Shared Memory Application Programming Interface (RSMAPI) or use PCI-SCI adapters for the interconnect transport, you must modify the finish script to install the additional Sun Cluster software packages listed in Table 4-2. These packages are not automatically installed by scinstall.
You can add your own finish script, which is run after the standard finish script installed by the scinstall command. See the Solaris 8 Advanced Installation Guide or the Solaris 9 Advanced Installation Guide for information about creating a JumpStart finish script.
If you use an administrative console, display a console screen for each node in the cluster.
If cconsole(1M) is installed and configured on your administrative console, you can use it to display the individual console screens. Otherwise, you must connect to the consoles of each node individually.
From the ok PROM prompt on the console of each node, type the boot net - install command to begin the network JumpStart installation of each node.
ok boot net - install
The dash (-) in the command must be surrounded by a space on each side.
Sun Cluster installation output is logged in the /var/cluster/logs/install/scinstall.log.pid file, where pid is the process ID number of the scinstall instance.
Unless you have installed your own /etc/inet/ntp.conf file, the scinstall command installs a default ntp.conf file for you. Because the default file is shipped with references to eight nodes, the xntpd(1M) daemon might issue error messages regarding some of these references at boot time. You can safely ignore these messages. See "How to Update Network Time Protocol (NTP) (5/02)" for information on how to suppress these messages under otherwise normal cluster conditions.
When the installation is successfully completed, each node is fully installed as a new cluster node.
The Solaris interface groups feature is disabled by default during Solaris software installation. Interface groups are not supported in a Sun Cluster configuration and should not be reenabled. See the ifconfig(1M) man page for more information about Solaris interface groups.
Are you installing a new node to an existing cluster?
If no, go to Step 37.
If yes, create mount points on the new node for all existing cluster file systems.
From another, active node of the cluster, display the names of all cluster file systems.
% mount | grep global | egrep -v node@ | awk '{print $1}'
On the node you added to the cluster, create a mount point for each cluster file system in the cluster.
% mkdir -p mountpoint
For example, if a file system name returned by the mount command is /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the node being added to the cluster.
The mount points become active after you reboot the cluster in Step 39.
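The two commands above can be combined so that the file system list from the active node drives mount-point creation on the new node. The following sketch simulates the list in a file; in practice the list comes from the mount command shown earlier, and the paths here are illustrative.

```shell
# Simulated output of:
#   mount | grep global | egrep -v node@ | awk '{print $1}'
cat > /tmp/cluster-fs-list <<'EOF'
/tmp/demo-root/global/dg-schost-1
/tmp/demo-root/global/oracle
EOF

# On the node being added, create a mount point for each cluster
# file system reported by the active node.
while read -r fs; do
  mkdir -p "$fs"
done < /tmp/cluster-fs-list
```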
Is VERITAS Volume Manager (VxVM) installed on any nodes that are already in the cluster?
If yes, ensure that the same vxio number is used on the VxVM-installed nodes and that the vxio number is available for use on each of the nodes that do not have VxVM installed.
# grep vxio /etc/name_to_major
vxio NNN
If the vxio number is already in use on a node that does not have VxVM installed, free the number on that node by changing the /etc/name_to_major entry to use a different number.
If no, go to Step 37.
Install any Sun Cluster software patches.
See the Sun Cluster 3.0 5/02 Release Notes for the location of patches and installation instructions.
Do you intend to use dynamic reconfiguration?
To use dynamic reconfiguration in your cluster configuration, the servers must be supported to use dynamic reconfiguration with Sun Cluster software.
If yes, on each node add the following entry to the /etc/system file.
set kernel_cage_enable=1
This entry becomes effective after the next system reboot. See the Sun Cluster 3.0 12/01 System Administration Guide for procedures to perform dynamic reconfiguration tasks in a Sun Cluster configuration. See your server documentation for more information about dynamic reconfiguration.
If no, go to Step 39.
Did you add a new node to an existing cluster, or install Sun Cluster software patches that require you to reboot the entire cluster, or both?
If no, reboot the individual node if any patches you installed require a node reboot or if any other changes you made require a reboot to become active.
If yes, perform a reconfiguration reboot as instructed in the following steps.
From one node, shut down the cluster.
# scshutdown
Do not reboot the first-installed node of the cluster until after the cluster is shut down.
Reboot each node in the cluster.
ok boot
Until cluster install mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established cluster that is still in install mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum and the entire cluster shuts down. Cluster nodes remain in install mode until the first time you run the scsetup(1M) command, during the procedure "How to Perform Post-Installation Setup" in the Sun Cluster 3.0 12/01 Software Installation Guide.
Set up the name service look-up order.
Go to "How to Configure the Name Service Switch" in the Sun Cluster 3.0 12/01 Software Installation Guide.
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
For VERITAS Volume Manager, set your MANPATH to include the following path.
For VxVM 3.1 and earlier, use /opt/VRTSvxvm/man.
For VxVM 3.1.1 and later, use /opt/VRTS/man.
The following feature was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
Perform this procedure if the installed node cannot join the cluster or if you need to correct configuration information, for example, the transport adapters.
If the node has already joined the cluster and is no longer in install mode (see Step 11 of "How to Perform Post-Installation Setup" in the Sun Cluster 3.0 12/01 Software Installation Guide), do not perform this procedure. Instead, go to "How to Uninstall Sun Cluster Software From a Cluster Node (5/02)".
Attempt to reinstall the node.
Certain failed installations can be corrected simply by repeating the Sun Cluster software installation on the node. If you have already tried to reinstall the node without success, proceed to Step 2 to uninstall Sun Cluster software from the node.
Become superuser on an active cluster member other than the node you will uninstall.
From the active cluster member, add the node you intend to uninstall to the cluster's node authentication list.
# /usr/cluster/bin/scconf -a -T node=nodename
-a
Add
-T
Specifies authentication options
node=nodename
Specifies the name of the node to add to the authentication list
Alternately, you can use the scsetup(1M) utility. See "How to Add a Cluster Node to the Authorized Node List" in the Sun Cluster 3.0 12/01 System Administration Guide for procedures.
Become superuser on the node you intend to uninstall.
Reboot the node into non-cluster mode.
# shutdown -g0 -y -i0
ok boot -x
Uninstall the node.
# cd /
# /usr/cluster/bin/scinstall -r
See the scinstall(1M) man page for more information.
Reinstall Sun Cluster software on the node.
Refer to TABLE 2-1 in the Sun Cluster 3.0 12/01 Software Installation Guide for the list of all installation tasks and the order in which to perform them.
The following information applies to this update release and all subsequent updates.
The following changes to Step 2, Step 4, and Step 8 were introduced in the Sun Cluster 3.0 5/02 update release and apply to this update and all subsequent updates to Sun Cluster 3.0 software.
Perform this procedure for each cluster file system you add.
Any data on the disks is destroyed when you create a file system. Be sure you specify the correct disk device name. If you specify the wrong device name, you will erase data that you might not intend to delete.
If you used SunPlex Manager to install data services, one or more cluster file systems already exist if there were sufficient shared disks on which to create the cluster file systems.
Ensure that volume manager software is installed and configured.
For volume manager installation procedures, see "Installing and Configuring Solstice DiskSuite Software" or "Installing and Configuring VxVM Software" in the Sun Cluster 3.0 12/01 Software Installation Guide.
Do you intend to install VERITAS File System (VxFS) software?
If no, go to Step 3.
If yes, perform the following steps.
Follow the procedures in your VxFS installation documentation to install VxFS software on each node of the cluster.
In the /etc/system file on each node, change the setting value for the following entry from 0x4000 to 0x6000.
set rpcmod:svc_default_stksize=0x6000
Sun Cluster software requires a minimum default stack size setting of 0x6000. Because VxFS installation changes this setting to 0x4000, you must manually change it back to 0x6000 after VxFS installation is complete.
Become superuser on any node in the cluster.
For faster file system creation, become superuser on the current primary of the global device for which you create the file system.
Create a file system.
For a VxFS file system, follow procedures provided in your VxFS documentation.
For a UFS file system, use the newfs(1M) command.
# newfs raw-disk-device
The following table shows examples of names for the raw-disk-device argument. Note that naming conventions differ for each volume manager.
Table 4-3 Sample Raw Disk Device Names
Volume Manager          Sample Disk Device Name     Description
Solstice DiskSuite      /dev/md/oracle/rdsk/d1      Raw disk device d1 within the oracle diskset
VERITAS Volume Manager  /dev/vx/rdsk/oradg/vol01    Raw disk device vol01 within the oradg disk group
None                    /dev/global/rdsk/d1s3       Raw disk device d1s3
On each node in the cluster, create a mount-point directory for the cluster file system.
A mount point is required on each node, even if the cluster file system will not be accessed on that node.
For ease of administration, create the mount point in the /global/device-group directory. This location enables you to easily distinguish cluster file systems, which are globally available, from local file systems.
# mkdir -p /global/device-group/mountpoint
device-group
Name of the directory that corresponds to the name of the device group that contains the device
mountpoint
Name of the directory on which to mount the cluster file system
On each node in the cluster, add an entry to the /etc/vfstab file for the mount point.
Use the following required mount options.
Logging is required for all cluster file systems.
Solaris UFS logging - Use the global,logging mount options. See the mount_ufs(1M) man page for more information about UFS mount options.
The syncdir mount option is not required for UFS cluster file systems. If you specify syncdir, you are guaranteed POSIX-compliant file system behavior for the write() system call, in the sense that if a write() succeeds you are guaranteed that there is space on disk. If you do not specify syncdir, you will have the same behavior that is seen with UFS file systems. When you do not specify syncdir, performance of writes that allocate disk blocks, such as when appending data to a file, can significantly improve. However, in some cases, without syncdir you would not discover an out-of-space condition (ENOSPC) until you close a file. The case in which you will see ENOSPC on close is only during a very short time period after a failover. With syncdir (and POSIX behavior), the out-of-space condition would be discovered before the close.
Solstice DiskSuite trans metadevice - Use the global mount option (do not use the logging mount option). See your Solstice DiskSuite documentation for information about setting up trans metadevices.
VxFS logging - Use the global,log mount options. See the mount_vxfs(1M) man page for more information about VxFS mount options.
To automatically mount the cluster file system, set the mount at boot field to yes.
Ensure that, for each cluster file system, the information in its /etc/vfstab entry is identical on each node.
Ensure that the entries in each node's /etc/vfstab file list devices in the same order.
Check the boot order dependencies of the file systems.
For example, consider the scenario where phys-schost-1 mounts disk device d0 on /global/oracle, and phys-schost-2 mounts disk device d1 on /global/oracle/logs. With this configuration, phys-schost-2 can boot and mount /global/oracle/logs only after phys-schost-1 boots and mounts /global/oracle.
See the vfstab(4) man page for details.
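For the boot-order scenario above, the /etc/vfstab entries might look like the following sketch. The device names are illustrative; the point is that the nested mount point /global/oracle/logs depends on /global/oracle being mounted first.

```
#device                device                 mount               FS   fsck  mount    mount
#to mount              to fsck                point               type pass  at boot  options
#
/dev/md/oracle/dsk/d0  /dev/md/oracle/rdsk/d0 /global/oracle      ufs  2     yes      global,logging
/dev/md/oracle/dsk/d1  /dev/md/oracle/rdsk/d1 /global/oracle/logs ufs  2     yes      global,logging
```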
On any node in the cluster, verify that mount points exist and /etc/vfstab file entries are correct on all nodes of the cluster.
# sccheck
If no errors occur, nothing is returned.
From any node in the cluster, mount the cluster file system.
# mount /global/device-group/mountpoint
For VERITAS File System (VxFS), mount the file system from the current master of device-group to ensure that the file system mounts successfully. In addition, unmount a VxFS file system from the current master of device-group to ensure that the file system unmounts successfully.
On each node of the cluster, verify that the cluster file system is mounted.
You can use either the df(1M) or mount(1M) command to list mounted file systems.
To manage a VxFS cluster file system in a Sun Cluster environment, run administrative commands only from the primary node on which the VxFS cluster file system is mounted.
Are your cluster nodes connected to more than one public subnet?
If yes, go to "How to Configure Additional Public Network Adapters" in the Sun Cluster 3.0 12/01 Software Installation Guide to configure additional public network adapters.
If no, go to "How to Configure Public Network Management (PNM)" in the Sun Cluster 3.0 12/01 Software Installation Guide to configure PNM and set up NAFO groups.
The following example creates a UFS cluster file system on the Solstice DiskSuite metadevice /dev/md/oracle/rdsk/d1.
# newfs /dev/md/oracle/rdsk/d1
...

(on each node)
# mkdir -p /global/oracle/d1
# vi /etc/vfstab
#device                device                 mount             FS   fsck  mount    mount
#to mount              to fsck                point             type pass  at boot  options
#
/dev/md/oracle/dsk/d1  /dev/md/oracle/rdsk/d1 /global/oracle/d1 ufs  2     yes      global,logging
(save and exit)

(on one node)
# sccheck
# mount /global/oracle/d1
# mount
...
/global/oracle/d1 on /dev/md/oracle/dsk/d1 read/write/setuid/global/logging/largefiles
on Sun Oct 3 08:56:16 2000
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
Perform this task to create or modify the NTP configuration file after you install Sun Cluster software. You must also modify the NTP configuration file when you add a node to an existing cluster and when you change the private hostname of a node in the cluster.
The primary requirement when you configure NTP, or any time synchronization facility within the cluster, is that all cluster nodes must be synchronized to the same time. Consider accuracy of time on individual nodes to be of secondary importance to the synchronization of time among nodes. You are free to configure NTP as best meets your individual needs, as long as this basic requirement for synchronization is met. See Sun Cluster 3.0 12/01 Concepts for further information about cluster time. See the /etc/inet/ntp.cluster template file for additional guidelines on how to configure NTP for a Sun Cluster configuration.
Did you install your own /etc/inet/ntp.conf file before you installed Sun Cluster software?
Become superuser on a cluster node.
Do you have your own /etc/inet/ntp.conf file to install on the cluster nodes?
If yes, copy your /etc/inet/ntp.conf file to each node of the cluster, then skip to Step 6.
All cluster nodes must be synchronized to the same time.
If no, go to Step 4 to edit the /etc/inet/ntp.conf.cluster file. Sun Cluster software creates this file as the NTP configuration file if an /etc/inet/ntp.conf file is not found during Sun Cluster installation. Do not rename the ntp.conf.cluster file to ntp.conf.
On one node of the cluster, edit the private hostnames in the /etc/inet/ntp.conf.cluster file.
If /etc/inet/ntp.conf.cluster does not exist on the node, you might have an /etc/inet/ntp.conf file from an earlier installation of Sun Cluster software. If so, perform the following edits on that ntp.conf file.
Ensure that an entry exists for the private hostname of each cluster node.
Remove any unused private hostnames.
If the ntp.conf.cluster file contains non-existent private hostnames, when a node is rebooted the system will generate error messages when the node attempts to contact those non-existent private hostnames.
If you changed a node's private hostname, ensure that the NTP configuration file contains the new private hostname.
If necessary, make other modifications to meet your NTP requirements.
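After these edits, the private-hostname section of the configuration file for a two-node cluster might look like the following sketch. The hostnames follow the default clusternodeN-priv naming convention and are illustrative; verify them against your own cluster's private hostnames.

```
peer clusternode1-priv prefer
peer clusternode2-priv
```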
Copy the NTP configuration file to all nodes in the cluster.
The contents of the ntp.conf.cluster file must be identical on all cluster nodes.
Stop the NTP daemon on each node.
Wait for the stop command to complete successfully on each node before you proceed to Step 7.
# /etc/init.d/xntpd stop
Restart the NTP daemon on each node.
For ntp.conf.cluster, run the following command.
# /etc/init.d/xntpd.cluster start
The xntpd.cluster startup script first looks for the /etc/inet/ntp.conf file. If that file exists, the script exits immediately without starting the NTP daemon. If ntp.conf does not exist but ntp.conf.cluster does exist, the NTP daemon is started using ntp.conf.cluster as the NTP configuration file.
For ntp.conf, run the following command.
# /etc/init.d/xntpd start
Do you intend to use Sun Management Center to configure resource groups or monitor the cluster?
If yes, go to "Installing the Sun Cluster Module for Sun Management Center" in the Sun Cluster 3.0 12/01 Software Installation Guide.
If no, install third-party applications, register resource types, set up resource groups, and configure data services. See the documentation supplied with the application software and the Sun Cluster 3.0 12/01 Data Services Installation and Configuration Guide.
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
After Step 2 - Perform the following step after Step 2, as the new Step 3. The original Step 3 becomes the new Step 4.
On each Sun Management Center agent machine (cluster node), ensure that the scsymon_srv daemon is running.
# ps -ef | grep scsymon_srv
If any cluster node is not already running the scsymon_srv daemon, start the daemon on that node.
# /usr/cluster/lib/scsymon/scsymon_srv
The following information applies to this update release and all subsequent updates.
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
CD-ROM path - Change all occurrences of the framework CD-ROM path to /cdrom/suncluster_3_0_u3. This applies to the Sun Cluster 3.0 5/02 CD-ROM.
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
CD-ROM path - Change all occurrences of the data services CD-ROM path to /cdrom/scdataservices_3_0_u3. This applies to the Sun Cluster 3.0 Agents 5/02 CD-ROM.
The following information applies to this update release and all subsequent updates.
For Sun Cluster 3.0 5/02 on the Solaris 9 operating environment, information and procedures for Solstice DiskSuite software apply as well to Solaris Volume Manager software unless alternative information for Solaris 9 is specified.
The following changes were introduced in the Sun Cluster 3.0 5/02 update release and apply to this update and all subsequent updates to Sun Cluster 3.0 software.
Metadevice-name maximum - The following are corrections to Step 1 and Step 2. The correct maximum number of metadevice names each Solstice DiskSuite (Solaris 8) diskset can have is 1024. For Solaris Volume Manager (Solaris 9), the maximum is 8192 metadevice names per diskset, as documented in the Sun Cluster 3.0 12/01 Software Installation Guide.
Calculate the largest metadevice name you need for any diskset in the cluster.
Each diskset can have a maximum of 1024 metadevice names. You will supply this calculated value for the nmd field.
Calculate the quantity of metadevice names you need for each diskset.
If you use local metadevices, ensure that each local metadevice name is unique throughout the cluster and does not use the same name as any device ID (DID) in the cluster.
Choose a range of numbers to use exclusively for DID names and a range for each node to use exclusively for its local metadevice names. For example, DIDs would use names in the range from d1 to d99, local metadevices on node 1 would use names in the range from d100 to d199, local metadevices on node 2 would use d200 to d299, and so on.
Determine the largest of the metadevice names to be used in any diskset.
The quantity of metadevice names to set is based on the metadevice name value rather than on the actual quantity. For example, if your metadevice names range from d950 to d1000, Solstice DiskSuite software requires 1000 names, not 50.
Calculate the total expected number of disksets in the cluster, then add one for private disk management.
The cluster can have a maximum of 31 disksets, not including the diskset for private disk management. The default number of disksets is 4. You will supply this calculated value for the md_nsets field.
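The calculated values are supplied through /kernel/drv/md.conf. As an illustrative sketch only, a cluster that needs metadevice names up to d1024 and four data disksets (plus one for private disk management) might carry settings like these; the exact line format in your md.conf file may differ, so edit the existing nmd and md_nsets fields rather than copying this verbatim.

```
name="md" parent="pseudo" nmd=1024 md_nsets=5;
```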
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
Perform this procedure for each diskset you create.
If you used SunPlex Manager to install Solstice DiskSuite, one to three disksets might already exist. See "Using SunPlex Manager to Install Sun Cluster Software (5/02)" for information about the metasets created by SunPlex Manager.
Do you intend to create more than three disksets in the cluster?
Ensure that the value of the md_nsets variable is set high enough to accommodate the total number of disksets you intend to create in the cluster.
On any node of the cluster, check the value of the md_nsets variable in the /kernel/drv/md.conf file.
If the total number of disksets in the cluster will be greater than the existing value of md_nsets minus one, on each node increase the value of md_nsets to the desired value.
The maximum permissible number of disksets is one less than the value of md_nsets. The maximum possible value of md_nsets is 32.
Ensure that the /kernel/drv/md.conf file is identical on each node of the cluster.
Failure to follow this guideline can result in serious Solstice DiskSuite errors and possible loss of data.
From one node, shut down the cluster.
# scshutdown -g0 -y
Reboot each node of the cluster.
ok boot
On each node in the cluster, run the devfsadm(1M) command.
You can run this command on all nodes in the cluster at the same time.
From one node of the cluster, run the scgdevs(1M) command.
On each node, verify that the scgdevs command has completed before you attempt to create any disksets.
The scgdevs command calls itself remotely on all nodes, even when the command is run from just one node. To determine whether the scgdevs command has completed processing, run the following command on each node of the cluster.
% ps -ef | grep scgdevs
Ensure that the diskset you intend to create meets one of the following requirements.
If configured with exactly two disk strings, the diskset must connect to exactly two nodes and use exactly two mediator hosts, which must be the same two hosts used for the diskset. See "Mediators Overview" in the Sun Cluster 3.0 12/01 Software Installation Guide for details on how to set up mediators.
If configured with more than two disk strings, ensure that for any two disk strings S1 and S2, the sum of the number of disks on those strings exceeds the number of disks on the third string S3. Stated as a formula, the requirement is that count(S1) + count(S2) > count(S3).
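The string-count rule can be checked with simple shell arithmetic. The disk counts below are hypothetical; substitute the counts from your own configuration.

```shell
# Hypothetical disk counts for three disk strings S1, S2, S3.
# The rule requires that any two strings together hold more disks
# than the remaining string.
s1=6; s2=5; s3=8
if [ $((s1 + s2)) -gt "$s3" ] && \
   [ $((s1 + s3)) -gt "$s2" ] && \
   [ $((s2 + s3)) -gt "$s1" ]; then
  echo "disk string counts satisfy the requirement"
else
  echo "disk string counts violate the requirement"
fi
```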
Ensure that root is a member of group 14.
# vi /etc/group
...
sysadmin::14:root
...
Ensure that the local metadevice state database replicas exist.
For instructions, see "How to Create Metadevice State Database Replicas" in the Sun Cluster 3.0 12/01 Software Installation Guide.
Become superuser on the cluster node that will master the diskset.
Create the diskset.
This command also registers the diskset as a Sun Cluster disk device group.
# metaset -s setname -a -h node1 node2
-s setname
Specifies the diskset name
-a
Adds (creates) the diskset
-h node1
Specifies the name of the primary node to master the diskset
node2
Specifies the name of the secondary node to master the diskset
Verify the status of the new diskset.
# metaset -s setname
Add drives to the diskset.
Go to "Adding Drives to a Diskset" in the Sun Cluster 3.0 12/01 Software Installation Guide.
The following command creates two disksets, dg-schost-1 and dg-schost-2, with the nodes phys-schost-1 and phys-schost-2 assigned as the potential primaries.
# metaset -s dg-schost-1 -a -h phys-schost-1 phys-schost-2
# metaset -s dg-schost-2 -a -h phys-schost-1 phys-schost-2
The following information applies to this update release and all subsequent updates.
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
This procedure uses the scvxinstall(1M) command to install VxVM software and encapsulate the root disk in one operation.
If you intend to create the rootdg disk group on local, non-root disks, go instead to "How to Install VERITAS Volume Manager Software Only (5/02)".
Perform this procedure on each node that you intend to install with VxVM. You can install VERITAS Volume Manager (VxVM) on all nodes of the cluster, or on only those nodes that are physically connected to the storage device(s) VxVM will manage.
Ensure that the cluster meets the following prerequisites.
All nodes in the cluster are running in cluster mode.
The root disk of the node you install has two free (unassigned) partitions.
Become superuser on a node you intend to install with VxVM.
Add all nodes in the cluster to the cluster node authentication list.
Start the scsetup(1M) utility.
# scsetup
The Main Menu is displayed.
To access the New Nodes Menu, type 6 at the Main Menu.
To add a node to the authorized list, type 3 at the New Nodes Menu.
Specify the name of a machine which may add itself.
Follow the prompts to add the node's name to the cluster. You will be asked for the name of the node to be added.
Verify that the task has been performed successfully.
The scsetup utility prints a Command completed successfully message if it completes the task without error.
Repeat Step c through Step e for each node of the cluster until all cluster nodes are added to the node authentication list.
Quit the scsetup utility.
Insert the VxVM CD-ROM into the CD-ROM drive on the node.
Start scvxinstall in interactive mode.
Press Control-C at any time to abort the scvxinstall command.
# scvxinstall
See the scvxinstall(1M) man page for more information.
When prompted whether to encapsulate root, type yes.
Do you want Volume Manager to encapsulate root [no]? y
When prompted, provide the location of the VxVM CD-ROM.
If the appropriate VxVM CD-ROM is found, the location is displayed as part of the prompt within brackets. Press Enter to accept this default location.
Where is the volume manager cdrom [default]?
If the VxVM CD-ROM is not found, the prompt is displayed without a default location. Type the location of the CD-ROM or CD-ROM image.
Where is the volume manager cdrom?
When prompted, type your VxVM license key.
Please enter license key: license
The scvxinstall command automatically performs the following tasks.
Disables Dynamic Multipathing (DMP)
Although the scvxinstall utility disables Dynamic Multipathing (DMP) at the start of installation processing, DMP is automatically re-enabled by VxVM version 3.1.1 or later when the VRTSvxvm package is installed. Earlier versions of VxVM must still run with DMP disabled.
Installs the VRTSvxvm, VRTSvmdev, and VRTSvmman packages, and installs the VRTSlic package if installing VxVM 3.2 or later
Selects a cluster-wide vxio driver major number
Creates a rootdg disk group by encapsulating the root disk
Updates the /global/.devices entry in the /etc/vfstab file
See the scvxinstall(1M) man page for further details.
During installation there are two automatic reboots. After all installation tasks are completed, scvxinstall automatically reboots the node the second time unless you press Control-C when prompted. If you press Control-C to abort the second reboot, you must reboot the node later to complete VxVM installation.
If you intend to enable the VxVM cluster feature, run the vxlicense command to supply the cluster feature license key.
See your VxVM documentation for information about the vxlicense command.
(Optional) Install the VxVM GUI.
# pkgadd VRTSvmsa
See your VxVM documentation for information about the VxVM GUI.
Eject the CD-ROM.
Install any VxVM patches.
See the Sun Cluster 3.0 5/02 Release Notes for the location of patches and installation instructions.
(Optional) If you prefer not to have VxVM man pages reside on the cluster node, remove the man-page package.
# pkgrm VRTSvmman
Do you intend to install VxVM on another node?
Are there one or more nodes that you do not intend to install with VxVM?
If you intend to enable the VxVM cluster feature, you must install VxVM on all nodes of the cluster.
Modify the /etc/name_to_major file on each non-VxVM node.
On a node installed with VxVM, determine the vxio major number setting.
# grep vxio /etc/name_to_major
Become superuser on a node that you do not intend to install with VxVM.
Edit the /etc/name_to_major file and add an entry to set the vxio major number to NNN, the number derived in Step a.
# vi /etc/name_to_major
vxio NNN
Initialize the vxio entry.
# drvconfig -b -i vxio -m NNN
Repeat Step b through Step d on all other nodes that you do not intend to install with VxVM.
When you finish, each node of the cluster should have the same vxio entry in its /etc/name_to_major file.
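The /etc/name_to_major file uses a simple two-field format: a driver name followed by its major number, one pair per line. The lookup in Step a can be sketched as follows. The sample file and its path are illustrative only; on a cluster node you would read /etc/name_to_major itself.

```shell
#!/bin/sh
# Look up a driver's major number in a name_to_major-format file.
# Format: one "driver major" pair per line.
get_major() {
    awk -v d="$1" '$1 == d { print $2 }' "$2"
}

# Illustrative sample data; on a real node, use /etc/name_to_major.
cat > /tmp/name_to_major.sample <<'EOF'
sd 32
st 33
vxio 210
EOF

get_major vxio /tmp/name_to_major.sample
```

The number this prints is the value to substitute for NNN in the vi and drvconfig steps above.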
Prevent any new machines from being added to the cluster.
Start the scsetup(1M) utility.
# scsetup
The Main Menu is displayed.
To access the New Nodes Menu, type 6 at the Main Menu.
Type 1 at the New Nodes Menu.
Follow the scsetup prompts. This option tells the cluster to ignore all requests coming in over the public network from any new machine that tries to add itself to the cluster.
Quit the scsetup utility.
Do you intend to mirror the encapsulated root disk?
If yes, go to "How to Mirror the Encapsulated Root Disk" in the Sun Cluster 3.0 12/01 Software Installation Guide.
If no, go to "How to Create and Register a Disk Group" in the Sun Cluster 3.0 12/01 Software Installation Guide.
If you later need to unencapsulate the root disk, follow the procedures in "How to Unencapsulate the Root Disk" in the Sun Cluster 3.0 12/01 Software Installation Guide.
The following change was introduced in the Sun Cluster 3.0 5/02 update release and applies to this update and all subsequent updates to Sun Cluster 3.0 software.
This procedure uses the scvxinstall command to install VERITAS Volume Manager (VxVM) software only.
Do not use this procedure if you want to create the rootdg disk group by encapsulating the root disk. Instead, go to "How to Install VERITAS Volume Manager Software and Encapsulate the Root Disk (5/02)" to install VxVM software and encapsulate the root disk in one operation.
Perform this procedure on each node that you want to install with VxVM. You can install VxVM on all nodes of the cluster, or on only those nodes that are physically connected to the storage devices that VxVM will manage.
Ensure that all nodes in the cluster are running in cluster mode.
Become superuser on a cluster node you intend to install with VxVM.
Add all nodes in the cluster to the cluster node authentication list.
Start the scsetup(1M) utility.
# scsetup
The Main Menu is displayed.
To access the New Nodes Menu, type 6 at the Main Menu.
To add a node to the authorized list, type 3 at the New Nodes Menu.
Specify the name of a machine that may add itself.
Follow the prompts to add the node's name to the cluster.
Verify that the task has been performed successfully.
The scsetup utility prints a Command completed successfully message if it completes the task without error.
Repeat Step c through Step e for each node of the cluster until all cluster nodes are added to the node authentication list.
Quit the scsetup utility.
Insert the VxVM CD-ROM into the CD-ROM drive on the node.
Start scvxinstall in interactive installation mode.
# scvxinstall -i
The scvxinstall command automatically performs the following tasks.
Disables Dynamic Multipathing (DMP)
Although the scvxinstall utility disables Dynamic Multipathing (DMP) at the start of installation processing, DMP is automatically re-enabled by VxVM version 3.1.1 or later when the VRTSvxvm package is installed. Earlier versions of VxVM must still run with DMP disabled.
Installs the VRTSvxvm, VRTSvmdev, and VRTSvmman packages, and installs the VRTSlic package if installing VxVM 3.2 or later
Selects a cluster-wide vxio driver major number
See the scvxinstall(1M) man page for information.
(Optional) Install the VxVM GUI.
# pkgadd VRTSvmsa
See your VxVM documentation for information about the VxVM GUI.
Eject the CD-ROM.
Install any VxVM patches.
See the Sun Cluster 3.0 5/02 Release Notes for the location of patches and installation instructions.
(Optional) If you prefer not to have VxVM man pages reside on the cluster node, remove the man-page package.
# pkgrm VRTSvmman
Do you intend to install VxVM on another node?
Are there one or more nodes that you do not intend to install with VxVM?
If you intend to enable the VxVM cluster feature, you must install VxVM on all nodes of the cluster.
Modify the /etc/name_to_major file on each non-VxVM node.
On a node installed with VxVM, determine the vxio major number setting.
# grep vxio /etc/name_to_major
Become superuser on a node that you do not intend to install with VxVM.
Edit the /etc/name_to_major file and add an entry to set the vxio major number to NNN, the number derived in Step a.
# vi /etc/name_to_major
vxio NNN
Initialize the vxio entry.
# drvconfig -b -i vxio -m NNN
Repeat Step b through Step d on all other nodes that you do not intend to install with VxVM.
When you finish, each node of the cluster should have the same vxio entry in its /etc/name_to_major file.
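One way to confirm that the entries match is to collect each node's vxio line and verify that exactly one distinct line remains. The following is a minimal sketch; the files standing in for per-node output are assumptions made for illustration, since on a live cluster you would gather each line with a remote shell such as rsh or ssh.

```shell
#!/bin/sh
# Each file stands in for the output of "grep vxio /etc/name_to_major"
# captured from one cluster node (hypothetical node names).
mkdir -p /tmp/vxio_check
echo "vxio 210" > /tmp/vxio_check/node1
echo "vxio 210" > /tmp/vxio_check/node2

# The entries are consistent when sort -u leaves exactly one distinct line.
distinct=$(sort -u /tmp/vxio_check/node1 /tmp/vxio_check/node2 | wc -l)
if [ "$distinct" -eq 1 ]; then
    echo "vxio entries are consistent"
else
    echo "vxio entries differ; fix /etc/name_to_major before proceeding"
fi
```

If the entries differ, correct the /etc/name_to_major file on the mismatched nodes before continuing, because an inconsistent vxio major number prevents the nodes from sharing VxVM devices.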
Prevent any new machines from being added to the cluster.
Start the scsetup(1M) utility.
# scsetup
The Main Menu is displayed.
To access the New Nodes Menu, type 6 at the Main Menu.
Type 1 at the New Nodes Menu.
Follow the scsetup prompts. This option tells the cluster to ignore all requests coming in over the public network from any new machine that tries to add itself to the cluster.
Quit the scsetup utility.
Create a rootdg disk group.
Go to "How to Create a rootdg Disk Group on a Non-Root Disk" in the Sun Cluster 3.0 12/01 Software Installation Guide.