Oracle Solaris Cluster Software Installation Guide, Oracle Solaris Cluster 3.3 3/13
Establishing a New Global Cluster or New Global-Cluster Node
How to Configure Oracle Solaris Cluster Software on All Nodes (scinstall)
How to Configure Oracle Solaris Cluster Software on All Nodes (XML)
How to Install Oracle Solaris and Oracle Solaris Cluster Software (JumpStart)
How to Prepare the Cluster for Additional Global-Cluster Nodes
How to Change the Private Network Configuration When Adding Nodes or Private Networks
How to Configure Oracle Solaris Cluster Software on Additional Global-Cluster Nodes (scinstall)
How to Configure Oracle Solaris Cluster Software on Additional Global-Cluster Nodes (XML)
How to Update Quorum Devices After Adding a Node to a Global Cluster
How to Configure Quorum Devices
How to Verify the Quorum Configuration and Installation Mode
How to Change Private Hostnames
Configuring the Distribution of Resource Group Load Across Nodes
How to Configure Load Limits for a Node
How to Set Priority for a Resource Group
How to Set Load Factors for a Resource Group
How to Set Preemption Mode for a Resource Group
How to Concentrate Load Onto Fewer Nodes in the Cluster
How to Configure Network Time Protocol (NTP)
How to Configure IP Security Architecture (IPsec) on the Cluster Private Interconnect
This section provides information and procedures to establish a new global cluster or to add a node to an existing cluster. Global-cluster nodes can be physical machines, (SPARC only) Oracle VM Server for SPARC I/O domains, or (SPARC only) Oracle VM Server for SPARC guest domains. A cluster can consist of a combination of any of these node types. Before you start to perform these tasks, ensure that you installed software packages for the Oracle Solaris OS, Oracle Solaris Cluster framework, and other products as described in Installing the Software.
The following task maps list the tasks to perform for either a new global cluster or a node added to an existing global cluster. Complete the procedures in the order that is indicated.
Table 3-1 Task Map: Establish a New Global Cluster
Table 3-2 Task Map: Add a Node to an Existing Global Cluster
Perform this procedure from one node of the global cluster to configure Oracle Solaris Cluster software on all nodes of the cluster.
Note - This procedure uses the interactive form of the scinstall command. To use the noninteractive forms of the scinstall command, such as when developing installation scripts, see the scinstall(1M) man page.
Ensure that Oracle Solaris Cluster software packages are installed on the node, either manually or by using the silent-mode form of the installer program, before you run the scinstall command. For information about running the installer program from an installation script, see Chapter 5, Installing in Silent Mode, in Sun Java Enterprise System 5 Update 1 Installation Guide for UNIX.
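As a reference sketch only, the noninteractive form resembles the command line that scinstall itself records in its log when it adds a node (compare Example 3-3). The cluster, sponsor-node, adapter, and switch names here are placeholders; the scinstall(1M) man page remains the authoritative source for options.
phys-schost-3# /usr/cluster/bin/scinstall -ik -C schost -N phys-schost-1 \
-A trtype=dlpi,name=bge2 -A trtype=dlpi,name=bge3 \
-m endpoint=:bge2,endpoint=switch1 -m endpoint=:bge3,endpoint=switch2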
Before You Begin
Perform the following tasks:
Ensure that the Oracle Solaris OS is installed to support Oracle Solaris Cluster software.
If Oracle Solaris software is already installed on the node, you must ensure that the Oracle Solaris installation meets the requirements for Oracle Solaris Cluster software and any other software that you intend to install on the cluster. See How to Install Oracle Solaris Software for more information about installing Oracle Solaris software to meet Oracle Solaris Cluster software requirements.
SPARC: If you are configuring Oracle VM Server for SPARC I/O domains or guest domains as cluster nodes, ensure that Oracle VM Server for SPARC software is installed on each physical machine and that the domains meet Oracle Solaris Cluster requirements. See SPARC: How to Install Oracle VM Server for SPARC Software and Create Domains.
Ensure that Oracle Solaris Cluster software packages and patches are installed on each node. See How to Install Oracle Solaris Cluster Framework and Data-Service Software Packages.
Determine which mode of the scinstall utility you will use, Typical or Custom.
For the Typical installation of Oracle Solaris Cluster software, scinstall automatically specifies the following configuration defaults.
Complete one of the following cluster configuration worksheets, depending on whether you run the scinstall utility in Typical mode or Custom mode.
Typical Mode Worksheet – If you will use Typical mode and accept all defaults, complete the following worksheet.
Custom Mode Worksheet – If you will use Custom mode and customize the configuration data, complete the following worksheet.
Note - If you are installing a single-node cluster, the scinstall utility automatically assigns the default private network address and netmask, even though the cluster does not use a private network.
Follow these guidelines to use the interactive scinstall utility in this procedure:
Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.
Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.
Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.
Enable remote shell (rsh(1M)) or secure shell (ssh(1)) access for superuser to all cluster nodes.
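For example, you can verify superuser access from the node where you will run scinstall; the node names here are placeholders. If the command prints the remote hostname without prompting for a password, access is correctly enabled.
phys-schost-1# ssh phys-schost-2 hostname
phys-schost-2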
Follow the procedures in the documentation for your switches to determine whether NDP is enabled and to disable NDP.
During cluster configuration, the software checks that there is no traffic on the private interconnect. If NDP sends any packets to a private adapter when the private interconnect is being checked for traffic, the software will assume that the interconnect is not private and cluster configuration will be interrupted. NDP must therefore be disabled during cluster creation.
After the cluster is established, you can re-enable NDP on the private-interconnect switches if you want to use that feature.
phys-schost# /usr/cluster/bin/scinstall
 *** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Create a new cluster or add a cluster node
      * 2) Configure a cluster to be JumpStarted from this install server
        3) Manage a dual-partition upgrade
        4) Upgrade this cluster node
      * 5) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  1
The New Cluster and Cluster Node Menu is displayed.
The Typical or Custom Mode menu is displayed.
The Create a New Cluster screen is displayed. Read the requirements, then press Control-D to continue.
The scinstall utility installs and configures all cluster nodes and reboots the cluster. The cluster is established when all nodes have successfully booted into the cluster. Oracle Solaris Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.
If services are not yet online for a node, wait until the state changes to online before you proceed to the next step.
phys-schost# svcs multi-user-server node
STATE          STIME    FMRI
online         17:52:55 svc:/milestone/multi-user-server:default
phys-schost# clnode status
Output resembles the following.
=== Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
phys-schost-1                                   Online
phys-schost-2                                   Online
phys-schost-3                                   Online
For more information, see the clnode(1CL) man page.
This feature automatically reboots a node if all monitored shared-disk paths fail, provided that at least one of the disks is accessible from a different node in the cluster.
Note - At initial configuration time, disk-path monitoring is enabled by default for all discovered devices.
phys-schost# clnode set -p reboot_on_path_failure=enabled
-p
    Specifies the property to set
reboot_on_path_failure=enabled
    Enables automatic node reboot if failure of all monitored shared-disk paths occurs.
phys-schost# clnode show

=== Cluster Nodes ===

Node Name:                                      node
…
  reboot_on_path_failure:                          enabled
…
To disable LOFS, add the following entry to the /etc/system file on each node of the cluster.
exclude:lofs
The change to the /etc/system file becomes effective after the next system reboot.
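If you prefer to append the entry from the command line rather than with an editor, the following sketch works on each node; verify the result before you reboot.
phys-schost# echo "exclude:lofs" >> /etc/system
phys-schost# grep lofs /etc/system
exclude:lofs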
Note - You cannot have LOFS enabled if you use HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for HA for NFS. If you choose to add HA for NFS on a highly available local file system, you must make one of the following configuration changes.
However, if you configure non-global zones in your cluster, you must enable LOFS on all cluster nodes. If HA for NFS on a highly available local file system must coexist with LOFS, use one of the other solutions instead of disabling LOFS.
Disable LOFS.
Disable the automountd daemon.
Exclude from the automounter map all files that are part of the highly available local file system that is exported by HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.
See The Loopback File System in System Administration Guide: Devices and File Systems for more information about loopback file systems.
Example 3-1 Configuring Oracle Solaris Cluster Software on All Nodes
The following example shows the scinstall progress messages that are logged as scinstall completes configuration tasks on the two-node cluster, schost. The cluster is installed from phys-schost-1 by using the scinstall utility in Typical mode. The other cluster node is phys-schost-2. The adapter names are bge2 and bge3. The automatic selection of a quorum device is enabled.
  Installation and Configuration

    Log file - /var/cluster/logs/install/scinstall.log.24747

    Configuring global device using lofi on phys-schost-1: done
    Starting discovery of the cluster transport configuration.
    The Oracle Solaris Cluster software is already installed on "phys-schost-1".
    The Oracle Solaris Cluster software is already installed on "phys-schost-2".
    Starting discovery of the cluster transport configuration.

    The following connections were discovered:

        phys-schost-1:bge2  switch1  phys-schost-2:bge2
        phys-schost-1:bge3  switch2  phys-schost-2:bge3

    Completed discovery of the cluster transport configuration.

    Started cluster check on "phys-schost-1".
    Started cluster check on "phys-schost-2".

    cluster check completed with no errors or warnings for "phys-schost-1".
    cluster check completed with no errors or warnings for "phys-schost-2".

    Removing the downloaded files … done

    Configuring "phys-schost-2" … done
    Rebooting "phys-schost-2" … done

    Configuring "phys-schost-1" … done
    Rebooting "phys-schost-1" …

Log file - /var/cluster/logs/install/scinstall.log.24747

Rebooting …
Troubleshooting
Unsuccessful configuration – If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to rerun this procedure. If that does not correct the problem, perform the procedure How to Unconfigure Oracle Solaris Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Oracle Solaris Cluster software packages. Then rerun this procedure.
Next Steps
If you installed a single-node cluster, cluster establishment is complete. Go to Creating Cluster File Systems to install volume management software and configure the cluster.
If you installed a multiple-node cluster and chose automatic quorum configuration, postinstallation setup is complete. Go to How to Verify the Quorum Configuration and Installation Mode.
If you installed a multiple-node cluster and declined automatic quorum configuration, perform postinstallation setup. Go to How to Configure Quorum Devices.
If you intend to configure any quorum devices in your cluster, go to How to Configure Quorum Devices.
Otherwise, go to How to Verify the Quorum Configuration and Installation Mode.
Perform this procedure to configure a new global cluster by using an XML cluster configuration file. The new cluster can be a duplication of an existing cluster that runs Oracle Solaris Cluster 3.3 3/13 software.
This procedure configures the following cluster components:
Cluster name
Cluster node membership
Cluster interconnect
Global devices
Before You Begin
Perform the following tasks:
Ensure that the Oracle Solaris OS is installed to support Oracle Solaris Cluster software.
If Oracle Solaris software is already installed on the node, you must ensure that the Oracle Solaris installation meets the requirements for Oracle Solaris Cluster software and any other software that you intend to install on the cluster. See How to Install Oracle Solaris Software for more information about installing Oracle Solaris software to meet Oracle Solaris Cluster software requirements.
SPARC: If you are configuring Oracle VM Server for SPARC I/O domains or guest domains as cluster nodes, ensure that Oracle VM Server for SPARC software is installed on each physical machine and that the domains meet Oracle Solaris Cluster requirements. See SPARC: How to Install Oracle VM Server for SPARC Software and Create Domains.
Ensure that Oracle Solaris Cluster 3.3 3/13 software and patches are installed on each node that you will configure. See How to Install Oracle Solaris Cluster Framework and Data-Service Software Packages.
phys-schost# /usr/sbin/clinfo -n
clinfo: node is not configured as part of a cluster: Operation not applicable
This message indicates that Oracle Solaris Cluster software is not yet configured on the potential node.
The return of a node ID indicates that Oracle Solaris Cluster software is already configured on the node.
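For example, on a machine that is already a cluster member, the command prints the node ID rather than the error message; the value shown here is illustrative.
phys-schost# /usr/sbin/clinfo -n
1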
If the cluster is running an older version of Oracle Solaris Cluster software and you want to install Oracle Solaris Cluster 3.3 3/13 software, instead perform upgrade procedures in Oracle Solaris Cluster Upgrade Guide.
If Oracle Solaris Cluster software is not yet configured on any of the potential cluster nodes, proceed to Step 2.
Follow the procedures in the documentation for your switches to determine whether NDP is enabled and to disable NDP.
During cluster configuration, the software checks that there is no traffic on the private interconnect. If NDP sends any packets to a private adapter when the private interconnect is being checked for traffic, the software will assume that the interconnect is not private and cluster configuration will be interrupted. NDP must therefore be disabled during cluster creation.
After the cluster is established, you can re-enable NDP on the private-interconnect switches if you want to use that feature.
phys-schost# cluster export -o clconfigfile
-o
    Specifies the output destination.
clconfigfile
    The name of the cluster configuration XML file. The specified file name can be an existing file or a new file that the command will create.
For more information, see the cluster(1CL) man page.
You can store the file in any directory that is accessible to the other hosts that you will configure as cluster nodes.
Base the file on the element hierarchy that is shown in the clconfiguration(5CL) man page. You can store the file in any directory that is accessible to the other hosts that you will configure as cluster nodes.
To establish a cluster, the following components must have valid values in the cluster configuration XML file:
Cluster name
Cluster nodes
Cluster transport
By default, the cluster is created with the global-devices namespace configured on a lofi device. If you instead need to use a dedicated file system on which to create the global devices, add the following property to the <propertyList> element for each node that will use a partition instead of a lofi device.
…
  <nodeList>
    <node name="node" id="N">
      <propertyList>
…
        <property name="globaldevfs" value="/filesystem-name"/>
…
      </propertyList>
    </node>
…
If you are modifying configuration information that was exported from an existing cluster, some values that you must change to reflect the new cluster, such as node names, are used in the definitions of more than one cluster object.
See the clconfiguration(5CL) man page for details about the structure and content of the cluster configuration XML file.
phys-schost# xmllint --valid --noout clconfigfile
See the xmllint(1) man page for more information.
phys-schost# cluster create -i clconfigfile
-i clconfigfile
    Specifies the name of the cluster configuration XML file to use as the input source.
If services are not yet online for a node, wait until the state changes to online before you proceed to the next step.
phys-schost# svcs multi-user-server node
STATE          STIME    FMRI
online         17:52:55 svc:/milestone/multi-user-server:default
phys-schost# clnode status
Output resembles the following.
=== Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
phys-schost-1                                   Online
phys-schost-2                                   Online
phys-schost-3                                   Online
For more information, see the clnode(1CL) man page.
See Patches and Required Firmware Levels in Oracle Solaris Cluster 3.3 3/13 Release Notes for the location of patches and installation instructions.
To disable LOFS, add the following entry to the /etc/system file on each node of the cluster.
exclude:lofs
The change to the /etc/system file becomes effective after the next system reboot.
Note - You cannot have LOFS enabled if you use HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for HA for NFS. If you choose to add HA for NFS on a highly available local file system, you must make one of the following configuration changes.
However, if you configure non-global zones in your cluster, you must enable LOFS on all cluster nodes. If HA for NFS on a highly available local file system must coexist with LOFS, use one of the other solutions instead of disabling LOFS.
Disable LOFS.
Disable the automountd daemon.
Exclude from the automounter map all files that are part of the highly available local file system that is exported by HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.
See The Loopback File System in System Administration Guide: Devices and File Systems for more information about loopback file systems.
You must configure a quorum device if you created a two-node cluster. If you choose not to use the cluster configuration XML file to create a required quorum device, go instead to How to Configure Quorum Devices.
Follow instructions in How to Install and Configure Quorum Server Software.
See Oracle Solaris Cluster 3.3 3/13 With Network-Attached Storage Device Manual.
phys-schost# xmllint --valid --noout clconfigfile
phys-schost# clquorum add -i clconfigfile devicename
devicename
    Specifies the name of the device to configure as a quorum device.
phys-schost# clquorum reset
phys-schost# claccess deny-all
Note - At initial configuration time, disk-path monitoring is enabled by default for all discovered devices.
phys-schost# clnode set -p reboot_on_path_failure=enabled
-p
    Specifies the property to set
reboot_on_path_failure=enabled
    Enables automatic node reboot if failure of all monitored shared-disk paths occurs.
phys-schost# clnode show

=== Cluster Nodes ===

Node Name:                                      node
…
  reboot_on_path_failure:                          enabled
…
Example 3-2 Configuring Oracle Solaris Cluster Software on All Nodes By Using an XML File
The following example duplicates the cluster configuration and quorum configuration of an existing two-node cluster to a new two-node cluster. The new cluster is installed with the Oracle Solaris 10 OS and is not configured with non-global zones. The cluster configuration is exported from the existing cluster node, phys-oldhost-1, to the cluster configuration XML file clusterconf.xml. The node names of the new cluster are phys-newhost-1 and phys-newhost-2. The device that is configured as a quorum device in the new cluster is d3.
The prompt name phys-newhost-N in this example indicates that the command is performed on both cluster nodes.
phys-newhost-N# /usr/sbin/clinfo -n
clinfo: node is not configured as part of a cluster: Operation not applicable

phys-oldhost-1# cluster export -o clusterconf.xml

Copy clusterconf.xml to phys-newhost-1 and modify the file with valid values

phys-newhost-1# xmllint --valid --noout clusterconf.xml
No errors are reported

phys-newhost-1# cluster create -i clusterconf.xml
phys-newhost-N# svcs multi-user-server
STATE          STIME    FMRI
online         17:52:55 svc:/milestone/multi-user-server:default
phys-newhost-1# clnode status
Output shows that both nodes are online

phys-newhost-1# clquorum add -i clusterconf.xml d3
phys-newhost-1# clquorum reset
Troubleshooting
Unsuccessful configuration – If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to rerun this procedure. If that does not correct the problem, perform the procedure How to Unconfigure Oracle Solaris Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Oracle Solaris Cluster software packages. Then rerun this procedure.
Next Steps
Go to How to Verify the Quorum Configuration and Installation Mode.
See Also
After the cluster is fully established, you can duplicate the configuration of the other cluster components from the existing cluster. If you did not already do so, modify the values of the XML elements that you want to duplicate to reflect the cluster configuration you are adding the component to. For example, if you are duplicating resource groups, ensure that the <resourcegroupNodeList> entry contains the valid node names for the new cluster, and not the node names from the cluster that you duplicated unless the node names are the same.
To duplicate a cluster component, run the export subcommand of the object-oriented command for the cluster component that you want to duplicate. For more information about the command syntax and options, see the man page for the cluster object that you want to duplicate. The following table lists the cluster components that you can create from a cluster configuration XML file after the cluster is established and the man page for the command that you use to duplicate the component.
This procedure describes how to set up and use the scinstall(1M) custom JumpStart installation method. This method installs both Oracle Solaris OS and Oracle Solaris Cluster software on all global-cluster nodes and establishes the cluster. You can also use this procedure to add new nodes to an existing cluster.
Before You Begin
Perform the following tasks:
Ensure that the hardware setup is complete and connections are verified before you install Oracle Solaris software. See the Oracle Solaris Cluster hardware documentation and your server and storage device documentation for details on how to set up the hardware.
Determine the Ethernet address of each cluster node.
If you use a naming service, ensure that the following information is added to any naming services that clients use to access cluster services. See Public-Network IP Addresses for planning guidelines. See your Oracle Solaris system-administrator documentation for information about using Oracle Solaris naming services.
Address-to-name mappings for all public hostnames and logical addresses
The IP address and hostname of the JumpStart install server
Ensure that your cluster configuration planning is complete. See How to Prepare for Cluster Software Installation for requirements and guidelines.
On the server from which you will create the flash archive, ensure that all Oracle Solaris OS software, patches, and firmware that is necessary to support Oracle Solaris Cluster software is installed.
If Oracle Solaris software is already installed on the server, you must ensure that the Oracle Solaris installation meets the requirements for Oracle Solaris Cluster software and any other software that you intend to install on the cluster. See How to Install Oracle Solaris Software for more information about installing Oracle Solaris software to meet Oracle Solaris Cluster software requirements.
SPARC: If you are configuring Oracle VM Server for SPARC I/O domains or guest domains as cluster nodes, ensure that Oracle VM Server for SPARC software is installed on each physical machine and that the domains meet Oracle Solaris Cluster requirements. See SPARC: How to Install Oracle VM Server for SPARC Software and Create Domains.
Ensure that Oracle Solaris Cluster software packages and patches are installed on the server from which you will create the flash archive. See How to Install Oracle Solaris Cluster Framework and Data-Service Software Packages.
Determine which mode of the scinstall utility you will use, Typical or Custom. For the Typical installation of Oracle Solaris Cluster software, scinstall automatically specifies the following configuration defaults.
Complete one of the following cluster configuration worksheets, depending on whether you run the scinstall utility in Typical mode or Custom mode. See Planning the Oracle Solaris Cluster Environment for planning guidelines.
Typical Mode Worksheet – If you will use Typical mode and accept all defaults, complete the following worksheet.
Custom Mode Worksheet – If you will use Custom mode and customize the configuration data, complete the following worksheet.
Note - If you are installing a single-node cluster, the scinstall utility automatically uses the default private network address and netmask, even though the cluster does not use a private network.
Follow these guidelines to use the interactive scinstall utility in this procedure:
Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.
Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.
Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.
Ensure that the JumpStart install server meets the following requirements.
The install server is on the same subnet as the cluster nodes, or on the Oracle Solaris boot server for the subnet that the cluster nodes use.
The install server is not itself a cluster node.
The install server installs a release of the Oracle Solaris OS that is supported by the Oracle Solaris Cluster software.
A custom JumpStart directory exists for JumpStart installation of Oracle Solaris Cluster software. This jumpstart-dir directory must meet the following requirements:
Contain a copy of the check utility.
Be NFS exported for reading by the JumpStart install server.
Each new cluster node is configured as a custom JumpStart installation client that uses the custom JumpStart directory that you set up for Oracle Solaris Cluster installation.
Follow the appropriate instructions for your software platform and OS version to set up the JumpStart install server. See Creating a Profile Server for Networked Systems in Oracle Solaris 10 1/13 Installation Guide: JumpStart Installations.
See also the setup_install_server(1M) and add_install_client(1M) man pages.
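As a hedged sketch, registering one cluster node as an install client might resemble the following. The Ethernet address, image path, JumpStart directory, and platform group are placeholders that depend on your environment; see the add_install_client(1M) man page for the supported options.
installserver# cd /export/install/solaris10/Solaris_10/Tools
installserver# ./add_install_client -e 0:14:4f:xx:xx:xx \
-s installserver:/export/install/solaris10 \
-c installserver:/export/jumpstart phys-schost-1 sun4u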
For more information, see How to Add a Node to an Existing Cluster in Oracle Solaris Cluster System Administration Guide.
If Oracle Solaris software is already installed on the server, you must ensure that the Oracle Solaris installation meets the requirements for Oracle Solaris Cluster software and any other software that you intend to install on the cluster. See How to Install Oracle Solaris Software for more information about installing Oracle Solaris software to meet Oracle Solaris Cluster software requirements.
Follow procedures in How to Install Oracle Solaris Software.
Follow the procedures in SPARC: How to Install Oracle VM Server for SPARC Software and Create Domains.
Follow procedures in How to Install Oracle Solaris Cluster Framework and Data-Service Software Packages.
See Patches and Required Firmware Levels in Oracle Solaris Cluster 3.3 3/13 Release Notes for the location of patches and installation instructions.
machine# cacaoadm enable
Perform this step regardless of whether you are using a naming service. See Public-Network IP Addresses for a listing of Oracle Solaris Cluster components whose IP addresses you must add.
The following command removes configuration information from the web console. Some of this configuration information is specific to the installed system. You must remove this information before you create the flash archive. Otherwise, the configuration information that is transferred to the cluster node might prevent the web console from starting or from interacting correctly with the cluster node.
# /usr/share/webconsole/private/bin/wcremove -i console
After you install the unconfigured web console on the cluster node and start the web console for the first time, the web console automatically runs its initial configuration and uses information from the cluster node.
For more information about the wcremove command, see Oracle Java Web Console User Identity in Oracle Solaris Administration: Basic Administration.
Follow procedures in Chapter 3, Creating Flash Archives (Tasks), in Oracle Solaris 10 1/13 Installation Guide: Flash Archives (Creation and Installation).
machine# flarcreate -n name archive
-n name
    Name to give the flash archive.
archive
    File name to give the flash archive, with the full path. By convention, the file name ends in .flar.
See Chapter 4, Managing Network File Systems (Overview), in System Administration Guide: Network Services for more information about automatic file sharing.
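For example, a hypothetical invocation that names the archive sc33-archive and writes it to an NFS-shared directory:
machine# flarcreate -n sc33-archive /export/flash/sc33-archive.flar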
In the media path, replace arch with sparc or x86 and replace ver with 10 for Oracle Solaris 10.
installserver# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/ \
Solaris_ver/Tools/
installserver# ./scinstall
The scinstall Main Menu is displayed.
This option is used to configure custom JumpStart finish scripts. JumpStart uses these finish scripts to install the Oracle Solaris Cluster software.
 *** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Create a new cluster or add a cluster node
      * 2) Configure a cluster to be JumpStarted from this install server
        3) Manage a dual-partition upgrade
        4) Upgrade this cluster node
      * 5) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  2
The scinstall command stores your configuration information and copies the autoscinstall.class default class file to the /jumpstart-dir/autoscinstall.d/3.2/ directory. This file is similar to the following example.
install_type    initial_install
system_type     standalone
partitioning    explicit
filesys         rootdisk.s0 free /
filesys         rootdisk.s1 750  swap
filesys         rootdisk.s3 512  /globaldevices
filesys         rootdisk.s7 20
cluster         SUNWCuser        add
package         SUNWman          add
Modify entries as necessary to match configuration choices that you made when you installed the Oracle Solaris OS on the flash archive machine or when you ran the scinstall utility.
See archive_location Keyword in Oracle Solaris 10 1/13 Installation Guide: JumpStart Installations for information about valid values for retrieval_type and location when used with the archive_location keyword.
cluster         SUNWCuser        add
package         SUNWman          add
The autoscinstall.class file installs the End User Oracle Solaris Software Group (SUNWCuser).
The following table lists Oracle Solaris packages that are required to support some Oracle Solaris Cluster functionality. These packages are not included in the End User Oracle Solaris Software Group. See Oracle Solaris Software Group Considerations for more information.
You can change the default class file in one of the following ways:
Edit the autoscinstall.class file directly. These changes are applied to all nodes in all clusters that use this custom JumpStart directory.
Update the rules file to point to other profiles, then run the check utility to validate the rules file.
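For example, after you edit the rules file, validation is a single command that is run from the JumpStart directory; the directory name here is a placeholder. If check succeeds, it writes a rules.ok file that JumpStart uses at boot time.
installserver# cd /export/jumpstart
installserver# ./check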
As long as the Oracle Solaris OS installation profile meets minimum Oracle Solaris Cluster file-system allocation requirements, Oracle Solaris Cluster software places no restrictions on other changes to the installation profile. See System Disk Partitions for partitioning guidelines and requirements to support Oracle Solaris Cluster software.
For more information about JumpStart profiles, see Chapter 3, Preparing JumpStart Installations (Tasks), in Oracle Solaris 10 1/13 Installation Guide: JumpStart Installations.
Your own finish script runs after the standard finish script that is installed by the scinstall command. See Chapter 3, Preparing JumpStart Installations (Tasks), in Oracle Solaris 10 1/13 Installation Guide: JumpStart Installations for information about creating a JumpStart finish script.
See Step 15.
Create one node directory for each node in the cluster. Or, use this naming convention to create symbolic links to a shared finish script.
Follow the procedures in the documentation for your switches to determine whether NDP is enabled and to disable NDP.
During cluster configuration, the software checks that there is no traffic on the private interconnect. If NDP sends any packets to a private adapter when the private interconnect is being checked for traffic, the software will assume that the interconnect is not private and cluster configuration will be interrupted. NDP must therefore be disabled during cluster creation.
After the cluster is established, you can re-enable NDP on the private-interconnect switches if you want to use that feature.
As superuser, use the following command to start the cconsole utility:
adminconsole# /opt/SUNWcluster/bin/cconsole clustername &
The cconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.
phys-schost# shutdown -g0 -y -i0
ok boot net - install
Note - Surround the dash (-) in the command with a space on each side.
Press any key to reboot.
keystroke
After the initialization sequence completes, the BIOS Setup Utility screen appears.
The list of boot devices is displayed.
The lowest number to the right of the IBA boot choices corresponds to the lower Ethernet port number. The higher number to the right of the IBA boot choices corresponds to the higher Ethernet port number.
The boot sequence begins again. After further processing, the GRUB menu is displayed.
Note - If the Oracle Solaris JumpStart entry is the only entry listed, you can alternatively wait for the selection screen to time out. If you do not respond in 30 seconds, the system automatically continues the boot sequence.
After further processing, the installation type menu is displayed.
Note - If you do not type the number for Custom JumpStart before the 30–second timeout period ends, the system automatically begins the Oracle Solaris interactive installation.
JumpStart installs the Oracle Solaris OS and Oracle Solaris Cluster software on each node. When the installation is successfully completed, each node is fully installed as a new cluster node. Oracle Solaris Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.
Note - If you do not interrupt the BIOS at this point, it automatically returns to the installation type menu. There, if no choice is typed within 30 seconds, the system automatically begins an interactive installation.
After further processing, the BIOS Setup Utility is displayed.
The list of boot devices is displayed.
The boot sequence begins again. No further interaction with the GRUB menu is needed to complete booting into cluster mode.
If services are not yet online for a node, wait until the state changes to online before you proceed to the next step.
phys-schost# svcs multi-user-server node
STATE          STIME    FMRI
online         17:52:55 svc:/milestone/multi-user-server:default
phys-schost# mount | grep global | egrep -v node@ | awk '{print $1}'
phys-schost-new# mkdir -p mountpoint
For example, if a file-system name that is returned by the mount command is /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the node that is being added to the cluster.
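One hedged way to script the full set of mount points: capture the list on an existing node, copy it to the new node, then create each directory there. The list file name is a placeholder.
phys-schost# mount | grep global | egrep -v node@ | awk '{print $1}' > /tmp/global-fs.list

Copy /tmp/global-fs.list to the node that is being added, then run:

phys-schost-new# while read mp; do mkdir -p "$mp"; done < /tmp/global-fs.list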
To disable LOFS, add the following entry to the /etc/system file on each node of the cluster.
exclude:lofs
The change to the /etc/system file becomes effective after the next system reboot.
Note - You cannot have LOFS enabled if you use HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for HA for NFS. If you choose to add HA for NFS on a highly available local file system, you must make one of the following configuration changes.
However, if you configure non-global zones in your cluster, you must enable LOFS on all cluster nodes. If HA for NFS on a highly available local file system must coexist with LOFS, use one of the other solutions instead of disabling LOFS.
Disable LOFS.
Disable the automountd daemon.
Exclude from the automounter map all files that are part of the highly available local file system that is exported by HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.
See The Loopback File System in System Administration Guide: Devices and File Systems for more information about loopback file systems.
This entry becomes effective after the next system reboot.
The setting of this value enables you to reboot the node if you are unable to access a login prompt.
grub edit> kernel /platform/i86pc/multiboot kmdb
The following are some of the tasks that require a reboot:
Adding a new node to an existing cluster
Installing patches that require a node or cluster reboot
Making configuration changes that require a reboot to become active
phys-schost-1# cluster shutdown -y -g0 clustername
Note - Do not reboot the first-installed node of the cluster until after the cluster is shut down. Until cluster installation mode is disabled, only the first-installed node, which established the cluster, has a quorum vote. In an established cluster that is still in installation mode, if the cluster is not shut down before the first-installed node is rebooted, the remaining cluster nodes cannot obtain quorum. The entire cluster then shuts down.
Cluster nodes remain in installation mode until the first time that you run the clsetup command. You run this command during the procedure How to Configure Quorum Devices.
ok boot
When the GRUB menu is displayed, select the appropriate Oracle Solaris entry and press Enter.
For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in Oracle Solaris Administration: Basic Administration.
The scinstall utility installs and configures all cluster nodes and reboots the cluster. The cluster is established when all nodes have successfully booted into the cluster. Oracle Solaris Cluster installation output is logged in a /var/cluster/logs/install/scinstall.log.N file.
phys-schost# smcwebserver start
For more information, see the smcwebserver(1M) man page.
phys-schost# clnode status
Output resembles the following.
=== Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
phys-schost-1                                   Online
phys-schost-2                                   Online
phys-schost-3                                   Online
For more information, see the clnode(1CL) man page.
Note - At initial configuration time, disk-path monitoring is enabled by default for all discovered devices.
phys-schost# clnode set -p reboot_on_path_failure=enabled
-p
    Specifies the property to set
reboot_on_path_failure=enabled
    Enables automatic node reboot if failure of all monitored shared-disk paths occurs.
phys-schost# clnode show

=== Cluster Nodes ===

Node Name:                                      node
…
  reboot_on_path_failure:                          enabled
…
Next Steps
If you added a node to a two-node cluster, go to How to Update Quorum Devices After Adding a Node to a Global Cluster.
Otherwise, go to the next appropriate procedure:
If you installed a multiple-node cluster and chose automatic quorum configuration, postinstallation setup is complete. Go to How to Verify the Quorum Configuration and Installation Mode.
If you installed a multiple-node cluster and declined automatic quorum configuration, perform postinstallation setup. Go to How to Configure Quorum Devices.
If you added a new node to an existing cluster that uses a quorum device, go to How to Update Quorum Devices After Adding a Node to a Global Cluster.
If you added a new node to an existing cluster that does not use a quorum device, verify the state of the cluster. Go to How to Verify the Quorum Configuration and Installation Mode.
If you installed a single-node cluster, cluster establishment is complete. Go to Creating Cluster File Systems to install volume management software and configure the cluster.
Troubleshooting
Disabled scinstall option – If the JumpStart option of the scinstall command does not have an asterisk in front, the option is disabled. This condition indicates that JumpStart setup is not complete or that the setup has an error. To correct this condition, first quit the scinstall utility. Repeat Step 1 through Step 16 to correct JumpStart setup, then restart the scinstall utility.
Perform this procedure on existing global-cluster nodes to prepare the cluster for the addition of new cluster nodes.
Before You Begin
Perform the following tasks:
Ensure that all necessary hardware is installed.
Ensure that the host adapter is installed on the new node. See the Oracle Solaris Cluster 3.3 3/13 Hardware Administration Manual.
Verify that any existing cluster interconnects can support the new node. See the Oracle Solaris Cluster 3.3 3/13 Hardware Administration Manual.
Ensure that any additional storage is installed. See the appropriate Oracle Solaris Cluster storage manual.
phys-schost# clsetup
The Main Menu is displayed.
The clsetup utility displays the message Command completed successfully if the task is completed without error.
phys-schost# clinterconnect show
You must have at least two cables or two adapters configured before you can add a node.
phys-schost# clsetup
Follow the instructions to specify the name of the node to add to the cluster, the name of a transport adapter, and whether to use a transport switch.
phys-schost# clinterconnect show
The command output should show configuration information for at least two cluster interconnects.
phys-schost# cluster show-netprops
The output looks similar to the following:
=== Private Network ===

private_netaddr:                                172.16.0.0
private_netmask:                                255.255.240.0
max_nodes:                                      64
max_privatenets:                                10
max_zoneclusters:                               12
Go to How to Configure Oracle Solaris Cluster Software on Additional Global-Cluster Nodes (scinstall).
Go to How to Change the Private Network Configuration When Adding Nodes or Private Networks. You must shut down the cluster to change the private IP-address range. This involves switching each resource group offline, disabling all resources in the cluster, then rebooting into noncluster mode before you reconfigure the IP address range.
Next Steps
Configure Oracle Solaris Cluster software on the new cluster nodes. Go to How to Configure Oracle Solaris Cluster Software on Additional Global-Cluster Nodes (scinstall) or How to Configure Oracle Solaris Cluster Software on Additional Global-Cluster Nodes (XML).
Perform this task to change the global-cluster's private IP-address range to accommodate an increase in one or more of the following cluster components:
The number of nodes or non-global zones
The number of private networks
The number of zone clusters
You can also use this procedure to decrease the private IP-address range.
Note - This procedure requires you to shut down the entire cluster. If you need to change only the netmask, for example, to add support for zone clusters, do not perform this procedure. Instead, run the following command from a global-cluster node that is running in cluster mode to specify the expected number of zone clusters:
phys-schost# cluster set-netprops num_zoneclusters=N
This command does not require you to shut down the cluster.
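For example, to provision the private-network configuration for as many as 12 zone clusters, an illustrative count:
phys-schost# cluster set-netprops num_zoneclusters=12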
Before You Begin
Ensure that remote shell (rsh(1M)) or secure shell (ssh(1)) access for superuser is enabled for all cluster nodes.
# clsetup
The clsetup Main Menu is displayed.
If the node contains non-global zones, any resource groups in the zones are also switched offline.
The Resource Group Menu is displayed.
# cluster status -t resource,resourcegroup
-t
    Limits output to the specified cluster object
resource
    Specifies resources
resourcegroup
    Specifies resource groups
# cluster shutdown -g0 -y
-g0
    Specifies the wait time in seconds
-y
    Prevents the prompt that asks you to confirm a shutdown from being issued
ok boot -x
For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in Oracle Solaris Administration: Basic Administration.
The screen displays the edited command.
Note - This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps to again add the -x option to the kernel boot parameter command.
When run in noncluster mode, the clsetup utility displays the Main Menu for noncluster-mode operations.
The clsetup utility displays the current private-network configuration, then asks if you would like to change this configuration.
The clsetup utility displays the default private-network IP address, 172.16.0.0, and asks if it is okay to accept this default.
If you accept the default IP address, the clsetup utility will ask if it is okay to accept the default netmask. Skip to the next step to enter your response.
If you decline the default IP address, the clsetup utility will prompt for the new private-network IP address.
The clsetup utility displays the default netmask and then asks if it is okay to accept the default netmask.
The default netmask is 255.255.240.0. This default IP address range supports up to 64 nodes, 12 zone clusters, and 10 private networks in the cluster.
If you accept the default netmask, skip to the next step.
When you decline the default netmask, the clsetup utility prompts you for the number of nodes and private networks, and zone clusters that you expect to configure in the cluster.
From these numbers, the clsetup utility calculates two proposed netmasks:
The first netmask is the minimum netmask to support the number of nodes, private networks, and zone clusters that you specified.
The second netmask supports twice the number of nodes, private networks, and zone clusters that you specified, to accommodate possible future growth.
# shutdown -g0 -y
ok boot
When the GRUB menu is displayed, select the appropriate Oracle Solaris entry and press Enter.
For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in Oracle Solaris Administration: Basic Administration.
# clsetup
The clsetup Main Menu is displayed.
The Resource Group Menu is displayed.
If the node contains non-global zones, also bring online any resource groups that are in those zones.
Type q to back out of each submenu, or press Ctrl-C.
Next Steps
To add a node to an existing cluster, go to one of the following procedures:
How to Configure Oracle Solaris Cluster Software on Additional Global-Cluster Nodes (scinstall)
How to Install Oracle Solaris and Oracle Solaris Cluster Software (JumpStart)
How to Configure Oracle Solaris Cluster Software on Additional Global-Cluster Nodes (XML)
To create a non-global zone on a cluster node, go to Configuring a Non-Global Zone on a Global-Cluster Node.
Perform this procedure to add a new node to an existing global cluster. To use JumpStart to add a new node, instead follow procedures in How to Install Oracle Solaris and Oracle Solaris Cluster Software (JumpStart).
Note - This procedure uses the interactive form of the scinstall command. To use the noninteractive forms of the scinstall command, such as when developing installation scripts, see the scinstall(1M) man page.
Ensure that Oracle Solaris Cluster software packages are installed on the node, either manually or by using the silent-mode form of the installer program, before you run the scinstall command. For information about running the installer program from an installation script, see Chapter 5, Installing in Silent Mode, in Sun Java Enterprise System 5 Update 1 Installation Guide for UNIX.
Before You Begin
Perform the following tasks:
Ensure that the Oracle Solaris OS is installed to support Oracle Solaris Cluster software.
If Oracle Solaris software is already installed on the node, you must ensure that the Oracle Solaris installation meets the requirements for Oracle Solaris Cluster software and any other software that you intend to install on the cluster. See How to Install Oracle Solaris Software for more information about installing Oracle Solaris software to meet Oracle Solaris Cluster software requirements.
SPARC: If you are configuring Oracle VM Server for SPARC I/O domains or guest domains as cluster nodes, ensure that Oracle VM Server for SPARC software is installed on each physical machine and that the domains meet Oracle Solaris Cluster requirements. See SPARC: How to Install Oracle VM Server for SPARC Software and Create Domains.
Ensure that Oracle Solaris Cluster software packages and patches are installed on the node. See How to Install Oracle Solaris Cluster Framework and Data-Service Software Packages.
Ensure that the cluster is prepared for the addition of the new node. See How to Prepare the Cluster for Additional Global-Cluster Nodes.
Determine which mode of the scinstall utility you will use, Typical or Custom. For the Typical installation of Oracle Solaris Cluster software, scinstall automatically specifies the following configuration defaults.
Complete one of the following configuration planning worksheets. See Planning the Oracle Solaris OS and Planning the Oracle Solaris Cluster Environment for planning guidelines.
Typical Mode Worksheet – If you will use Typical mode and accept all defaults, complete the following worksheet.
Custom Mode Worksheet – If you will use Custom mode and customize the configuration data, complete the following worksheet.
Follow these guidelines to use the interactive scinstall utility in this procedure:
Interactive scinstall enables you to type ahead. Therefore, do not press the Return key more than once if the next menu screen does not appear immediately.
Unless otherwise noted, you can press Control-D to return to either the start of a series of related questions or to the Main Menu.
Default answers or answers to previous sessions are displayed in brackets ([ ]) at the end of a question. Press Return to enter the response that is in brackets without typing it.
phys-schost-new# /usr/cluster/bin/scinstall
The scinstall Main Menu is displayed.
 *** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Create a new cluster or add a cluster node
        2) Configure a cluster to be JumpStarted from this install server
        3) Manage a dual-partition upgrade
        4) Upgrade this cluster node
      * 5) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  1
The New Cluster and Cluster Node Menu is displayed.
The scinstall utility configures the node and boots the node into the cluster.
phys-schost# eject cdrom
If services are not yet online for a node, wait until the state changes to online before you proceed to the next step.
phys-schost# svcs multi-user-server node
STATE          STIME    FMRI
online         17:52:55 svc:/milestone/multi-user-server:default
phys-schost# claccess deny-all
Alternately, you can use the clsetup utility. See How to Add a Node to an Existing Cluster in Oracle Solaris Cluster System Administration Guide for procedures.
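If you later prepare to add another machine, you can permit that specific node again; the node name here is hypothetical.
phys-schost# claccess allow -h phys-schost-4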
phys-schost# clnode status
Output resembles the following.
=== Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
phys-schost-1                                   Online
phys-schost-2                                   Online
phys-schost-3                                   Online
For more information, see the clnode(1CL) man page.
phys-schost# showrev -p
Note - At initial configuration time, disk-path monitoring is enabled by default for all discovered devices.
phys-schost# clnode set -p reboot_on_path_failure=enabled
-p
    Specifies the property to set
reboot_on_path_failure=enabled
    Enables automatic node reboot if failure of all monitored shared-disk paths occurs.
phys-schost# clnode show

=== Cluster Nodes ===

Node Name:                                      node
…
  reboot_on_path_failure:                          enabled
…
To disable LOFS, add the following entry to the /etc/system file on each node of the cluster.
exclude:lofs
The change to the /etc/system file becomes effective after the next system reboot.
Note - You cannot have LOFS enabled if you use HA for NFS on a highly available local file system and have automountd running. LOFS can cause switchover problems for HA for NFS. If you choose to add HA for NFS on a highly available local file system, you must make one of the following configuration changes.
However, if you configure non-global zones in your cluster, you must enable LOFS on all cluster nodes. If HA for NFS on a highly available local file system must coexist with LOFS, use one of the other solutions instead of disabling LOFS.
Disable LOFS.
Disable the automountd daemon.
Exclude from the automounter map all files that are part of the highly available local file system that is exported by HA for NFS. This choice enables you to keep both LOFS and the automountd daemon enabled.
See The Loopback File System in System Administration Guide: Devices and File Systems for more information about loopback file systems.
Example 3-3 Configuring Oracle Solaris Cluster Software on an Additional Node
The following example shows the node phys-schost-3 added to the cluster schost. The sponsoring node is phys-schost-1.
*** Adding a Node to an Existing Cluster ***
Fri Feb  4 10:17:53 PST 2005

scinstall -ik -C schost -N phys-schost-1 -A trtype=dlpi,name=bge2 -A trtype=dlpi,name=bge3
-m endpoint=:bge2,endpoint=switch1 -m endpoint=:bge3,endpoint=switch2

Checking device to use for global devices file system ... done

Adding node "phys-schost-3" to the cluster configuration ... done
Adding adapter "bge2" to the cluster configuration ... done
Adding adapter "bge3" to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Adding cable to the cluster configuration ... done

Copying the config from "phys-schost-1" ... done
Copying the postconfig file from "phys-schost-1" if it exists ... done
Copying the Common Agent Container keys from "phys-schost-1" ... done

Setting the node ID for "phys-schost-3" ... done (id=1)

Setting the major number for the "did" driver ...
Obtaining the major number for the "did" driver from "phys-schost-1" ... done
"did" driver major number set to 300

Checking for global devices global file system ... done
Updating vfstab ... done

Verifying that NTP is configured ... done
Initializing NTP configuration ... done

Updating nsswitch.conf ... done

Adding clusternode entries to /etc/inet/hosts ... done

Configuring IP Multipathing groups in "/etc/hostname.<adapter>" files

Updating "/etc/hostname.hme0".

Verifying that power management is NOT configured ... done

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done
The "local-mac-address?" parameter setting has been changed to "true".

Ensure network routing is disabled ... done

Updating file ("ntp.conf.cluster") on node phys-schost-1 ... done
Updating file ("hosts") on node phys-schost-1 ... done

Rebooting ...
Troubleshooting
Unsuccessful configuration – If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to rerun this procedure. If that does not correct the problem, perform the procedure How to Unconfigure Oracle Solaris Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Oracle Solaris Cluster software packages. Then rerun this procedure.
Next Steps
If you added a node to an existing cluster that uses a quorum device, go to How to Update Quorum Devices After Adding a Node to a Global Cluster.
Otherwise, go to How to Verify the Quorum Configuration and Installation Mode.
Perform this procedure to configure a new global-cluster node by using an XML cluster configuration file. The new node can be a duplication of an existing cluster node that runs Oracle Solaris Cluster 3.3 3/13 software.
This procedure configures the following cluster components on the new node:
Cluster node membership
Cluster interconnect
Global devices
Before You Begin
Perform the following tasks:
Ensure that the Oracle Solaris OS is installed to support Oracle Solaris Cluster software.
If Oracle Solaris software is already installed on the node, you must ensure that the Oracle Solaris installation meets the requirements for Oracle Solaris Cluster software and any other software that you intend to install on the cluster. See How to Install Oracle Solaris Software for more information about installing Oracle Solaris software to meet Oracle Solaris Cluster software requirements.
SPARC: If you are configuring Oracle VM Server for SPARC I/O domains or guest domains as cluster nodes, ensure that Oracle VM Server for SPARC software is installed on each physical machine and that the domains meet Oracle Solaris Cluster requirements. See SPARC: How to Install Oracle VM Server for SPARC Software and Create Domains.
Ensure that Oracle Solaris Cluster software packages and any necessary patches are installed on the node. See How to Install Oracle Solaris Cluster Framework and Data-Service Software Packages.
Ensure that the cluster is prepared for the addition of the new node. See How to Prepare the Cluster for Additional Global-Cluster Nodes.
phys-schost-new# /usr/sbin/clinfo -n
If the command fails, Oracle Solaris Cluster software is not yet configured on the node. You can add the potential node to the cluster.
If the command returns a node ID number, Oracle Solaris Cluster software is already configured on the node. Before you can add the node to a different cluster, you must remove the existing cluster configuration information.
ok boot -x
For more information about GRUB based booting, see Booting an x86 Based System by Using GRUB (Task Map) in Oracle Solaris Administration: Basic Administration.
The screen displays the edited command.
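For example, on an x86 based system that boots with GRUB, the edited kernel command with the -x option appended might resemble the following sketch (the exact menu entry and paths vary by installation):

grub edit> kernel /platform/i86pc/multiboot -x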
Note - This change to the kernel boot parameter command does not persist over the system boot. The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps to again add the -x option to the kernel boot parameter command.
phys-schost-new# /usr/cluster/bin/clnode remove
phys-schost# clnode export -o clconfigfile
-o
Specifies the output destination.
clconfigfile
The name of the cluster configuration XML file. The specified file name can be an existing file or a new file that the command will create.
For more information, see the clnode(1CL) man page.
Base the file on the element hierarchy that is shown in the clconfiguration(5CL) man page. You can store the file in any directory.
See the clconfiguration(5CL) man page for details about the structure and content of the cluster configuration XML file.
phys-schost-new# xmllint --valid --noout clconfigfile
phys-schost-new# clnode add -n sponsornode -i clconfigfile
-n sponsornode
Specifies the name of an existing cluster member to act as the sponsor for the new node.
-i clconfigfile
Specifies the name of the cluster configuration XML file to use as the input source.
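For example, the following invocation, with a hypothetical file path, uses phys-schost-1 as the sponsor node and the configuration file that was exported earlier in this procedure:

phys-schost-new# clnode add -n phys-schost-1 -i /var/tmp/clconfigfile.xml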
Note - At initial configuration time, disk-path monitoring is enabled by default for all discovered devices.
phys-schost# clnode set -p reboot_on_path_failure=enabled
-p
Specifies the property to set.
reboot_on_path_failure=enabled
Enables automatic node reboot if failure of all monitored shared-disk paths occurs.
phys-schost# clnode show

=== Cluster Nodes ===

Node Name:                                      node
…
  reboot_on_path_failure:                          enabled
…
Troubleshooting
Unsuccessful configuration – If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to rerun this procedure. If that does not correct the problem, perform the procedure How to Unconfigure Oracle Solaris Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Oracle Solaris Cluster software packages. Then rerun this procedure.
Next Steps
If you added a node to a cluster that uses a quorum device, go to How to Update Quorum Devices After Adding a Node to a Global Cluster.
Otherwise, go to How to Verify the Quorum Configuration and Installation Mode.
If you added a node to a global cluster, you must update the configuration information of the quorum devices, regardless of whether you use shared disks, NAS devices, a quorum server, or a combination. To do this, you remove all quorum devices and update the global-devices namespace. You can optionally reconfigure any quorum devices that you still want to use. This registers the new node with each quorum device, which can then recalculate its vote count based on the new number of nodes in the cluster.
Any newly configured SCSI quorum devices will be set to SCSI-3 reservations.
Before You Begin
Ensure that you have completed installation of Oracle Solaris Cluster software on the added node.
phys-schost# cluster status -t node
Command output lists each quorum device and each node. The following example output shows the current SCSI quorum device, d3.
phys-schost# clquorum list
d3
…
Perform this step for each quorum device that is configured.
phys-schost# clquorum remove devicename
devicename
Specifies the name of the quorum device.
If the removal of the quorum devices was successful, no quorum devices are listed.
phys-schost# clquorum status
phys-schost# cldevice populate
Note - This step is necessary to prevent possible node panic.
The cldevice populate command executes remotely on all nodes, even though the command is issued from just one node. To determine whether the cldevice populate command has completed processing, run the following command on each node of the cluster.
phys-schost# ps -ef | grep scgdevs
You can configure the same device that was originally configured as the quorum device or choose a new shared device to configure.
If you want to choose a new shared device, display all devices that the system checks. Otherwise, skip to Step c.
phys-schost# cldevice list -v
Output resembles the following:
DID Device          Full Device Path
----------          ----------------
d1                  phys-schost-1:/dev/rdsk/c0t0d0
d2                  phys-schost-1:/dev/rdsk/c0t6d0
d3                  phys-schost-2:/dev/rdsk/c1t1d0
d3                  phys-schost-1:/dev/rdsk/c1t1d0
…
phys-schost# clquorum add -t type devicename
-t type
Specifies the type of quorum device. If this option is not specified, the default type shared_disk is used.
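For example, the following command configures DID device d3, shown in the earlier cldevice output, as a shared-disk quorum device. Because shared_disk is the default type, the -t option could also be omitted, as in Example 3-4:

phys-schost# clquorum add -t shared_disk d3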
phys-schost# clquorum list
Output should list each quorum device and each node.
Example 3-4 Updating SCSI Quorum Devices After Adding a Node to a Two-Node Cluster
The following example identifies the original SCSI quorum device d2, removes that quorum device, lists the available shared devices, updates the global-device namespace, configures d3 as a new SCSI quorum device, and verifies the new device.
phys-schost# clquorum list
d2
phys-schost-1
phys-schost-2

phys-schost# clquorum remove d2

phys-schost# clquorum status
…
--- Quorum Votes by Device ---
Device Name       Present      Possible      Status
-----------       -------      --------      ------

phys-schost# cldevice list -v
DID Device          Full Device Path
----------          ----------------
…
d3                  phys-schost-2:/dev/rdsk/c1t1d0
d3                  phys-schost-1:/dev/rdsk/c1t1d0
…

phys-schost# cldevice populate
phys-schost# ps -ef | grep scgdevs

phys-schost# clquorum add d3

phys-schost# clquorum list
d3
phys-schost-1
phys-schost-2
Next Steps
Go to How to Verify the Quorum Configuration and Installation Mode.
Note - You do not need to configure quorum devices in the following circumstances:
You chose automatic quorum configuration during Oracle Solaris Cluster software configuration.
You installed a single-node global cluster.
You added a node to an existing global cluster and already have sufficient quorum votes assigned.
Instead, proceed to How to Verify the Quorum Configuration and Installation Mode.
Perform this procedure one time only, after the new cluster is fully formed. Use this procedure to assign quorum votes and then to remove the cluster from installation mode.
Before You Begin
Perform the following preparations to configure a quorum server or a NAS device as a quorum device.
Quorum servers – To configure a quorum server as a quorum device, do the following:
Install the Quorum Server software on the quorum server host machine and start the quorum server. For information about installing and starting the quorum server, see How to Install and Configure Quorum Server Software.
Ensure that network switches that are directly connected to cluster nodes meet one of the following criteria:
The switch supports Rapid Spanning Tree Protocol (RSTP).
Fast port mode is enabled on the switch.
One of these features is required to ensure immediate communication between cluster nodes and the quorum server. If the switch significantly delays this communication, the cluster interprets the delay as loss of the quorum device.
Have available the following information:
A name to assign to the configured quorum device
The IP address of the quorum server host machine
The port number of the quorum server
NAS devices – To configure a network-attached storage (NAS) device as a quorum device, install the NAS device hardware and software. See Oracle Solaris Cluster 3.3 3/13 With Network-Attached Storage Device Manual and your device documentation for requirements and installation procedures for NAS hardware and software.
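For reference, a quorum server that meets these requirements can also be registered from the command line instead of through the clsetup utility. The following sketch uses a hypothetical device name, host address, and port, and the qshost and port property names are assumptions here; see the clquorum(1CL) man page for the exact syntax:

phys-schost# clquorum add -t quorum_server -p qshost=10.11.114.81 -p port=9000 qs1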
If both of the following conditions apply, modify the /etc/inet/netmasks and /etc/hostname.adapter files on each cluster node as described in the next step:
You intend to use a quorum server.
The public network uses variable-length subnet masking, also called classless inter-domain routing (CIDR).
If you use a quorum server but the public network uses classful subnets, as defined in RFC 791, you do not need to perform this step.
Add to the /etc/inet/netmasks file an entry for each public subnet that the cluster uses. The following is an example entry that contains a public-network IP address and netmask:
10.11.30.0    255.255.255.0
Then add netmask + broadcast + after the hostname entry in each /etc/hostname.adapter file:
nodename netmask + broadcast +
phys-schost# cluster status -t node
You do not need to be logged in as superuser to run this command.
phys-schost-1# cldevice list -v
Output resembles the following:
DID Device          Full Device Path
----------          ----------------
d1                  phys-schost-1:/dev/rdsk/c0t0d0
d2                  phys-schost-1:/dev/rdsk/c0t6d0
d3                  phys-schost-2:/dev/rdsk/c1t1d0
d3                  phys-schost-1:/dev/rdsk/c1t1d0
…
Note - Any shared disk that you choose must be qualified for use as a quorum device. See Quorum Devices for further information about choosing quorum devices.
Use the cldevice output from Step a to identify the device-ID name of each shared disk that you are configuring as a quorum device. For example, the output in Step a shows that global device d3 is shared by phys-schost-1 and phys-schost-2.
phys-schost# cldevice show device

=== DID Device Instances ===

DID Device Name:                                /dev/did/rdsk/dN
…
  default_fencing:                                 nofencing
…
Alternatively, you can simply disable fencing for the individual disk, which overrides for that disk whatever value the global_fencing property is set to. Skip to Step c to disable fencing for the individual disk.
phys-schost# cluster show -t global

=== Cluster ===

Cluster name:                                   cluster
…
  global_fencing:                                  nofencing
…
Note - If an individual disk has its default_fencing property set to global, the fencing for that individual disk is disabled only while the cluster-wide global_fencing property is set to nofencing or nofencing-noscrub. If the global_fencing property is changed to a value that enables fencing, then fencing becomes enabled for all disks whose default_fencing property is set to global.
phys-schost# cldevice set \
-p default_fencing=nofencing-noscrub device
phys-schost# cldevice show device
phys-schost# clsetup
The Initial Cluster Setup screen is displayed.
Note - If the Main Menu is displayed instead, initial cluster setup was already successfully performed. Skip to Step 11.
For a quorum server, also specify the following information:
The IP address of the quorum server host
The port number that is used by the quorum server to communicate with the cluster nodes
After the clsetup utility sets the quorum configurations and vote counts for the cluster, the message Cluster initialization is complete is displayed. The utility returns you to the Main Menu.
Next Steps
Verify the quorum configuration and that installation mode is disabled. Go to How to Verify the Quorum Configuration and Installation Mode.
Troubleshooting
Interrupted clsetup processing – If the quorum setup process is interrupted or fails to be completed successfully, rerun clsetup.
Changes to quorum vote count – If you later increase or decrease the number of node attachments to a quorum device, the quorum vote count is not automatically recalculated. You can reestablish the correct quorum vote by removing each quorum device and then adding it back into the configuration, one quorum device at a time. For a two-node cluster, temporarily add a new quorum device before you remove and add back the original quorum device. Then remove the temporary quorum device (see the command sketch at the end of this Troubleshooting section). See the procedure “How to Modify a Quorum Device Node List” in Chapter 6, Administering Quorum, in Oracle Solaris Cluster System Administration Guide.
Unreachable quorum device – If you see messages on the cluster nodes that a quorum device is unreachable, or if you see failures of cluster nodes with the message CMM: Unable to acquire the quorum device, there might be a problem with the quorum device or the path to it. Check that both the quorum device and the path to it are functional.
If the problem persists, use a different quorum device. Or, if you want to use the same quorum device, increase the quorum timeout to a high value, as follows:
Note - For Oracle Real Application Clusters (Oracle RAC), do not change the default quorum timeout of 25 seconds. In certain split-brain scenarios, a longer timeout period might cause Oracle RAC VIP failover to fail because the VIP resource times out. If the quorum device in use does not conform to the default 25-second timeout, use a different quorum device.
1. Become superuser.
2. On each cluster node, edit the /etc/system file as superuser to set the timeout to a high value.
The following example sets the timeout to 700 seconds.
phys-schost# vi /etc/system
…
set cl_haci:qd_acquisition_timer=700
3. From one node, shut down the cluster.
phys-schost-1# cluster shutdown -g0 -y
4. Boot each node back into the cluster.
Changes to the /etc/system file are initialized after the reboot.
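To illustrate the Changes to quorum vote count item above, the following sketch reestablishes the vote count on a two-node cluster whose original quorum device is d3, using a hypothetical temporary quorum device d4:

phys-schost# clquorum add d4
phys-schost# clquorum remove d3
phys-schost# clquorum add d3
phys-schost# clquorum remove d4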
Perform this procedure to verify that quorum configuration was completed successfully, if quorum was configured, and that cluster installation mode is disabled.
You do not need to be superuser to run these commands.
phys-schost% clquorum list
Output lists each quorum device and each node.
phys-schost% cluster show -t global | grep installmode
  installmode:                                    disabled
Cluster installation and creation is complete.
Next Steps
Determine from the following list the next task to perform that applies to your cluster configuration. If you need to perform more than one task from this list, go to the first of those tasks in this list.
If you want to change any private hostnames, go to How to Change Private Hostnames.
If you did not install your own /etc/inet/ntp.conf file before you installed Oracle Solaris Cluster software, install or create the NTP configuration file. Go to How to Configure Network Time Protocol (NTP).
If you want to configure IPsec on the private interconnect, go to How to Configure IP Security Architecture (IPsec) on the Cluster Private Interconnect.
To configure Solaris Volume Manager software, go to Chapter 4, Configuring Solaris Volume Manager Software.
To create cluster file systems, go to How to Create Cluster File Systems.
To create non-global zones on a node, go to How to Create a Non-Global Zone on a Global-Cluster Node.
Install third-party applications, register resource types, set up resource groups, and configure data services. See the documentation that is supplied with the application software and the Oracle Solaris Cluster Data Services Planning and Administration Guide.
Before you put the cluster into production, make a baseline recording of the cluster configuration for future diagnostic purposes. Go to How to Record Diagnostic Data of the Cluster Configuration.
See Also
Make a backup of your cluster configuration.
An archived backup of your cluster configuration facilitates easier recovery of your cluster configuration. For more information, see How to Back Up the Cluster Configuration in Oracle Solaris Cluster System Administration Guide.
Perform this task if you do not want to use the default private hostnames, clusternodenodeid-priv (where nodeid is the node's ID number), that are assigned during Oracle Solaris Cluster software installation.
Note - Do not perform this procedure after applications and data services have been configured and have been started. Otherwise, an application or data service might continue to use the old private hostname after the hostname is renamed, which would cause hostname conflicts. If any applications or data services are running, stop them before you perform this procedure.
Perform this procedure on one active node of the cluster.
phys-schost# clsetup
The clsetup Main Menu is displayed.
The Private Hostname Menu is displayed.
Repeat for each private hostname to change.
phys-schost# clnode show -t node | grep privatehostname
  privatehostname:                                clusternode1-priv
  privatehostname:                                clusternode2-priv
  privatehostname:                                clusternode3-priv
Next Steps
Determine from the following list the next task to perform that applies to your cluster configuration. If you need to perform more than one task from this list, go to the first of those tasks in this list.
If you did not install your own /etc/inet/ntp.conf file before you installed Oracle Solaris Cluster software, install or create the NTP configuration file. Go to How to Configure Network Time Protocol (NTP).
If you want to configure IPsec on the private interconnect, go to How to Configure IP Security Architecture (IPsec) on the Cluster Private Interconnect.
To configure Solaris Volume Manager software, go to Chapter 4, Configuring Solaris Volume Manager Software.
To create cluster file systems, go to How to Create Cluster File Systems.
To create non-global zones on a node, go to How to Create a Non-Global Zone on a Global-Cluster Node.
Install third-party applications, register resource types, set up resource groups, and configure data services. See the documentation that is supplied with the application software and the Oracle Solaris Cluster Data Services Planning and Administration Guide.
Before you put the cluster into production, make a baseline recording of the cluster configuration for future diagnostic purposes. Go to How to Record Diagnostic Data of the Cluster Configuration.
You can enable the automatic distribution of resource group load across nodes or zones by setting load limits. You assign load factors to resource groups, and the load factors correspond to the defined load limits of the nodes.
The default behavior is to distribute resource group load evenly across all the available nodes. Each resource group is started on a node from its node list. The Resource Group Manager (RGM) chooses a node that best satisfies the configured load distribution policy. As the RGM assigns resource groups to nodes, the load factors of the resource groups on each node are summed to provide a total load. That total load is then compared against the node's load limits.
You can configure load limits in a global cluster or a zone cluster.
The factors you set to control load distribution on each node include load limits, resource group priority, and preemption mode. In the global cluster, you can set the Concentrate_load property to choose the preferred load distribution policy: to concentrate resource group load onto as few nodes as possible without exceeding load limits or to spread the load out as evenly as possible across all available nodes. The default behavior is to spread out the resource group load. Each resource group is still limited to running only on nodes in its node list, regardless of load factor and load limit settings.
Note - You can use the command line, the Oracle Solaris Cluster Manager interface, or the clsetup utility to configure load distribution for resource groups. The following procedure illustrates how to configure load distribution for resource groups using the clsetup utility. For instructions on using the command line to perform these procedures, see Configuring Load Limits in Oracle Solaris Cluster System Administration Guide.
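For orientation, the following command-line sketch sets the same resource group properties that the clsetup procedures in this section configure interactively. The resource group name rg1 is hypothetical and the property values are examples only; load limits themselves are set on each node with the clnode set command, whose syntax is described in the guide cited above:

phys-schost# clresourcegroup set -p priority=600 rg1
phys-schost# clresourcegroup set -p load_factors=mem_load@50 rg1
phys-schost# clresourcegroup set -p preemption_mode=No_cost rg1
phys-schost# cluster set -p concentrate_load=true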
This section contains procedures for configuring load limits on a node, setting the priority and load factors of a resource group, setting the preemption mode of a resource group, and concentrating load onto fewer nodes in the cluster.
Each cluster node or zone can have its own set of load limits. You assign load factors to resource groups, and the load factors correspond to the defined load limits of the nodes. You can set soft load limits (which can be exceeded) or hard load limits (which cannot be exceeded).
phys-schost# clsetup
The clsetup menu is displayed.
The Other Cluster Tasks Menu is displayed.
The Manage Resource Group Load Distribution Menu is displayed.
The Manage load limits Menu is displayed.
You can create a load limit, modify a load limit, or delete a load limit.
If you want to set a load limit on a second node, select the option number that corresponds to the second node and press the Return key. After you have selected all the nodes where you want to configure load limits, type q and press the Return key.
For example, type mem_load as the name of a load limit.
If you typed yes, type the soft limit value and press the Return key.
If you typed yes, type the hard limit value and press the Return key.
The message Command completed successfully is displayed, along with the soft and hard load limits for the nodes you selected. Press the Return key to continue.
Return to the previous menu by typing q and pressing the Return key.
You can configure a resource group to have a higher priority so that it is less likely to be displaced from a specific node. If load limits are exceeded, lower-priority resource groups might be forced offline.
phys-schost# clsetup
The clsetup menu is displayed.
The Other Cluster Tasks Menu is displayed.
The Manage Resource Group Load Distribution Menu is displayed.
The Set the Priority of a Resource Group Menu is displayed.
The existing Priority value is displayed. The default Priority value is 500.
The Manage Resource Group Load Distribution Menu is displayed.
A load factor is a value that expresses how much load a resource group places against a given load limit. Load factors are assigned to a resource group, and those load factors correspond to the defined load limits of the nodes.
phys-schost# clsetup
The clsetup menu is displayed.
The Other Cluster Tasks Menu is displayed.
The Manage Resource Group Load Distribution Menu is displayed.
The Set the load factors of a Resource Group Menu is displayed.
For example, you can set a load factor called mem_load on the resource group you selected by typing mem_load@50. Press Ctrl-D when you are done.
The Manage Resource Group Load Distribution Menu is displayed.
The preemption_mode property determines whether a resource group will be preempted from a node by a higher-priority resource group because of node overload. The property indicates the cost of moving a resource group from one node to another.
phys-schost# clsetup
The clsetup menu is displayed.
The Other Cluster Tasks Menu is displayed.
The Manage Resource Group Load Distribution Menu is displayed.
The Set the Preemption Mode of a Resource Group Menu is displayed.
If the resource group has a preemption mode set, it is displayed, similar to the following:
The preemption mode property of "rg11" is currently set to the following:
preemption mode:   Has_cost
The three choices are Has_cost, No_cost, and Never.
The Manage Resource Group Load Distribution Menu is displayed.
Setting the Concentrate_load property to FALSE causes the cluster to spread resource group loads evenly across all available nodes. If you set this property to TRUE, the cluster attempts to concentrate resource group load on the fewest possible nodes without exceeding load limits. By default, the Concentrate_load property is set to FALSE. You can set the Concentrate_load property only in a global cluster; you cannot set this property in a zone cluster. In a zone cluster, the setting is always FALSE.
phys-schost# clsetup
The clsetup menu is displayed.
The Other Cluster Tasks Menu is displayed.
The Set the Concentrate Load Property of the Cluster Menu is displayed.
The current value of TRUE or FALSE is displayed.
The Other Cluster Tasks Menu is displayed.
Note - If you installed your own /etc/inet/ntp.conf file before you installed Oracle Solaris Cluster software, you do not need to perform this procedure. Instead, determine your next task from the Next Steps list at the end of this procedure.
Perform this task to create or modify the NTP configuration file after you perform any of the following tasks:
Install Oracle Solaris Cluster software
Add a node to an existing global cluster
Change the private hostname of a node in the global cluster
If you added a node to a single-node cluster, you must ensure that the NTP configuration file that you use is copied to the original cluster node as well as to the new node.
Note - Do not rename the ntp.conf.cluster file as ntp.conf.
If the /etc/inet/ntp.conf.cluster file does not exist on the node, you might have an /etc/inet/ntp.conf file from an earlier installation of Oracle Solaris Cluster software. Oracle Solaris Cluster software creates the /etc/inet/ntp.conf.cluster file as the NTP configuration file only if an /etc/inet/ntp.conf file is not already present on the node. In that case, perform the following edits on that ntp.conf file instead.
If you changed any node's private hostname, ensure that the NTP configuration file contains the new private hostname.
The contents of the NTP configuration file must be identical on all cluster nodes.
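For example, one way to propagate the file from the node where you edited it is shown in the following sketch, which assumes that ssh is enabled between the cluster nodes; the node names are examples:

phys-schost-1# scp /etc/inet/ntp.conf.cluster phys-schost-2:/etc/inet/ntp.conf.cluster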
Wait for the command to complete successfully on each node before you proceed to Step 5.
phys-schost# svcadm disable ntp
phys-schost# /etc/init.d/xntpd.cluster start
The xntpd.cluster startup script first looks for the /etc/inet/ntp.conf file.
If the ntp.conf file exists, the script exits immediately without starting the NTP daemon.
If the ntp.conf file does not exist but the ntp.conf.cluster file does exist, the script starts the NTP daemon. In this case, the script uses the ntp.conf.cluster file as the NTP configuration file.
phys-schost# svcadm enable ntp
Next Steps
Determine from the following list the next task to perform that applies to your cluster configuration. If you need to perform more than one task from this list, go to the first of those tasks in this list.
If you want to configure IPsec on the private interconnect, go to How to Configure IP Security Architecture (IPsec) on the Cluster Private Interconnect.
To configure Solaris Volume Manager software, go to Chapter 4, Configuring Solaris Volume Manager Software.
To create cluster file systems, go to How to Create Cluster File Systems.
To create non-global zones on a node, go to How to Create a Non-Global Zone on a Global-Cluster Node.
Install third-party applications, register resource types, set up resource groups, and configure data services. See the documentation that is supplied with the application software and the Oracle Solaris Cluster Data Services Planning and Administration Guide.
Before you put the cluster into production, make a baseline recording of the cluster configuration for future diagnostic purposes. Go to How to Record Diagnostic Data of the Cluster Configuration.
You can configure IP Security Architecture (IPsec) for the clprivnet interface to provide secure TCP/IP communication on the cluster interconnect.
For information about IPsec, see Part IV, IP Security, in Oracle Solaris Administration: IP Services and the ipsecconf(1M) man page. For information about the clprivnet interface, see the clprivnet(7) man page.
Perform this procedure on each global-cluster voting node that you want to configure to use IPsec.
phys-schost# ifconfig clprivnet0
Follow the instructions in How to Secure Traffic Between Two Systems With IPsec in Oracle Solaris Administration: IP Services. In addition, observe the following guidelines:
Ensure that the values of the configuration parameters for the clprivnet IP addresses are consistent on all the partner nodes.
Configure each policy as a separate line in the configuration file.
To implement IPsec without rebooting, follow the instructions in the procedure's example, Securing Traffic With IPsec Without Rebooting.
For more information about the sa unique policy, see the ipsecconf(1M) man page.
Include the clprivnet IP address of the local node.
This feature helps the driver to make optimal use of the bandwidth of the cluster private network, which provides a high granularity of distribution and better throughput. The clprivnet interface uses the Security Parameter Index (SPI) of the packet to stripe the traffic.
Add this entry to the policy rules that are configured for cluster transports. This setting provides the time for security associations to be regenerated when a cluster node reboots, and limits how quickly a rebooted node can rejoin the cluster. A value of 30 seconds should be adequate.
phys-schost# vi /etc/inet/ike/config
…
{
label "clust-priv-interconnect1-clust-priv-interconnect2"
…
p2_idletime_secs 30
}
…
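For illustration only, a per-line policy entry in the /etc/inet/ipsecinit.conf file for the clprivnet addresses might resemble the following sketch. The addresses are hypothetical and the algorithm choices are examples, not recommendations; note the sa unique parameter discussed above:

{laddr 172.16.4.1 raddr 172.16.4.2} ipsec {encr_algs aes encr_auth_algs sha1 sa unique}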
Next Steps
Determine from the following list the next task to perform that applies to your cluster configuration. If you need to perform more than one task from this list, go to the first of those tasks in this list.
To configure Solaris Volume Manager software, go to Chapter 4, Configuring Solaris Volume Manager Software.
To create cluster file systems, go to How to Create Cluster File Systems.
To create non-global zones on a node, go to How to Create a Non-Global Zone on a Global-Cluster Node.
Install third-party applications, register resource types, set up resource groups, and configure data services. See the documentation that is supplied with the application software and the Oracle Solaris Cluster Data Services Planning and Administration Guide.
Otherwise, if you have completed all hardware and software installation and configuration tasks, validate the cluster. Go to How to Validate the Cluster.
After you complete all configuration of the cluster, use the cluster check command to validate the cluster configuration and functionality. For more information, see the cluster(1CL) man page.
Tip - For ease of future reference or troubleshooting, for each validation that you run, use the -o outputdir option to specify a subdirectory for log files. Reuse of an existing subdirectory name will remove all existing files in the subdirectory. Therefore, to ensure that log files are available for future reference, specify a unique subdirectory name for each cluster check that you run.
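For example, following the date-stamped naming used in Example 3-6, you might log each run of the basic checks to its own subdirectory; the directory path here is hypothetical:

# cluster check -v -o /var/cluster/logs/checks/basic.12Jan2011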
Before You Begin
Ensure that you have completed the installation and configuration of all hardware and software components in the cluster, including firmware and patches.
Go to the Patches & Updates tab of My Oracle Support. Using the Advanced Search, select “Solaris Cluster” as the Product and specify “check” in the Description field to locate Oracle Solaris Cluster patches that contain checks. Apply any patches that are not already installed on your cluster.
# cluster check -v -o outputdir
-v
Verbose mode.
-o outputdir
Redirects output to the outputdir subdirectory.
The command runs all available basic checks. No cluster functionality is affected.
# cluster check -v -k interactive -o outputdir
-k interactive
Specifies running interactive validation checks.
The command runs all available interactive checks and prompts you for needed information about the cluster. No cluster functionality is affected.
# cluster list-checks -k functional
For example, a functional check might trigger a node panic or a failover to another node.
# cluster list-checks -v -C checkID
-C checkID
Specifies a specific check.
# cluster check -v -k functional -C checkid -o outputdir
-k functional
Specifies running functional validation checks.
Respond to prompts from the check to confirm that the check should run, and for any information or actions you must perform.
Note - For record-keeping purposes, specify a unique outputdir subdirectory name for each check you run. If you reuse an outputdir name, output for the new check overwrites the existing contents of the reused outputdir subdirectory.
Example 3-5 Listing Interactive Validation Checks
The following example lists all interactive checks that are available to run on the cluster. Example output shows a sampling of possible checks; actual available checks vary for each configuration.
# cluster list-checks -k interactive
 Some checks might take a few moments to run (use -v to see progress)...

 I6994574  :   (Moderate)   Fix for GLDv3 interfaces on cluster transport vulnerability applied?
Example 3-6 Running a Functional Validation Check
The following example first shows the verbose listing of functional checks. The verbose description is then listed for the check F6968101, which indicates that the check would disrupt cluster services. The cluster is taken out of production, and the functional check is then run with verbose output logged to the funct.test.F6968101.12Jan2011 subdirectory. Example output shows a sampling of possible checks; actual available checks vary for each configuration.
# cluster list-checks -k functional
 F6968101  :   (Critical)   Perform resource group switchover
 F6984120  :   (Critical)   Induce cluster transport network failure - single adapter.
 F6984121  :   (Critical)   Perform cluster shutdown
 F6984140  :   (Critical)   Induce node panic
…

# cluster list-checks -v -C F6968101
 F6968101: (Critical) Perform resource group switchover
Keywords: SolarisCluster3.x, functional
Applicability: Applicable if multi-node cluster running live.
Check Logic: Select a resource group and destination node. Perform '/usr/cluster/bin/clresourcegroup switch' on specified resource group either to specified node or to all nodes in succession.
Version: 1.2
Revision Date: 12/10/10

Take the cluster out of production

# cluster check -k functional -C F6968101 -o funct.test.F6968101.12Jan2011
F6968101
  initializing...
  initializing xml output...
  loading auxiliary data...
  starting check run...
     pschost1, pschost2, pschost3, pschost4:     F6968101.... starting:
Perform resource group switchover

  ============================================================

   >>> Functional Check <<<

    'Functional' checks exercise cluster behavior. It is recommended that you
    do not run this check on a cluster in production mode. It is recommended
    that you have access to the system console for each cluster node and
    observe any output on the consoles while the check is executed.

    If the node running this check is brought down during execution the check
    must be rerun from this same node after it is rebooted into the cluster in
    order for the check to be completed.

    Select 'continue' for more details on this check.

          1) continue
          2) exit

          choice: 1

  ============================================================

   >>> Check Description <<<
…
Follow onscreen directions
Next Steps
Before you put the cluster into production, make a baseline recording of the cluster configuration for future diagnostic purposes. Go to How to Record Diagnostic Data of the Cluster Configuration.
After you finish configuring the global cluster but before you put it into production, use the Oracle Explorer utility to record baseline information about the cluster. This data can be used if there is a future need to troubleshoot the cluster.
The Services Tools Bundle contains the Oracle Explorer packages SUNWexplo and SUNWexplu. See http://www.oracle.com/us/support/systems/premier/services-tools-bundle-sun-systems-163717.html for software download and installation information.
Use the appropriate command for your platform. For example, to collect information on a Sun Fire T1000 server from Oracle, run the following command:
# explorer -i -w default,Tx000
For more information, see the explorer(1M) man page in the /opt/SUNWexplo/man/man1m/ directory and the Oracle Explorer Data Collector User's Guide, which is available through Note 1153444.1 on My Oracle Support.
The explorer output file is saved in the /opt/SUNWexplo/output/ directory as explorer.hostid.hostname-date.tar.gz.
Follow the procedures in Oracle Explorer Data Collector User's Guide to use FTP or HTTPS to submit Oracle Explorer files.
The Oracle Explorer database makes your explorer output available to Oracle technical support if the data is needed to help diagnose a technical problem with your cluster.