Perform this procedure to configure a new global cluster by using an XML cluster configuration file. The new cluster can be a duplicate of an existing cluster that runs the Oracle Solaris Cluster software.
This procedure configures the following cluster components:
Cluster name
Cluster node membership
Cluster interconnect
Before You Begin
Perform the following tasks:
Ensure that the Oracle Solaris OS is installed to support the Oracle Solaris Cluster software.
If the Oracle Solaris software is already installed on the node, you must ensure that the Oracle Solaris installation meets the requirements for the Oracle Solaris Cluster software and any other software that you intend to install on the cluster. See How to Install Oracle Solaris Software for more information about installing the Oracle Solaris software to meet Oracle Solaris Cluster software requirements.
Ensure that NWAM is disabled. See How to Install Oracle Solaris Cluster Software Packages for instructions.
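Before you proceed, you can confirm which network configuration profile (NCP) is active. The following transcript is a sketch that assumes the Oracle Solaris 11 netadm utility; enabling the DefaultFixed profile disables NWAM-style automatic network configuration:

```shell
# Check which network configuration profile (NCP) is active.
# NWAM is in effect when the Automatic NCP is enabled.
phys-schost# netadm list

# If the Automatic profile is active, switch to manual configuration,
# which disables NWAM (assumes the DefaultFixed NCP exists, as on
# standard Oracle Solaris 11 systems).
phys-schost# netadm enable -p ncp DefaultFixed
```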
SPARC: If you are configuring Oracle VM Server for SPARC logical domains as cluster nodes, ensure that the Oracle VM Server for SPARC software is installed on each physical machine and that the domains meet Oracle Solaris Cluster requirements. See How to Install Oracle VM Server for SPARC Software and Create Domains.
Ensure that any adapters that you want to use as tagged VLAN adapters are configured and that you have their VLAN IDs.
Ensure that Oracle Solaris Cluster 4.3 software and updates are installed on each node that you will configure. See How to Install Oracle Solaris Cluster Software Packages.
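As a quick check, you can query the IPS package system on each node to confirm that the cluster software is installed. The group package name below is an assumption; your installation might use a different group package:

```shell
# List the installed Oracle Solaris Cluster group package.  The package
# name ha-cluster-full is an assumption; adjust it to the group package
# that your installation uses.
phys-schost# pkg list ha-cluster-full
```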
phys-schost# /usr/sbin/clinfo -n
clinfo: node is not configured as part of a cluster: Operation not applicable
This message indicates that the Oracle Solaris Cluster software is not yet configured on the potential node.
The return of a node ID indicates that the Oracle Solaris Cluster software is already configured on the node.
If the cluster is running an older version of Oracle Solaris Cluster software and you want to install Oracle Solaris Cluster 4.3 software, instead perform upgrade procedures in Oracle Solaris Cluster 4.3 Upgrade Guide.
If the Oracle Solaris Cluster software is not yet configured on any of the potential cluster nodes, proceed to Step 2.
The Oracle Solaris TCP wrappers for RPC feature prevents internode communication that is necessary for cluster configuration.
TCP wrappers are enabled if config/enable_tcpwrappers is set to true, as shown in the following example command output.
# svccfg -s rpc/bind listprop config/enable_tcpwrappers
config/enable_tcpwrappers  boolean  true
# svccfg -s rpc/bind setprop config/enable_tcpwrappers = false
# svcadm refresh rpc/bind
# svcadm restart rpc/bind
Follow the procedures in the documentation for your switches to determine whether NDP is enabled and to disable NDP.
During cluster configuration, the software checks that there is no traffic on the private interconnect. If NDP sends any packets to a private adapter while the private interconnect is being checked for traffic, the software assumes that the interconnect is not private, and cluster configuration is interrupted. NDP must therefore be disabled during cluster creation.
After the cluster is established, you can re-enable NDP on the private-interconnect switches if you want to use that feature.
phys-schost# cluster export -o clconfigfile
-o
Specifies the output destination.
clconfigfile
The name of the cluster configuration XML file. The specified file name can be an existing file or a new file that the command will create.
For more information, see the cluster(1CL) man page.
You can store the file in any directory that is accessible to the other hosts that you will configure as cluster nodes.
Include or modify the values of the XML elements to reflect the cluster configuration that you want to create.
If you are duplicating an existing cluster, open the file that you created with the cluster export command.
If you are not duplicating an existing cluster, create a new file.
Base the file on the element hierarchy that is shown in the clconfiguration(5CL) man page.
To establish a cluster, the following components must have valid values in the cluster configuration XML file:
Cluster name
Cluster nodes
Cluster transport
If you are modifying configuration information that was exported from an existing cluster, note that some values that you must change to reflect the new cluster, such as node names, are used in the definitions of more than one cluster object.
See the clconfiguration(5CL) man page for details about the structure and content of the cluster configuration XML file.
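As an illustration only, a minimal file might contain elements for the three required components. The element names and attributes below are a sketch; the authoritative hierarchy and DTD are documented in the clconfiguration(5CL) man page:

```xml
<?xml version="1.0"?>
<!DOCTYPE cluster SYSTEM "/usr/cluster/lib/xml/cluster.dtd">
<!-- Illustrative sketch only; element names and attributes must match
     the DTD that is described in clconfiguration(5CL). -->
<cluster name="new-cluster">
  <nodeList>
    <node name="phys-newhost-1" id="1"/>
    <node name="phys-newhost-2" id="2"/>
  </nodeList>
  <clusterTransport>
    <!-- Transport adapters, switches, and cables go here. -->
  </clusterTransport>
</cluster>
```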
phys-schost# xmllint --valid --noout clconfigfile
See the xmllint(1) man page for more information.
This system is the control node.
phys-schost# clauth enable -n control-node
If you want to use the des (Diffie-Hellman) authentication protocol instead of the sys (unix) protocol, include -p des in the command.
phys-schost# clauth enable -p des -n control-node
For information about setting up DES authentication, see Administering Authentication With Secure RPC in Managing Kerberos and Other Authentication Services in Oracle Solaris 11.3.
phys-schost# cluster create -i clconfigfile
-i clconfigfile
Specifies the name of the cluster configuration XML file to use as the input source.
If services are not yet online for a node, wait until the state changes to online before you proceed to the next step.
phys-schost# svcs multi-user-server
STATE          STIME    FMRI
online         17:52:55 svc:/milestone/multi-user-server:default
phys-schost# clnode status
Output resembles the following.
=== Cluster Nodes ===

--- Node Status ---

Node Name                     Status
---------                     ------
phys-schost-1                 Online
phys-schost-2                 Online
phys-schost-3                 Online
For more information, see the clnode(1CL) man page.
For instructions on updating your software, see Chapter 11, Updating Your Software in Oracle Solaris Cluster 4.3 System Administration Guide.
Without this addition to the /etc/hosts.allow file, TCP wrappers prevent internode communication over RPC for cluster administration utilities.
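For example, assuming that the clprivnet0 addresses of the cluster nodes are 172.16.4.1 and 172.16.4.2 (placeholder values; use the addresses that ipadm show-addr reports on each node), the entry might resemble the following sketch:

```shell
# /etc/hosts.allow entry that permits RPC (rpcbind) traffic from the
# private-network addresses of all cluster nodes.  The addresses are
# placeholders; substitute the clprivnet0 addresses reported by
# 'ipadm show-addr' on each node.
rpcbind : 172.16.4.1 172.16.4.2
```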
# /usr/sbin/ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
clprivnet0/N      static   ok           ip-address/netmask-length
…
See How to Configure an IPv4 Interface in Configuring and Managing Network Components in Oracle Solaris 11.3 for more information about modifying the automounter map.
You must configure a quorum device if you created a two-node cluster. If you choose not to use the cluster configuration XML file to create a required quorum device, go instead to How to Configure Quorum Devices.
Follow instructions in How to Install and Configure Oracle Solaris Cluster Quorum Server Software.
See Oracle Solaris Cluster With Network-Attached Storage Device Manual.
phys-schost# xmllint --valid --noout clconfigfile
phys-schost# clquorum add -i clconfigfile device-name
device-name
Specifies the name of the device to configure as a quorum device.
phys-schost# clquorum reset
phys-schost# claccess deny-all
phys-schost# clnode set -p reboot_on_path_failure=enabled +
-p
Specifies the property to set.
reboot_on_path_failure=enabled
Enables automatic node reboot if failure of all monitored shared-disk paths occurs.
phys-schost# clnode show

=== Cluster Nodes ===

Node Name:                                      node
  …
  reboot_on_path_failure:                       enabled
  …
The following example duplicates the cluster configuration and quorum configuration of an existing two-node cluster to a new two-node cluster. The new cluster is installed with the Oracle Solaris OS. The cluster configuration is exported from the existing cluster node, phys-oldhost-1, to the cluster configuration XML file clusterconf.xml. The node names of the new cluster are phys-newhost-1 and phys-newhost-2. The device that is configured as a quorum device in the new cluster is d3.
The prompt name phys-newhost-N in this example indicates that the command is performed on both cluster nodes.
phys-newhost-N# /usr/sbin/clinfo -n
clinfo: node is not configured as part of a cluster: Operation not applicable

phys-oldhost-1# cluster export -o clusterconf.xml

Copy clusterconf.xml to phys-newhost-1 and modify the file with valid values

phys-newhost-1# xmllint --valid --noout clusterconf.xml
No errors are reported

phys-newhost-1# cluster create -i clusterconf.xml
phys-newhost-N# svcs multi-user-server
STATE          STIME    FMRI
online         17:52:55 svc:/milestone/multi-user-server:default
phys-newhost-1# clnode status
Output shows that both nodes are online

phys-newhost-1# clquorum add -i clusterconf.xml d3
phys-newhost-1# clquorum reset
After the cluster is fully established, you can duplicate the configuration of the other cluster components from the existing cluster. If you did not already do so, modify the values of the XML elements that you want to duplicate to reflect the cluster configuration that you are adding the component to. For example, if you are duplicating resource groups, ensure that the resourcegroupNodeList entry contains the valid node names for the new cluster, and not the node names from the cluster that you duplicated, unless the node names are the same.
To duplicate a cluster component, run the export subcommand of the object-oriented command for the cluster component that you want to duplicate. For more information about the command syntax and options, see the man page for the cluster object that you want to duplicate.
The following list describes the cluster components that you can create from a cluster configuration XML file after the cluster is established, along with the man page for the command that you use to duplicate each component:
Device groups: Solaris Volume Manager: cldevicegroup(1CL)
For Solaris Volume Manager, first create the disk sets that you specify in the cluster configuration XML file.
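As a sketch, a Solaris Volume Manager disk set can be created with the metaset utility before the device group is duplicated; the disk set and node names below are placeholders:

```shell
# Create the disk set that is named in the cluster configuration XML
# file and add the nodes that can master it (names are placeholders).
phys-schost# metaset -s dg-schost-1 -a -h phys-schost-1 phys-schost-2
```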
Resource Group Manager components
Resources: clresource(1CL)
Shared address resources: clressharedaddress(1CL)
Logical hostname resources: clreslogicalhostname(1CL)
Resource types: clresourcetype(1CL)
Resource groups: clresourcegroup(1CL)
You can use the -a option of the clresource, clressharedaddress, or clreslogicalhostname command to also duplicate the resource type and resource group that are associated with the resource that you duplicate. Otherwise, you must first add the resource type and resource group to the cluster before you add the resource.
NAS devices: clnasdevice(1CL)
You must first set up the NAS device as described in the device's documentation.
SNMP hosts: clsnmphost(1CL)
The clsnmphost create -i command requires that you specify a user password file with the -f option.
SNMP users: clsnmpuser(1CL)
Thresholds for monitoring system resources on cluster objects: cltelemetryattribute(1CL)
Troubleshooting
Unsuccessful configuration – If one or more nodes cannot join the cluster, or if the wrong configuration information was specified, first attempt to perform this procedure again. If that does not correct the problem, perform the procedure How to Unconfigure Oracle Solaris Cluster Software to Correct Installation Problems on each misconfigured node to remove it from the cluster configuration. You do not need to uninstall the Oracle Solaris Cluster software packages. Then perform this procedure again.
Next Steps
Go to How to Verify the Quorum Configuration and Installation Mode.