Installing Software on Global-Cluster Nodes
This section provides information and procedures to install software on the cluster nodes.
The following task map lists the tasks that you perform to install software on multiple-host or single-host global clusters. Complete the procedures in the order that is indicated.
Table 2-1 Task Map: Installing the Software

1. Plan the layout of your cluster configuration and prepare to install software. See How to Prepare for Cluster Software Installation.
2. (Optional) Install and configure quorum server software on the quorum-server host machine. See How to Install and Configure Quorum Server Software.
3. (Optional) Install Cluster Control Panel (CCP) software on an administrative console. See How to Install Cluster Control Panel Software on an Administrative Console.
4. Install the Solaris OS on all nodes. See How to Install Solaris Software.
5. (Optional) Configure internal disk mirroring. See How to Configure Internal Disk Mirroring.
6. (Optional) SPARC: Install Sun Logical Domains (LDoms) software and create domains. See SPARC: How to Install Sun Logical Domains Software and Create Domains.
7. (Optional) Install Veritas File System (VxFS) software. See How to Install Veritas File System Software.
8. Install the Oracle Solaris Cluster software and any data services that you will use. See How to Install Oracle Solaris Cluster Framework and Data-Service Software Packages.
9. (Optional) Install Sun QFS software. See How to Install Sun QFS Software.
10. Set up directory paths. See How to Set Up the Root Environment.
11. (Optional) Configure Solaris IP Filter. See How to Configure Solaris IP Filter.
How to Prepare for Cluster Software Installation

Before you begin to install software, make the following preparations.
Contact your Oracle sales representative for the most current information about supported cluster configurations.
Read the following manuals for information that can help you plan your cluster configuration and prepare your installation strategy:

Oracle Solaris Cluster 3.3 5/11 Release Notes - Restrictions, bug workarounds, and other late-breaking information.
Oracle Solaris Cluster Concepts Guide - Overview of the Oracle Solaris Cluster product.
Oracle Solaris Cluster Software Installation Guide (this manual) - Planning guidelines and procedures for installing and configuring Solaris, Oracle Solaris Cluster, and volume-manager software.
Oracle Solaris Cluster Data Services Planning and Administration Guide - Planning guidelines and procedures to install and configure data services.
The following is a partial list of products whose documentation you might need to reference during cluster installation:
Solaris OS
Solaris Volume Manager software
Sun QFS software
Veritas Volume Manager
Third-party applications
Caution - Plan your cluster installation completely. Identify the requirements for all data services and third-party products before you begin Solaris and Oracle Solaris Cluster software installation. Failure to do so might result in installation errors that require you to completely reinstall the Solaris and Oracle Solaris Cluster software. For example, some products have special hostname requirements; accommodate such requirements before you install Oracle Solaris Cluster software, because you cannot change hostnames after you install Oracle Solaris Cluster software.
Use the planning guidelines in Chapter 1, Planning the Oracle Solaris Cluster Configuration and in the Oracle Solaris Cluster Data Services Planning and Administration Guide to determine how to install and configure your cluster.
Fill out the cluster framework and data-services configuration worksheets that are referenced in the planning guidelines. Use your completed worksheets for reference during the installation and configuration tasks.
See Patches and Required Firmware Levels in Oracle Solaris Cluster 3.3 5/11 Release Notes for the location of patches and installation instructions.
Next Steps
If you want to use Cluster Control Panel software to connect from an administrative console to your cluster nodes, go to How to Install Cluster Control Panel Software on an Administrative Console.
Otherwise, choose the Solaris installation procedure to use.
To install the Solaris software first and then configure Oracle Solaris Cluster software by using the scinstall(1M) utility, go to How to Install Solaris Software.
To install and configure Solaris and Oracle Solaris Cluster software in the same operation (JumpStart method), go to How to Install Solaris and Oracle Solaris Cluster Software (JumpStart).
How to Install and Configure Quorum Server Software

Perform this procedure to configure a host server as a quorum server.
Before You Begin
Perform the following tasks:
Ensure that the machine that you choose for the quorum server has at least 1 Mbyte of disk space available for Oracle Java Web Console software installation.
Ensure that the quorum-server machine is connected to a public network that is accessible to the cluster nodes.
Disable the spanning tree algorithm on the Ethernet switches for the ports that are connected to the cluster public network where the quorum server will run.
(Optional) To use the installer program with a GUI, ensure that the DISPLAY environment variable is set:

# xhost +
# setenv DISPLAY nodename:0.0
If the volume management daemon (vold(1M)) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0 directory.
If you are installing the software packages on the SPARC platform, type the following command:
phys-schost# cd /cdrom/cdrom0/Solaris_sparc
If you are installing the software packages on the x86 platform, type the following command:
phys-schost# cd /cdrom/cdrom0/Solaris_x86
Start the installer program:

phys-schost# ./installer
Choose the Configure Later option.
Note - If the installer does not allow you to choose the Configure Later option, choose Configure Now.
After installation is finished, you can view any available installation log. See the Sun Java Enterprise System 5 Update 1 Installation Guide for UNIX for additional information about using the installer program.
phys-schost# eject cdrom
See Patches and Required Firmware Levels in Oracle Solaris Cluster 3.3 5/11 Release Notes for the location of patches and installation instructions.
quorumserver# PATH=$PATH:/usr/cluster/bin
quorumserver# MANPATH=$MANPATH:/usr/cluster/man
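These assignments affect only the current shell. To make them persist across logins, you might also append them to root's initialization file (a sketch, assuming a Bourne-compatible login shell):

quorumserver# cat >> $HOME/.profile <<'EOF'
PATH=$PATH:/usr/cluster/bin
MANPATH=$MANPATH:/usr/cluster/man
export PATH MANPATH
EOF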
Add the following entry to the /etc/scqsd/scqsd.conf file to specify configuration information about the quorum server.
Identify the quorum server by an instance name, a port number, or both. You must provide the port number; the instance name is optional.
If you provide an instance name, that name must be unique among your quorum servers.
If you do not provide an instance name, always refer to this quorum server by the port on which it listens.
/usr/cluster/lib/sc/scqsd [-d quorumdirectory] [-i instancename] -p port
-d quorumdirectory
The path to the directory where the quorum server can store quorum data. The quorum-server process creates one file per cluster in this directory to store cluster-specific quorum information. By default, the value of this option is /var/scqsd. This directory must be unique for each quorum server that you configure.

-i instancename
A unique name that you choose for the quorum-server instance.

-p port
The port number on which the quorum server listens for requests from the cluster.
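For example, the following entry uses the default quorum directory, a hypothetical instance name qs1, and port 9000:

/usr/cluster/lib/sc/scqsd -d /var/scqsd -i qs1 -p 9000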
Start the quorum server:

quorumserver# /usr/cluster/bin/clquorumserver start quorumserver
quorumserver
Identifies the quorum server. You can use the port number on which the quorum server listens. If you provided an instance name in the configuration file, you can use that name instead.
To start a single quorum server, provide either the instance name or the port number.
To start all quorum servers when you have multiple quorum servers configured, use the + operand.
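For example, to start the quorum server that listens on port 9000 (a hypothetical port number), or to start all configured quorum servers:

quorumserver# /usr/cluster/bin/clquorumserver start 9000
quorumserver# /usr/cluster/bin/clquorumserver start +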
Troubleshooting
The installer performs a simple pkgadd installation of the Quorum Server packages and sets up the necessary directories. The software consists of the following packages:
SUNWscqsr
SUNWscqsu
SUNWscqsman
The installation of these packages adds software to the /usr/cluster and /etc/scqsd directories. You cannot modify the location of the Quorum Server software.
If you receive an installation error message regarding the Quorum Server software, verify that the packages were properly installed.
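For example, you can verify the packages with the standard Solaris pkginfo command (a quick check, not a replacement for reviewing the installation log):

quorumserver# pkginfo SUNWscqsr SUNWscqsu SUNWscqsman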
Next Steps
If you want to use an administrative console to communicate with the cluster nodes, go to How to Install Cluster Control Panel Software on an Administrative Console.
Otherwise, go to How to Install Solaris Software.
How to Install Cluster Control Panel Software on an Administrative Console

Note - You are not required to use an administrative console. If you do not use an administrative console, perform administrative tasks from one designated node in the cluster. You cannot use this software to connect to Sun Logical Domains (LDoms) guest domains.
This procedure describes how to install the Cluster Control Panel (CCP) software on an administrative console. The CCP provides a single interface from which to start the cconsole, cssh, ctelnet, and crlogin tools. Each of these tools provides a multiple-window connection to a set of nodes, as well as a common window. You can use the common window to send input to all nodes at one time. For additional information, see the ccp(1M) man page.
You can use any desktop machine that runs a version of the Solaris OS that is supported by Oracle Solaris Cluster 3.3 5/11 software as an administrative console. If you are using Oracle Solaris Cluster software on a SPARC based system, you can also use the administrative console as a Sun Management Center console or server. See Sun Management Center documentation for information about how to install Sun Management Center software.
Before You Begin
Ensure that a supported version of the Solaris OS and any Solaris patches are installed on the administrative console. All platforms require at least the End User Solaris Software Group.
If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0 directory.
adminconsole# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
adminconsole# pkgadd -d . SUNWccon
adminconsole# pkgadd -d . pkgname …
When you install the Oracle Solaris Cluster man-page packages on the administrative console, you can view them from the administrative console before you install Oracle Solaris Cluster software on the cluster nodes or quorum server.
adminconsole# eject cdrom
Create an /etc/clusters file on the administrative console. Add your cluster name and the physical node name of each cluster node to the file.

adminconsole# vi /etc/clusters
clustername node1 node2
See the /opt/SUNWcluster/bin/clusters(4) man page for details.
Create an /etc/serialports file on the administrative console. Add an entry for each node in the cluster to the file. Specify the physical node name, the hostname of the console-access device, and the port number. Examples of a console-access device are a terminal concentrator (TC), a System Service Processor (SSP), and a Sun Fire system controller.

adminconsole# vi /etc/serialports
node1 ca-dev-hostname port
node2 ca-dev-hostname port
node1, node2
Physical names of the cluster nodes.

ca-dev-hostname
Hostname of the console-access device.

port
Serial port number, or the Secure Shell port number for Secure Shell connections.
Note these special instructions to create an /etc/serialports file:
For a Sun Fire 15000 system controller, use telnet(1) port number 23 for the serial port number of each entry.
For all other console-access devices, to connect to the console through a telnet connection, use the telnet serial port number, not the physical port number. To determine the telnet serial port number, add 5000 to the physical port number. For example, if a physical port number is 6, the telnet serial port number is 5006. (See the example entries after this list.)
For Sun Enterprise 10000 servers, also see the /opt/SUNWcluster/bin/serialports(4) man page for details and special considerations.
For Secure Shell connections to node consoles, specify for each node the name of the console-access device and the port number to use for secure connection. The default port number for Secure Shell is 22.
To connect the administrative console directly to the cluster nodes or through a management network, specify for each node its hostname and the port number that the node uses to connect to the administrative console or the management network.
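For example, the following /etc/serialports entries (with hypothetical node and terminal-concentrator names) connect through a terminal concentrator's physical ports 2 and 3, so the telnet serial port numbers are 5002 and 5003:

phys-schost-1 tc-hostname 5002
phys-schost-2 tc-hostname 5003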
Start the CCP utility:

adminconsole# /opt/SUNWcluster/bin/ccp &
Click the cconsole, cssh, crlogin, or ctelnet button in the CCP window to launch that tool. Alternately, you can start any of these tools directly. For example, to start ctelnet, type the following command:
adminconsole# /opt/SUNWcluster/bin/ctelnet &
The CCP software supports the following Secure Shell connections:
For secure connection to the node consoles, start the cconsole tool. Then from the Options menu of the Cluster Console window, enable the Use SSH check box.
For secure connection to the cluster nodes, use the cssh tool.
See How to Log Into the Cluster Remotely in Oracle Solaris Cluster System Administration Guide for additional information about how to use the CCP utility. Also see the ccp(1M) man page.
Next Steps
Determine whether the Solaris OS is already installed to meet Oracle Solaris Cluster software requirements. See Planning the Oracle Solaris OS for information about Oracle Solaris Cluster installation requirements for the Solaris OS.
If the Solaris OS meets Oracle Solaris Cluster requirements, go to How to Install Oracle Solaris Cluster Framework and Data-Service Software Packages.
If the Solaris OS does not meet Oracle Solaris Cluster requirements, install, reconfigure, or reinstall the Solaris OS as needed.
To install the Solaris OS alone, go to How to Install Solaris Software.
To use the scinstall custom JumpStart method to install both the Solaris OS and Oracle Solaris Cluster software, go to How to Install Solaris and Oracle Solaris Cluster Software (JumpStart).
How to Install Solaris Software

If you do not use the scinstall custom JumpStart installation method to install software, perform this procedure to install the Solaris OS on each node in the global cluster. See How to Install Solaris and Oracle Solaris Cluster Software (JumpStart) for more information about JumpStart installation of a cluster.
Tip - To speed installation, you can install the Solaris OS on each node at the same time.
If your nodes are already installed with the Solaris OS but do not meet Oracle Solaris Cluster installation requirements, you might need to reinstall the Solaris software. Follow the steps in this procedure to ensure subsequent successful installation of Oracle Solaris Cluster software. See Planning the Oracle Solaris OS for information about required root-disk partitioning and other Oracle Solaris Cluster installation requirements.
Before You Begin
Perform the following tasks:
Ensure that the hardware setup is complete and that connections are verified before you install Solaris software. See the Oracle Solaris Cluster Hardware Administration Collection and your server and storage device documentation for details.
Ensure that your cluster configuration planning is complete. See How to Prepare for Cluster Software Installation for requirements and guidelines.
Complete the Local File System Layout Worksheet.
If you use a naming service, add address-to-name mappings for all public hostnames and logical addresses to any naming services that clients use for access to cluster services. See Public-Network IP Addresses for planning guidelines. See your Solaris system-administrator documentation for information about using Solaris naming services.
As superuser, use the following command to start the cconsole utility:
adminconsole# /opt/SUNWcluster/bin/cconsole clustername &
The cconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.
Note - You must install all nodes in a cluster with the same version of the Solaris OS.
You can use any method that is normally used to install Solaris software. During Solaris software installation, perform the following steps:
Tip - To avoid the need to manually install Solaris software packages, install the Entire Solaris Software Group Plus OEM Support.
See Oracle Solaris Software Group Considerations for information about additional Solaris software requirements.
Note - Alternatively, you can omit the dedicated /globaldevices file system and instead use a lofi device. You specify the use of a lofi device to the scinstall command when you establish the cluster.
This series of installation procedures requires the following Oracle Solaris Cluster RBAC authorizations if the user is not superuser:
solaris.cluster.modify
solaris.cluster.admin
solaris.cluster.read
See Role-Based Access Control (Overview) in System Administration Guide: Security Services for more information about using RBAC roles. See the Oracle Solaris Cluster man pages for the RBAC authorization that each Oracle Solaris Cluster subcommand requires.
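For example, a superuser could grant these authorizations to an administrative user with the standard Solaris usermod command (the user name jdoe is hypothetical; adapt the command to your RBAC policy):

phys-schost# usermod -A solaris.cluster.modify,solaris.cluster.admin,solaris.cluster.read jdoe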
If you are adding a node to an existing cluster, create a mount point on the new node for each cluster file system. First, from another, active cluster node, display the name of each cluster file system:

phys-schost-1# mount | grep global | egrep -v node@ | awk '{print $1}'

Then, on the node that you are adding, create a mount point for each cluster file system that the command returned:

phys-schost-new# mkdir -p mountpoint
For example, if the mount command returned the file-system name /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the new node you are adding to the cluster.
phys-schost# grep vxio /etc/name_to_major
vxio NNN
If the Solaris software group that you installed does not include all the Solaris packages that you need, add the missing packages:

phys-schost# pkgadd -G -d . package …
You must add these packages only to the global zone. The -G option adds packages to the current zone only. This option also specifies that the packages are not propagated to any existing non-global zone or to any non-global zone that is created later.
Install any hardware-related patches, including those required for storage-array support. Also download any needed firmware that is contained in the hardware patches.
See Patches and Required Firmware Levels in Oracle Solaris Cluster 3.3 5/11 Release Notes for the location of patches and installation instructions.
x86: Set the default boot file. The setting of this value enables you to reboot the node if you are unable to access a login prompt.
grub edit> kernel /platform/i86pc/multiboot kmdb
Update the /etc/inet/hosts file on each node with all public IP addresses that are used in the cluster. Perform this step regardless of whether you are using a naming service.
Note - During establishment of a new cluster or new cluster node, the scinstall utility automatically adds the public IP address of each node that is being configured to the /etc/inet/hosts file.
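For example, /etc/inet/hosts entries use the standard hosts file format (the addresses and node names shown are hypothetical):

192.168.10.11   phys-schost-1
192.168.10.12   phys-schost-2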
SPARC: If you intend to use dynamic reconfiguration on Sun Enterprise 10000 servers, add the following entry to the /etc/system file on each node of the cluster:
set kernel_cage_enable=1
This entry becomes effective after the next system reboot. See your server documentation for more information about dynamic reconfiguration.
During cluster creation, the scinstall utility configures each set of public-network adapters that use the same subnet and are not already configured in an IPMP group into a single multiple-adapter IPMP group. The scinstall utility ignores any existing IPMP groups.

If you do not want to use the multiple-adapter IPMP groups that the scinstall utility configures during cluster creation, configure custom IPMP groups as you would in a stand-alone system. See Chapter 31, Administering IPMP (Tasks), in System Administration Guide: IP Services for details.
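As a sketch, on Solaris 10 a custom IPMP group can be configured through the adapter's /etc/hostname file (the adapter name qfe0, group name sc_ipmp0, and the use of link-based failure detection here are assumptions for illustration):

phys-schost# cat /etc/hostname.qfe0
phys-schost-1 netmask + broadcast + group sc_ipmp0 up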
Caution - If Oracle Solaris Cluster software is already installed, do not issue this command. Running the stmsboot command on an active cluster node might cause Solaris services to go into the maintenance state. Instead, follow instructions in the stmsboot(1M) man page for using the stmsboot command in an Oracle Solaris Cluster environment.
phys-schost# /usr/sbin/stmsboot -e
Enables Solaris I/O multipathing.
See the stmsboot(1M) man page for more information.
Next Steps
If your server supports the mirroring of internal hard drives and you want to configure internal disk mirroring, go to How to Configure Internal Disk Mirroring.
Otherwise, to install VxFS, go to How to Install Veritas File System Software.
Otherwise, install the Oracle Solaris Cluster software packages. Go to How to Install Oracle Solaris Cluster Framework and Data-Service Software Packages.
See Also
See the Oracle Solaris Cluster System Administration Guide for procedures to perform dynamic reconfiguration tasks in an Oracle Solaris Cluster configuration.
How to Configure Internal Disk Mirroring

Perform this procedure on each node of the global cluster to configure internal hardware RAID disk mirroring to mirror the system disk. This procedure is optional.
Note - Do not perform this procedure under either of the following circumstances:
Your servers do not support the mirroring of internal hard drives.
You have already established the cluster. Instead, perform Mirroring Internal Disks on Servers that Use Internal Hardware Disk Mirroring or Integrated Mirroring in Oracle Solaris Cluster 3.3 Hardware Administration Manual.
Before You Begin
Ensure that the Solaris operating system and any necessary patches are installed.
phys-schost# raidctl -c c1t0d0 c1t1d0

Creates the mirror of the primary disk to the mirror disk. Provide the name of your primary disk as the first argument and the name of the mirror disk as the second argument. The disk names shown are examples; use your server's actual device names.
For specifics about how to configure your server's internal disk mirroring, refer to the documents that shipped with your server and the raidctl(1M) man page.
Next Steps
SPARC: To create Sun Logical Domains (LDoms), go to SPARC: How to Install Sun Logical Domains Software and Create Domains.
Otherwise, to install VxFS, go to How to Install Veritas File System Software.
Otherwise, install the Oracle Solaris Cluster software packages. Go to How to Install Oracle Solaris Cluster Framework and Data-Service Software Packages.
SPARC: How to Install Sun Logical Domains Software and Create Domains

Perform this procedure to install Sun Logical Domains (LDoms) software on a physically clustered machine and to create I/O and guest domains.
Before You Begin
Perform the following tasks:
Ensure that the machine is SPARC hypervisor capable.
Have available Logical Domains (LDoms) 1.0.3 Administration Guide and Logical Domains (LDoms) 1.0.3 Release Notes.
Read the requirements and guidelines in SPARC: Guidelines for Sun Logical Domains in a Cluster.
If you create guest domains, adhere to the Oracle Solaris Cluster guidelines for creating guest domains in a cluster.
Next Steps
If your server supports the mirroring of internal hard drives and you want to configure internal disk mirroring, go to How to Configure Internal Disk Mirroring.
Otherwise, to install VxFS, go to How to Install Veritas File System Software.
Otherwise, install the Oracle Solaris Cluster software packages. Go to How to Install Oracle Solaris Cluster Framework and Data-Service Software Packages.
How to Install Veritas File System Software

To use Veritas File System (VxFS) software in the cluster, perform this procedure on each node of the global cluster. Follow the procedures in your VxFS installation documentation to install VxFS software on each node.
See Patches and Required Firmware Levels in Oracle Solaris Cluster 3.3 5/11 Release Notes for the location of patches and installation instructions.
In the /etc/system file on each node, set the following values:

set rpcmod:svc_default_stksize=0x8000
set lwp_default_stksize=0x6000
These changes become effective at the next system reboot.
Oracle Solaris Cluster software requires a minimum rpcmod:svc_default_stksize setting of 0x8000. Because VxFS installation sets the value of the rpcmod:svc_default_stksize variable to 0x4000, you must manually set the value to 0x8000 after VxFS installation is complete.
You must set the lwp_default_stksize variable in the /etc/system file to override the VxFS default value of 0x4000.
Next Steps
Install the Oracle Solaris Cluster software packages. Go to How to Install Oracle Solaris Cluster Framework and Data-Service Software Packages.
How to Install Oracle Solaris Cluster Framework and Data-Service Software Packages

Follow this procedure to use the installer program to perform one or more of the following installation tasks:
To install the Oracle Solaris Cluster framework software packages on each node in the global cluster. These nodes can be physical machines or (SPARC only) Sun Logical Domains (LDoms) I/O domains or guest domains, or a combination of any of these types of nodes.
To install Oracle Solaris Cluster framework software on the master node where you will create a flash archive for a JumpStart installation. See How to Install Solaris and Oracle Solaris Cluster Software (JumpStart) for more information about a JumpStart installation of a global cluster.
To install data services.
Note - This procedure installs data services only to the global zone. To install data services to be visible only from within a certain non-global zone, see How to Create a Non-Global Zone on a Global-Cluster Node.
Note - This procedure uses the interactive form of the installer program. To use the noninteractive form of the installer program, such as when developing installation scripts, see Chapter 5, Installing in Silent Mode, in Sun Java Enterprise System 5 Update 1 Installation Guide for UNIX.
Before You Begin
Perform the following tasks:
Ensure that the Solaris OS is installed to support Oracle Solaris Cluster software.
If Solaris software is already installed on the node, you must ensure that the Solaris installation meets the requirements for Oracle Solaris Cluster software and any other software that you intend to install on the cluster. See How to Install Solaris Software for more information about installing Solaris software to meet Oracle Solaris Cluster software requirements.
Have available the DVD-ROM.
During the installation of the Solaris OS, a restricted network profile is used that disables external access for certain network services. The restricted services include the following services that affect cluster functionality:
The RPC communication service, which is required for cluster communication
The Oracle Java Web Console service, which is required to use the Oracle Solaris Cluster Manager GUI
The following steps restore Solaris functionality that is used by the Oracle Solaris Cluster framework but that is prevented if a restricted network profile is used.
Restore external access to RPC communication:

phys-schost# svccfg
svc:> select network/rpc/bind
svc:/network/rpc/bind> setprop config/local_only=false
svc:/network/rpc/bind> quit
phys-schost# svcadm refresh network/rpc/bind:default
phys-schost# svcprop network/rpc/bind:default | grep local_only
The output of the last command should show that the local_only property is now set to false.
Restore external access to the Oracle Java Web Console:

phys-schost# svccfg
svc:> select system/webconsole
svc:/system/webconsole> setprop options/tcp_listen=true
svc:/system/webconsole> quit
phys-schost# /usr/sbin/smcwebserver restart
phys-schost# netstat -a | grep 6789
The output of the last command should return an entry for 6789, which is the port number that is used to connect to Oracle Java Web Console.
For more information about what services the restricted network profile restricts to local connections, see Planning Network Security in Solaris 10 10/09 Installation Guide: Planning for Installation and Upgrade.
(Optional) To use the installer program with a GUI, ensure that the DISPLAY environment variable is set:

% xhost +
% setenv DISPLAY nodename:0.0
If you do not make these settings, the installer program runs in text-based mode.
Note - If your physically clustered machines are configured with LDoms, install Oracle Solaris Cluster software only in I/O domains or guest domains.
If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0 directory.
If you are installing the software packages on the SPARC platform, type the following command:
phys-schost# cd /cdrom/cdrom0/Solaris_sparc
If you are installing the software packages on the x86 platform, type the following command:
phys-schost# cd /cdrom/cdrom0/Solaris_x86
Start the installer program:

phys-schost# ./installer
See the Sun Java Enterprise System 5 Update 1 Installation Guide for UNIX for additional information about using the different forms and features of the installer program.
If you do not want to install Oracle Solaris Cluster Manager, formerly SunPlex Manager, deselect it.
Note - You must install Oracle Solaris Cluster Manager either on all nodes of the cluster or on none.
If you want to install Oracle Solaris Cluster Geographic Edition software, select it.
After the cluster is established, see Oracle Solaris Cluster Geographic Edition Installation Guide for further installation procedures.
Choose Configure Later when prompted whether to configure Oracle Solaris Cluster framework software.
After installation is finished, you can view any available installation log.
phys-schost# eject cdrom
See Patches and Required Firmware Levels in Oracle Solaris Cluster 3.3 5/11 Release Notes for the location of patches and installation instructions.
This entry becomes effective after the next system reboot.
Next Steps
If you want to install Sun QFS file system software, follow the procedures for initial installation. See How to Install Sun QFS Software.
Otherwise, to set up the root user environment, go to How to Set Up the Root Environment.
How to Install Sun QFS Software

Perform this procedure on each node in the global cluster.
Ensure that the Oracle Solaris Cluster software is installed. See How to Install Oracle Solaris Cluster Framework and Data-Service Software Packages.
Follow procedures for initial installation in Installing Sun QFS.
Next Steps
Set up the root user environment. Go to How to Set Up the Root Environment.
How to Set Up the Root Environment

Note - In an Oracle Solaris Cluster configuration, user initialization files for the various shells must verify that they are run from an interactive shell. The files must verify this before they attempt to output to the terminal. Otherwise, unexpected behavior or interference with data services might occur. See Customizing a User’s Work Environment in System Administration Guide: Basic Administration for more information.
Perform this procedure on each node in the global cluster.
Modify root's initialization files to set the PATH to include /usr/cluster/bin and the MANPATH to include /usr/cluster/man. See your Solaris OS documentation, volume manager documentation, and other application documentation for additional file paths to set.
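The following sketch shows one way to apply these settings in a Bourne-compatible /.profile while honoring the Note above: the path variables are set for every shell, but terminal output is produced only from an interactive shell.

PATH=$PATH:/usr/cluster/bin
MANPATH=$MANPATH:/usr/cluster/man
export PATH MANPATH
# Write to the terminal only when the shell is interactive.
case $- in
*i*) echo "Oracle Solaris Cluster paths loaded" ;;
esac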
Next Steps
If you want to use Solaris IP Filter, go to How to Configure Solaris IP Filter.
Otherwise, configure Oracle Solaris Cluster software on the cluster nodes. Go to Establishing a New Global Cluster or New Global-Cluster Node.
How to Configure Solaris IP Filter

Perform this procedure to configure Solaris IP Filter on the global cluster.
Note - Use Solaris IP Filter only with failover data services. The use of Solaris IP Filter with scalable data services is not supported.
For more information about the Solaris IP Filter feature, see Part IV, IP Security, in System Administration Guide: IP Services.
Before You Begin
Read the guidelines and restrictions to follow when you configure Solaris IP Filter in a cluster. See the “IP Filter” bullet item in Oracle Solaris OS Feature Restrictions.
Observe the following guidelines and requirements when you add filter rules to Oracle Solaris Cluster nodes.
In the ipf.conf file on each node, add rules to explicitly allow cluster interconnect traffic to pass unfiltered. Rules that are not interface specific are applied to all interfaces, including cluster interconnects. Ensure that traffic on these interfaces is not blocked mistakenly. If interconnect traffic is blocked, the IP Filter configuration interferes with cluster handshakes and infrastructure operations.
For example, suppose the following rules are currently used:
# Default block TCP/UDP unless some later rule overrides
block return-rst in proto tcp/udp from any to any

# Default block ping unless some later rule overrides
block return-rst in proto icmp all
To unblock cluster interconnect traffic, add the following rules. The subnets used are for example only. Derive the subnets to use by using the ifconfig interface command.
# Unblock cluster traffic on 172.16.0.128/25 subnet (physical interconnect)
pass in quick proto tcp/udp from 172.16.0.128/25 to any
pass out quick proto tcp/udp from 172.16.0.128/25 to any

# Unblock cluster traffic on 172.16.1.0/25 subnet (physical interconnect)
pass in quick proto tcp/udp from 172.16.1.0/25 to any
pass out quick proto tcp/udp from 172.16.1.0/25 to any

# Unblock cluster traffic on 172.16.4.0/23 (clprivnet0 subnet)
pass in quick proto tcp/udp from 172.16.4.0/23 to any
pass out quick proto tcp/udp from 172.16.4.0/23 to any
You can specify either the adapter name or the IP address for a cluster private network. For example, the following rule specifies a cluster private network by its adapter's name:
# Allow all traffic on cluster private networks.
pass in quick on e1000g1 all
…
Oracle Solaris Cluster software fails over network addresses from node to node. No special procedure or code is needed at the time of failover.
All filtering rules that reference IP addresses of logical hostname and shared address resources must be identical on all cluster nodes.
Rules on a standby node reference a nonexistent IP address. Such rules are still part of the IP filter's active rule set and become effective when the node receives the address after a failover.
All filtering rules must be the same for all NICs in the same IPMP group. In other words, if a rule is interface-specific, the same rule must also exist for all other interfaces in the same IPMP group.
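For example (a sketch that assumes hypothetical adapters qfe0 and qfe1 in the same IPMP group), an interface-specific rule must be duplicated for each member of the group:

# Pass inbound HTTP traffic identically on both IPMP group members.
pass in quick on qfe0 proto tcp from any to any port = 80
pass in quick on qfe1 proto tcp from any to any port = 80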
For more information about Solaris IP Filter rules, see the ipf(4) man page.
Enable the IP Filter service:

phys-schost# svcadm enable /network/ipfilter:default
Next Steps
Configure Oracle Solaris Cluster software on the cluster nodes. Go to Establishing a New Global Cluster or New Global-Cluster Node.