Oracle Solaris Cluster Software Installation Guide, Oracle Solaris Cluster 3.3 3/13
1. Planning the Oracle Solaris Cluster Configuration
2. Installing Software on Global-Cluster Nodes
How to Prepare for Cluster Software Installation
How to Install and Configure Quorum Server Software
How to Install Cluster Control Panel Software on an Administrative Console
How to Install Oracle Solaris Software
How to Configure Internal Disk Mirroring
SPARC: How to Install Oracle VM Server for SPARC Software and Create Domains
How to Install Oracle Solaris Cluster Framework and Data-Service Software Packages
How to Install Sun QFS Software
3. Establishing the Global Cluster
4. Configuring Solaris Volume Manager Software
5. Creating a Cluster File System
6. Creating Non-Global Zones and Zone Clusters
This section provides information and procedures to install software on the cluster nodes.
The following task map lists the tasks that you perform to install software on multiple-host or single-host global clusters. Complete the procedures in the order that is indicated.
Table 2-1 Task Map: Installing the Software
Before you begin to install software, make the following preparations.
See Cluster Nodes in Oracle Solaris Cluster Concepts Guide for information about physical and virtual machines that are supported as cluster nodes.
Contact your Oracle sales representative for the most current information about supported cluster configurations.
Oracle Solaris Cluster 3.3 3/13 Release Notes - Restrictions, bug workarounds, and other late-breaking information.
Oracle Solaris Cluster Concepts Guide - Overview of the Oracle Solaris Cluster product.
Oracle Solaris Cluster Software Installation Guide (this manual) - Planning guidelines and procedures for installing and configuring Oracle Solaris, Oracle Solaris Cluster, and volume-manager software.
Oracle Solaris Cluster Data Services Planning and Administration Guide - Planning guidelines and procedures to install and configure data services.
The following is a partial list of products whose documentation you might need to reference during cluster installation:
Oracle Solaris OS
Solaris Volume Manager software
Sun QFS software
Third-party applications
Use the planning guidelines in Chapter 1, Planning the Oracle Solaris Cluster Configuration and in the Oracle Solaris Cluster Data Services Planning and Administration Guide to determine how to install and configure your cluster.
Caution - Plan your cluster installation completely. Identify requirements for all data services and third-party products before you begin Oracle Solaris and Oracle Solaris Cluster software installation. Failure to do so might result in installation errors that require you to completely reinstall the Oracle Solaris and Oracle Solaris Cluster software. You must accommodate these requirements before you install Oracle Solaris Cluster software, because you cannot change hostnames after that software is installed.
See Patches and Required Firmware Levels in Oracle Solaris Cluster 3.3 3/13 Release Notes for the location of patches and installation instructions.
Next Steps
If you want to install a machine as a quorum server to use as the quorum device in your cluster, go next to How to Install and Configure Quorum Server Software.
Otherwise, if you want to use Cluster Control Panel software to connect from an administrative console to your cluster nodes, go to How to Install Cluster Control Panel Software on an Administrative Console.
Otherwise, choose the Oracle Solaris installation procedure to use.
To first install Oracle Solaris software and then configure Oracle Solaris Cluster software by using the scinstall(1M) utility, go to How to Install Oracle Solaris Software.
To install and configure both Oracle Solaris and Oracle Solaris Cluster software in the same operation (JumpStart method), go to How to Install Oracle Solaris and Oracle Solaris Cluster Software (JumpStart).
Perform this procedure to configure a host server as a quorum server.
Before You Begin
Perform the following tasks:
Ensure that the machine that you choose for the quorum server has at least 1 Mbyte of disk space available for Oracle Java Web Console software installation.
Ensure that the quorum-server machine is connected to a public network that is accessible to the cluster nodes.
Disable the spanning tree algorithm on the Ethernet switches for the ports that are connected to the cluster public network where the quorum server will run.
Use the following command if you want to ensure that the installer program can display the GUI.
# ssh -X [-l root] quorumserver
If the volume management daemon (vold(1M)) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0 directory.
If you are installing the software packages on the SPARC platform, type the following command:
phys-schost# cd /cdrom/cdrom0/Solaris_sparc
If you are installing the software packages on the x86 platform, type the following command:
phys-schost# cd /cdrom/cdrom0/Solaris_x86
phys-schost# ./installer
Choose the Configure Later option.
Note - If the installer does not allow you to choose the Configure Later option, choose Configure Now.
After installation is finished, you can view any available installation log. See the Sun Java Enterprise System 7 Installation and Upgrade Guide for additional information about using the installer program.
phys-schost# eject cdrom
See Patches and Required Firmware Levels in Oracle Solaris Cluster 3.3 3/13 Release Notes for the location of patches and installation instructions.
quorumserver# PATH=$PATH:/usr/cluster/bin
quorumserver# MANPATH=$MANPATH:/usr/cluster/man
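The two assignments above apply only to the current shell session. As a sketch of how to make them persistent (assuming a Bourne-style root profile; the path /tmp/profile.example is used here purely for illustration and stands in for the real profile file):

```shell
# Hypothetical sketch: append the quorum-server paths to a profile file.
# On a real quorum server this would be root's profile (for example, /.profile);
# /tmp/profile.example is only an illustrative stand-in.
profile=/tmp/profile.example
cat >> "$profile" <<'EOF'
PATH=$PATH:/usr/cluster/bin
MANPATH=$MANPATH:/usr/cluster/man
export PATH MANPATH
EOF
grep -c cluster "$profile"
```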
Add the following entry to the /etc/scqsd/scqsd.conf file to specify configuration information about the quorum server.
Identify the quorum server by a port number and, optionally, an instance name. You must provide the port number; the instance name is optional.
If you provide an instance name, that name must be unique among your quorum servers.
If you do not provide an instance name, always refer to this quorum server by the port on which it listens.
/usr/cluster/lib/sc/scqsd [-d quorumdirectory] [-i instancename] -p port
The path to the directory where the quorum server can store quorum data.
The quorum-server process creates one file per cluster in this directory to store cluster-specific quorum information.
By default, the value of this option is /var/scqsd. This directory must be unique for each quorum server that you configure.
A unique name that you choose for the quorum-server instance.
The port number on which the quorum server listens for requests from the cluster.
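Putting these options together, a complete scqsd.conf entry might look like the following. The directory /var/scqsd/qs1, instance name qs1, and port 9000 are illustrative values chosen for this sketch, not defaults; the entry is written to a temporary file here only for demonstration.

```shell
# Hypothetical sketch of a single /etc/scqsd/scqsd.conf entry.
# qs1 and 9000 are example values; on a real quorum server you would
# edit /etc/scqsd/scqsd.conf directly.
conf=/tmp/scqsd.conf.example
cat > "$conf" <<'EOF'
/usr/cluster/lib/sc/scqsd -d /var/scqsd/qs1 -i qs1 -p 9000
EOF
grep scqsd "$conf"
```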
quorumserver# /usr/cluster/bin/clquorumserver start quorumserver
Identifies the quorum server. You can use the port number on which the quorum server listens. If you provided an instance name in the configuration file, you can use that name instead.
To start a single quorum server, provide either the instance name or the port number.
To start all quorum servers when you have multiple quorum servers configured, use the + operand.
Troubleshooting
The installer performs a simple pkgadd installation of the Quorum Server packages and sets up the necessary directories. The software consists of the following packages:
SUNWscqsr
SUNWscqsu
SUNWscqsman
The installation of these packages adds software to the /usr/cluster and /etc/scqsd directories. You cannot modify the location of the Quorum Server software.
If you receive an installation error message regarding the Quorum Server software, verify that the packages were properly installed.
Next Steps
If you want to use an administrative console to communicate with the cluster nodes, go to How to Install Cluster Control Panel Software on an Administrative Console.
Otherwise, go to How to Install Oracle Solaris Software.
Note - You are not required to use an administrative console. If you do not use an administrative console, perform administrative tasks from one designated node in the cluster.
You cannot use this software to connect to Oracle VM Server for SPARC guest domains.
This procedure describes how to install the Cluster Control Panel (CCP) software on an administrative console. The CCP provides a single interface from which to start the cconsole, cssh, ctelnet, and crlogin tools. Each of these tools provides a multiple-window connection to a set of nodes, as well as a common window. You can use the common window to send input to all nodes at one time. For additional information, see the ccp(1M) man page.
You can use any desktop machine that runs a version of the Oracle Solaris OS that is supported by Oracle Solaris Cluster 3.3 3/13 software as an administrative console.
Before You Begin
Ensure that a supported version of the Oracle Solaris OS and any Oracle Solaris patches are installed on the administrative console. All platforms require at least the End User Oracle Solaris Software Group.
If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0 directory.
adminconsole# cd /cdrom/cdrom0/Solaris_arch/Product/sun_cluster/Solaris_ver/Packages/
adminconsole# pkgadd -d . SUNWccon
adminconsole# pkgadd -d . pkgname …
When you install the Oracle Solaris Cluster man-page packages on the administrative console, you can view them from the administrative console before you install Oracle Solaris Cluster software on the cluster nodes or quorum server.
adminconsole# eject cdrom
Add your cluster name and the physical node name of each cluster node to the file.
adminconsole# vi /etc/clusters
clustername node1 node2
See the /opt/SUNWcluster/bin/clusters(4) man page for details.
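As a concrete sketch, an /etc/clusters entry is a single line consisting of the cluster name followed by its node names. The names sc-cluster, phys-schost-1, and phys-schost-2 below are hypothetical, and the content is written to a temporary file rather than the real /etc/clusters:

```shell
# Hypothetical /etc/clusters content, written to a temp file for illustration.
clusters=/tmp/clusters.example
cat > "$clusters" <<'EOF'
sc-cluster phys-schost-1 phys-schost-2
EOF
cat "$clusters"
```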
Add an entry for each node in the cluster to the file. Specify the physical node name, the hostname of the console-access device, and the port number. Examples of a console-access device are a terminal concentrator (TC) and a Sun Fire system controller.
adminconsole# vi /etc/serialports
node1 ca-dev-hostname port
node2 ca-dev-hostname port
Physical names of the cluster nodes.
Hostname of the console-access device.
Serial port number, or the Secure Shell port number for Secure Shell connections.
Note these special instructions to create an /etc/serialports file:
For a Sun Fire 15000 system controller, use telnet(1) port number 23 for the serial port number of each entry.
For all other console-access devices, to connect to the console through a telnet connection, use the telnet serial port number, not the physical port number. To determine the telnet serial port number, add 5000 to the physical port number. For example, if a physical port number is 6, the telnet serial port number is 5006.
For Secure Shell connections to node consoles, specify for each node the name of the console-access device and the port number to use for secure connection. The default port number for Secure Shell is 22.
To connect the administrative console directly to the cluster nodes or through a management network, specify for each node its hostname and the port number that the node uses to connect to the administrative console or the management network.
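The telnet-port arithmetic and the resulting /etc/serialports entries can be sketched as follows. The node names, the terminal-concentrator name tc1, and physical ports 6 and 7 are hypothetical, and the file is a temporary stand-in for /etc/serialports:

```shell
# Hypothetical sketch of /etc/serialports entries for a two-node cluster
# reached through a terminal concentrator named tc1 on physical ports 6 and 7.
# For telnet connections, the serial port number is the physical port + 5000.
phys_port1=6
phys_port2=7
tport1=$((phys_port1 + 5000))   # 5006
tport2=$((phys_port2 + 5000))   # 5007
serialports=/tmp/serialports.example
printf '%s\n' \
  "phys-schost-1 tc1 $tport1" \
  "phys-schost-2 tc1 $tport2" > "$serialports"
cat "$serialports"
```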
adminconsole# /opt/SUNWcluster/bin/ccp &
Click the cconsole, cssh, crlogin, or ctelnet button in the CCP window to launch that tool. Alternately, you can start any of these tools directly. For example, to start ctelnet, type the following command:
adminconsole# /opt/SUNWcluster/bin/ctelnet &
The CCP software supports the following Secure Shell connections:
For secure connection to the node consoles, start the cconsole tool. Then from the Options menu of the Cluster Console window, enable the Use SSH check box.
For secure connection to the cluster nodes, use the cssh tool.
See the procedure How to Log Into the Cluster Remotely in Oracle Solaris Cluster System Administration Guide for additional information about how to use the CCP utility. Also see the ccp(1M) man page.
Next Steps
Determine whether the Oracle Solaris OS is already installed to meet Oracle Solaris Cluster software requirements. See Planning the Oracle Solaris OS for information about Oracle Solaris Cluster installation requirements for the Oracle Solaris OS.
If the Oracle Solaris OS meets Oracle Solaris Cluster requirements, go to How to Install Oracle Solaris Cluster Framework and Data-Service Software Packages.
If the Oracle Solaris OS does not meet Oracle Solaris Cluster requirements, install, reconfigure, or reinstall the Oracle Solaris OS as needed.
To install the Oracle Solaris OS alone, go to How to Install Oracle Solaris Software.
To use the scinstall custom JumpStart method to install both the Oracle Solaris OS and Oracle Solaris Cluster software, go to How to Install Oracle Solaris and Oracle Solaris Cluster Software (JumpStart).
If you do not use the scinstall custom JumpStart installation method to install software, perform this procedure to install the Oracle Solaris OS on each node in the global cluster. See How to Install Oracle Solaris and Oracle Solaris Cluster Software (JumpStart) for more information about JumpStart installation of a cluster.
Tip - To speed installation, you can install the Oracle Solaris OS on each node at the same time.
If your nodes are already installed with the Oracle Solaris OS but do not meet Oracle Solaris Cluster installation requirements, you might need to reinstall the Oracle Solaris software. Follow the steps in this procedure to ensure subsequent successful installation of Oracle Solaris Cluster software. See Planning the Oracle Solaris OS for information about required root-disk partitioning and other Oracle Solaris Cluster installation requirements.
Before You Begin
Perform the following tasks:
Ensure that the hardware setup is complete and that connections are verified before you install Oracle Solaris software. See the Oracle Solaris Cluster 3.3 3/13 Hardware Administration Manual and your server and storage device documentation for details.
Ensure that your cluster configuration planning is complete. See How to Prepare for Cluster Software Installation for requirements and guidelines.
If you use a naming service, add address-to-name mappings for all public hostnames and logical addresses to any naming services that clients use for access to cluster services. See Public-Network IP Addresses for planning guidelines. See your Oracle Solaris system-administrator documentation for information about using Oracle Solaris naming services.
As superuser, use the following command to start the cconsole utility:
adminconsole# /opt/SUNWcluster/bin/cconsole clustername &
The cconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.
Note - You must install all nodes in a cluster with the same version of the Oracle Solaris OS.
You can use any method that is normally used to install Oracle Solaris software. During Oracle Solaris software installation, perform the following steps:
Tip - To avoid the need to manually install Oracle Solaris software packages, install the Entire Oracle Solaris Software Group Plus OEM Support.
See Oracle Solaris Software Group Considerations for information about additional Oracle Solaris software requirements.
Note - Do not create this file system if you plan to use a lofi device, which is the default. You specify the use of a lofi device to the scinstall command when you establish the cluster.
This series of installation procedures requires the following Oracle Solaris Cluster RBAC authorizations if the user is not superuser:
solaris.cluster.modify
solaris.cluster.admin
solaris.cluster.read
See Role-Based Access Control (Overview) in System Administration Guide: Security Services for more information about using RBAC roles. See the Oracle Solaris Cluster man pages for the RBAC authorization that each Oracle Solaris Cluster subcommand requires.
phys-schost-1# mount | grep global | egrep -v node@ | awk '{print $1}'
phys-schost-new# mkdir -p mountpoint
For example, if the mount command returned the file-system name /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the new node you are adding to the cluster.
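The two steps above can be sketched as a small loop: capture the list of cluster file-system mount points on an existing node, then recreate each one on the node being added. The sample mount points and the staging directory below are illustrative; on a real node the list comes from the mount pipeline and mkdir -p runs against / directly.

```shell
# Hypothetical sketch: recreate cluster file-system mount points on a new node.
# In practice, the list comes from an existing node:
#   mount | grep global | egrep -v node@ | awk '{print $1}'
mountpoints="/global/dg-schost-1 /global/dg-schost-2"   # example values
stage=$(mktemp -d)              # illustrative stand-in for / on the new node
for mp in $mountpoints; do
  mkdir -p "${stage}${mp}"
done
ls "${stage}/global"
```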
phys-schost# pkgadd -G -d . package …
You must add these packages only to the global zone. The -G option adds packages to the current zone only. This option also specifies that the packages are not propagated to any existing non-global zone or to any non-global zone that is created later.
Include those patches for storage-array support. Also download any needed firmware that is contained in the hardware patches.
See Patches and Required Firmware Levels in Oracle Solaris Cluster 3.3 3/13 Release Notes for the location of patches and installation instructions.
The setting of this value enables you to reboot the node if you are unable to access a login prompt.
grub edit> kernel /platform/i86pc/multiboot kmdb
Perform this step regardless of whether you are using a naming service.
Note - During establishment of a new cluster or new cluster node, the scinstall utility automatically adds the public IP address of each node that is being configured to the /etc/inet/hosts file.
If you do not want to use the multiple-adapter IPMP groups that the scinstall utility configures during cluster creation, configure custom IPMP groups as you would in a stand-alone system. See Chapter 28, Administering IPMP (Tasks), in Oracle Solaris Administration: IP Services for details.
During cluster creation, the scinstall utility configures each set of public-network adapters that use the same subnet and are not already configured in an IPMP group into a single multiple-adapter IPMP group. The scinstall utility ignores any existing IPMP groups.
Caution - If Oracle Solaris Cluster software is already installed, do not issue this command. Running the stmsboot command on an active cluster node might cause Oracle Solaris services to go into the maintenance state. Instead, follow instructions in the stmsboot(1M) man page for using the stmsboot command in an Oracle Solaris Cluster environment.
phys-schost# /usr/sbin/stmsboot -e
Enables Oracle Solaris I/O multipathing.
See the stmsboot(1M) man page for more information.
Next Steps
If your server supports the mirroring of internal hard drives and you want to configure internal disk mirroring, go to How to Configure Internal Disk Mirroring.
Otherwise, install the Oracle Solaris Cluster software packages. Go to How to Install Oracle Solaris Cluster Framework and Data-Service Software Packages.
See Also
See the Oracle Solaris Cluster System Administration Guide for procedures to perform dynamic reconfiguration tasks in an Oracle Solaris Cluster configuration.
Perform this procedure on each node of the global cluster to configure internal hardware RAID disk mirroring to mirror the system disk. This procedure is optional.
Note - Do not perform this procedure under either of the following circumstances:
Your servers do not support the mirroring of internal hard drives.
You have already established the cluster. Instead, perform Mirroring Internal Disks on Servers that Use Internal Hardware Disk Mirroring or Integrated Mirroring in Oracle Solaris Cluster 3.3 3/13 Hardware Administration Manual.
Before You Begin
Ensure that the Oracle Solaris operating system and any necessary patches are installed.
phys-schost# raidctl -c c1t0d0 c1t1d0
Creates a mirror of the primary disk on the mirror disk. Enter the name of your primary disk as the first argument and the name of the mirror disk as the second argument.
For specifics about how to configure your server's internal disk mirroring, refer to the documents that shipped with your server and the raidctl(1M) man page.
Next Steps
SPARC: To install Oracle VM Server for SPARC software and create domains, go to SPARC: How to Install Oracle VM Server for SPARC Software and Create Domains.
Otherwise, install the Oracle Solaris Cluster software packages. Go to How to Install Oracle Solaris Cluster Framework and Data-Service Software Packages.
Perform this procedure to install Oracle VM Server for SPARC software on a physically clustered machine and to create I/O and guest domains.
Before You Begin
Perform the following tasks:
Ensure that the machine is SPARC hypervisor capable.
Have available Logical Domains (LDoms) 1.0.3 Administration Guide and Logical Domains (LDoms) 1.0.3 Release Notes.
Read the requirements and guidelines in SPARC: Guidelines for Oracle VM Server for SPARC in a Cluster.
If you create guest domains, adhere to the Oracle Solaris Cluster guidelines for creating guest domains in a cluster.
Next Steps
If your server supports the mirroring of internal hard drives and you want to configure internal disk mirroring, go to How to Configure Internal Disk Mirroring.
Otherwise, install the Oracle Solaris Cluster software packages. Go to How to Install Oracle Solaris Cluster Framework and Data-Service Software Packages.
Follow this procedure to use the installer program to perform one or more of the following installation tasks:
To install the Oracle Solaris Cluster framework software packages on each node in the global cluster. These nodes can be physical machines or (SPARC only) Oracle VM Server for SPARC I/O domains or guest domains, or a combination of any of these types of nodes.
Note - If your physically clustered machines are configured with Oracle VM Server for SPARC, install Oracle Solaris Cluster software only in I/O domains or guest domains.
To install Oracle Solaris Cluster framework software on the master node where you will create a flash archive for a JumpStart installation. See How to Install Oracle Solaris and Oracle Solaris Cluster Software (JumpStart) for more information about a JumpStart installation of a global cluster.
To install data services.
Note - This procedure installs data services only to the global zone. To install data services to be visible only from within a certain non-global zone, see How to Create a Non-Global Zone on a Global-Cluster Node.
Note - This procedure uses the interactive form of the installer program. To use the noninteractive form of the installer program, such as when developing installation scripts, see Chapter 5, Installing in Silent Mode, in Sun Java Enterprise System 5 Update 1 Installation Guide for UNIX.
Before You Begin
Perform the following tasks:
Ensure that the Oracle Solaris OS is installed to support Oracle Solaris Cluster software.
If Oracle Solaris software is already installed on the node, you must ensure that the Oracle Solaris installation meets the requirements for Oracle Solaris Cluster software and any other software that you intend to install on the cluster. See How to Install Oracle Solaris Software for more information about installing Oracle Solaris software to meet Oracle Solaris Cluster software requirements.
Have available the DVD-ROM.
During the installation of the Oracle Solaris OS, a restricted network profile is used that disables external access for certain network services. The restricted services include the following services that affect cluster functionality:
The RPC communication service, which is required for cluster communication
The Oracle Java Web Console service, which is required to use the Oracle Solaris Cluster Manager GUI
The following steps restore Oracle Solaris functionality that is used by the Oracle Solaris Cluster framework but which is prevented if a restricted network profile is used.
phys-schost# svccfg
svc:> select network/rpc/bind
svc:/network/rpc/bind> setprop config/local_only=false
svc:/network/rpc/bind> quit
phys-schost# svcadm refresh network/rpc/bind:default
phys-schost# svcprop network/rpc/bind:default | grep local_only
The output of the last command should show that the local_only property is now set to false.
phys-schost# svccfg
svc:> select system/webconsole
svc:/system/webconsole> setprop options/tcp_listen=true
svc:/system/webconsole> quit
phys-schost# /usr/sbin/smcwebserver restart
phys-schost# netstat -a | grep 6789
The output of the last command should return an entry for 6789, which is the port number that is used to connect to Oracle Java Web Console.
For more information about what services the restricted network profile restricts to local connections, see Planning Network Security in Oracle Solaris 10 1/13 Installation Guide: Planning for Installation and Upgrade.
Use the following command if you want to ensure that the installer program can display the GUI.
# ssh -X [-l root] nodename
If the volume management daemon vold(1M) is running and is configured to manage CD-ROM or DVD devices, the daemon automatically mounts the media on the /cdrom/cdrom0 directory.
If you are installing the software packages on the SPARC platform, type the following command:
phys-schost# cd /cdrom/cdrom0/Solaris_sparc
If you are installing the software packages on the x86 platform, type the following command:
phys-schost# cd /cdrom/cdrom0/Solaris_x86
phys-schost# ./installer
See the Sun Java Enterprise System 7 Installation and Upgrade Guide for additional information about using the different forms and features of the installer program.
If you do not want to install Oracle Solaris Cluster Manager, formerly SunPlex Manager, deselect it.
Note - You must install Oracle Solaris Cluster Manager either on all nodes of the cluster or on none.
If you want to install Oracle Solaris Cluster Geographic Edition software, select it.
After the cluster is established, see Oracle Solaris Cluster Geographic Edition Installation Guide for further installation procedures.
Choose Configure Later when prompted whether to configure Oracle Solaris Cluster framework software.
After installation is finished, you can view any available installation log.
phys-schost# eject cdrom
See Patches and Required Firmware Levels in Oracle Solaris Cluster 3.3 3/13 Release Notes for the location of patches and installation instructions.
This entry becomes effective after the next system reboot.
Next Steps
If you want to install Sun QFS file system software, follow the procedures for initial installation. See How to Install Sun QFS Software.
Otherwise, to set up the root user environment, go to How to Set Up the Root Environment.
Perform this procedure on each node in the global cluster.
See How to Install Oracle Solaris Cluster Framework and Data-Service Software Packages.
Follow procedures for initial installation in your Sun QFS documentation.
Next Steps
Set up the root user environment. Go to How to Set Up the Root Environment.
Note - In an Oracle Solaris Cluster configuration, user initialization files for the various shells must verify that they are run from an interactive shell. The files must verify this before they attempt to output to the terminal. Otherwise, unexpected behavior or interference with data services might occur. See Customizing a User’s Work Environment in Oracle Solaris Administration: Basic Administration for more information.
Perform this procedure on each node in the global cluster.
See your Oracle Solaris OS documentation, volume manager documentation, and other application documentation for additional file paths to set.
Next Steps
If you want to use the IP Filter feature of Oracle Solaris, go to How to Configure IP Filter.
Otherwise, configure Oracle Solaris Cluster software on the cluster nodes. Go to Establishing a New Global Cluster or New Global-Cluster Node.
Perform this procedure to configure the IP Filter feature of Oracle Solaris on the global cluster.
Note - Only use IP Filter with failover data services. The use of IP Filter with scalable data services is not supported.
For more information about the IP Filter feature, see Part IV, IP Security, in Oracle Solaris Administration: IP Services.
Before You Begin
Read the guidelines and restrictions to follow when you configure IP Filter in a cluster. See the “IP Filter” bullet item in Oracle Solaris OS Feature Restrictions.
Observe the following guidelines and requirements when you add filter rules to Oracle Solaris Cluster nodes.
In the ipf.conf file on each node, add rules to explicitly allow cluster interconnect traffic to pass unfiltered. Rules that are not interface specific are applied to all interfaces, including cluster interconnects. Ensure that traffic on these interfaces is not blocked mistakenly. If interconnect traffic is blocked, the IP Filter configuration interferes with cluster handshakes and infrastructure operations.
For example, suppose the following rules are currently used:
# Default block TCP/UDP unless some later rule overrides
block return-rst in proto tcp/udp from any to any

# Default block ping unless some later rule overrides
block return-rst in proto icmp all
To unblock cluster interconnect traffic, add the following rules. The subnets used are for example only. Derive the subnets to use by using the ifconfig interface command.
# Unblock cluster traffic on 172.16.0.128/25 subnet (physical interconnect)
pass in quick proto tcp/udp from 172.16.0.128/25 to any
pass out quick proto tcp/udp from 172.16.0.128/25 to any

# Unblock cluster traffic on 172.16.1.0/25 subnet (physical interconnect)
pass in quick proto tcp/udp from 172.16.1.0/25 to any
pass out quick proto tcp/udp from 172.16.1.0/25 to any

# Unblock cluster traffic on 172.16.4.0/23 (clprivnet0 subnet)
pass in quick proto tcp/udp from 172.16.4.0/23 to any
pass out quick proto tcp/udp from 172.16.4.0/23 to any
You can specify either the adapter name or the IP address for a cluster private network. For example, the following rule specifies a cluster private network by its adapter's name:
# Allow all traffic on cluster private networks.
pass in quick on e1000g1 all
…
Oracle Solaris Cluster software fails over network addresses from node to node. No special procedure or code is needed at the time of failover.
All filtering rules that reference IP addresses of logical hostname and shared address resources must be identical on all cluster nodes.
A rule on a standby node might reference an IP address that does not currently exist on that node. Such a rule is still part of the IP Filter active rule set and becomes effective when the node receives the address after a failover.
All filtering rules must be the same for all NICs in the same IPMP group. In other words, if a rule is interface-specific, the same rule must also exist for all other interfaces in the same IPMP group.
For more information about IP Filter rules, see the ipf(4) man page.
phys-schost# svcadm enable /network/ipfilter:default
Next Steps
Configure Oracle Solaris Cluster software on the cluster nodes. Go to Establishing a New Global Cluster or New Global-Cluster Node.