Oracle Solaris Cluster Software Installation Guide (Oracle Solaris Cluster 4.1)

Document Information

Preface

1.  Planning the Oracle Solaris Cluster Configuration

2.  Installing Software on Global-Cluster Nodes

Overview of Installing the Software

Installing the Software

How to Prepare for Cluster Software Installation

How to Install Oracle Solaris Software

How to Install pconsole Software on an Administrative Console

How to Install and Configure Oracle Solaris Cluster Quorum Server Software

How to Configure Internal Disk Mirroring

SPARC: How to Install Oracle VM Server for SPARC Software and Create Domains

How to Install Oracle Solaris Cluster Framework and Data Service Software Packages

How to Install the Availability Suite Feature of Oracle Solaris 11

How to Set Up the Root Environment

How to Configure IP Filter

3.  Establishing the Global Cluster

4.  Configuring Solaris Volume Manager Software

5.  Creating a Cluster File System

6.  Creating Zone Clusters

7.  Uninstalling Software From the Cluster

Index

Installing the Software

This section provides information and procedures to install software on the cluster nodes.

How to Prepare for Cluster Software Installation

  1. Ensure that the combination of hardware and software that you choose for your cluster is currently a supported Oracle Solaris Cluster configuration.
  2. Read the following manuals for information that can help you plan your cluster configuration and prepare your installation strategy.
  3. Have available all related documentation, including third-party documents.

    The following is a partial list of products whose documentation you might need to reference during cluster installation:

    • Oracle Solaris OS

    • Solaris Volume Manager software

    • Third-party applications

  4. Plan your cluster configuration.

    Use the planning guidelines in Chapter 1, Planning the Oracle Solaris Cluster Configuration and in the Oracle Solaris Cluster Data Services Planning and Administration Guide to determine how to install and configure your cluster.


    Caution - Plan your cluster installation completely. Identify requirements for all data services and third-party products before you begin Oracle Solaris and Oracle Solaris Cluster software installation. Failure to do so might result in installation errors that require you to completely reinstall the Oracle Solaris and Oracle Solaris Cluster software.


  5. Obtain all necessary updates for your cluster configuration.

    See Chapter 11, Updating Your Software, in Oracle Solaris Cluster System Administration Guide for installation instructions.

Next Steps

Install the Oracle Solaris OS. Go to How to Install Oracle Solaris Software.

How to Install Oracle Solaris Software

Use this procedure to install the Oracle Solaris OS on the following systems, as applicable to your cluster configuration:

Before You Begin

Perform the following tasks:

  1. Connect to the consoles of each node.
  2. Install the Oracle Solaris OS.

    Follow installation instructions in Installing Oracle Solaris 11.1 Systems.


    Note - You must install all nodes in a cluster with the same version of the Oracle Solaris OS.


    You can use any method that is normally used to install the Oracle Solaris software. During Oracle Solaris software installation, perform the following steps:

    1. (Cluster nodes) Choose Manual Layout to set up the file systems.
      • Specify a slice that is at least 20 Mbytes in size.
      • Create any other file system partitions that you need, as described in System Disk Partitions.
    2. (Cluster nodes) For ease of administration, set the same root password on each node.
  3. Ensure that the solaris publisher is valid.
    # pkg publisher
    PUBLISHER                           TYPE     STATUS   URI
    solaris                             origin   online   solaris-repository

    For information about setting the solaris publisher, see Adding and Updating Oracle Solaris 11.1 Software Packages.

  4. (Cluster nodes) If you will use role-based access control (RBAC) instead of the root role to access the cluster nodes, set up an RBAC role that provides authorization for all Oracle Solaris Cluster commands.

    This series of installation procedures requires the following Oracle Solaris Cluster RBAC authorizations if the user is not the root role:

    • solaris.cluster.modify

    • solaris.cluster.admin

    • solaris.cluster.read

    See Role-Based Access Control (Overview) in Oracle Solaris 11.1 Administration: Security Services for more information about using RBAC roles. See the Oracle Solaris Cluster man pages for the RBAC authorization that each Oracle Solaris Cluster subcommand requires.

  5. (Cluster nodes) If you are adding a node to an existing cluster, add mount points for cluster file systems to the new node.
    1. From the active cluster node, display the names of all cluster file systems.
      phys-schost-1# mount | grep global | egrep -v node@ | awk '{print $1}'
    2. On the new node, create a mount point for each cluster file system in the cluster.
      phys-schost-new# mkdir -p mountpoint

      For example, if the mount command returned the file system name /global/dg-schost-1, run mkdir -p /global/dg-schost-1 on the new node you are adding to the cluster.
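      For example, with the hypothetical node names phys-schost-1 and phys-schost-new, the exchange might look like the following:

      phys-schost-1# mount | grep global | egrep -v node@ | awk '{print $1}'
      /global/dg-schost-1
      phys-schost-new# mkdir -p /global/dg-schost-1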

  6. Install any required Oracle Solaris OS software updates and hardware-related firmware and updates.

    Include those updates for storage array support. Also download any needed firmware that is contained in the hardware updates.

    See Chapter 11, Updating Your Software, in Oracle Solaris Cluster System Administration Guide for installation instructions.

  7. x86: (Cluster nodes) Set the default boot file.

    The setting of this value enables you to reboot the node if you are unable to access a login prompt.

    grub edit> kernel /platform/i86pc/kernel/amd64/unix -B $ZFS-BOOTFS -k

    For more information, see How to Boot a System With the Kernel Debugger Enabled (kmdb) in Booting and Shutting Down Oracle Solaris on x86 Platforms.

  8. (Cluster nodes) Update the /etc/inet/hosts file on each node with all public IP addresses that are used in the cluster.

    Perform this step regardless of whether you are using a naming service.


    Note - During establishment of a new cluster or new cluster node, the scinstall utility automatically adds the public IP address of each node that is being configured to the /etc/inet/hosts file.
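    As a sketch, with hypothetical addresses and host names, the entries added to /etc/inet/hosts might resemble the following:

    # Cluster node public addresses (hypothetical values)
    192.168.10.11   phys-schost-1
    192.168.10.12   phys-schost-2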


  9. (Optional) (Cluster nodes) Configure public-network adapters in IPMP groups.

    If you do not want to use the multiple-adapter IPMP groups that the scinstall utility configures during cluster creation, configure custom IPMP groups as you would in a stand-alone system. See Chapter 6, Administering IPMP (Tasks), in Managing Oracle Solaris 11.1 Network Performance for details.

    During cluster creation, the scinstall utility configures each set of public-network adapters that use the same subnet and are not already configured in an IPMP group into a single multiple-adapter IPMP group. The scinstall utility ignores any existing IPMP groups.
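    The following is a minimal sketch of creating a custom IPMP group by hand, assuming hypothetical adapter names net0 and net1, a hypothetical group name sc_ipmp0, and a hypothetical data address:

    phys-schost# ipadm create-ip net0
    phys-schost# ipadm create-ip net1
    phys-schost# ipadm create-ipmp -i net0,net1 sc_ipmp0
    phys-schost# ipadm create-addr -T static -a 192.168.10.11/24 sc_ipmp0/v4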

  10. (Optional) (Cluster nodes) If the Oracle Solaris Cluster software is not already installed and you want to use Oracle Solaris I/O multipathing, enable multipathing on each node.

    Caution - If the Oracle Solaris Cluster software is already installed, do not issue this command. Running the stmsboot command on an active cluster node might cause Oracle Solaris services to go into the maintenance state. Instead, follow instructions in the stmsboot(1M) man page for using the stmsboot command in an Oracle Solaris Cluster environment.


    phys-schost# /usr/sbin/stmsboot -e
    -e

    Enables Oracle Solaris I/O multipathing.

    See How to Enable Multipathing in Oracle Solaris 11.1 Administration: SAN Configuration and Multipathing and the stmsboot(1M) man page for more information.

Next Steps

If you want to use the pconsole utility, go to How to Install pconsole Software on an Administrative Console.

If you want to use a quorum server, go to How to Install and Configure Oracle Solaris Cluster Quorum Server Software.

If your cluster nodes support the mirroring of internal hard drives and you want to configure internal disk mirroring, go to How to Configure Internal Disk Mirroring.

SPARC: If you want to install Oracle VM Server for SPARC, go to SPARC: How to Install Oracle VM Server for SPARC Software and Create Domains.

Otherwise, install the Oracle Solaris Cluster software on the cluster nodes.

See Also

See the Oracle Solaris Cluster System Administration Guide for procedures to perform dynamic reconfiguration tasks in an Oracle Solaris Cluster configuration.

How to Install pconsole Software on an Administrative Console


Note - You are not required to use an administrative console. If you do not use an administrative console, perform administrative tasks from one designated node in the cluster.

You cannot use this software to connect to Oracle VM Server for SPARC guest domains.


This procedure describes how to install the Parallel Console Access (pconsole) software on an administrative console. The pconsole utility is part of the Oracle Solaris 11 terminal/pconsole package.

The pconsole utility creates a host terminal window for each remote host that you specify on the command line. The utility also opens a central, or master, console window that you can use to send input to all nodes at one time. For additional information, see the pconsole(1) man page that is installed with the terminal/pconsole package.

You can use any desktop machine that runs a version of the Oracle Solaris OS that is supported by Oracle Solaris Cluster 4.1 software as an administrative console.

Before You Begin

Ensure that a supported version of the Oracle Solaris OS and any Oracle Solaris software updates are installed on the administrative console.

  1. Assume the root role on the administrative console.
  2. Ensure that the solaris and ha-cluster publishers are valid.
    # pkg publisher
    PUBLISHER                           TYPE     STATUS   URI
    solaris                             origin   online   solaris-repository
    ha-cluster                          origin   online   ha-cluster-repository

    For information about setting the solaris publisher, see Set the Publisher Origin to the File Repository URI in Copying and Creating Oracle Solaris 11.1 Package Repositories.

  3. Install the terminal/pconsole package.
    adminconsole# pkg install terminal/pconsole
  4. (Optional) Install the Oracle Solaris Cluster man page packages.
    adminconsole# pkg install pkgname

    Package Name                               Description
    ha-cluster/system/manual                   Oracle Solaris Cluster framework man pages
    ha-cluster/system/manual/data-services     Oracle Solaris Cluster data service man pages
    ha-cluster/service/quorum-server/manual    Oracle Solaris Cluster Quorum Server man pages
    ha-cluster/geo/manual                      Oracle Solaris Cluster Geographic Edition man pages

    When you install the Oracle Solaris Cluster man page packages on the administrative console, you can view them from the administrative console before you install Oracle Solaris Cluster software on the cluster nodes or on a quorum server.
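    For example, to install the framework man pages listed in the table above:

    adminconsole# pkg install ha-cluster/system/manual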

  5. (Optional) For convenience, set the directory paths on the administrative console.
    1. If you installed the ha-cluster/system/manual/data-services package, ensure that the /opt/SUNWcluster/bin/ directory is in the PATH.
    2. If you installed any other man page package, ensure that the /usr/cluster/bin/ directory is in the PATH.
  6. Start the pconsole utility.

    Specify in the command each node that you want to connect to.

    adminconsole# pconsole host[:port] […] &

    See the procedures Logging Into the Cluster Remotely in Oracle Solaris Cluster System Administration Guide and How to Connect Securely to Cluster Consoles in Oracle Solaris Cluster System Administration Guide for additional information about how to use the pconsole utility. Also see the pconsole(1) man page that is installed as part of the Oracle Solaris 11 terminal/pconsole package.
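    For example, to open console windows to three hypothetical cluster nodes:

    adminconsole# pconsole phys-schost-1 phys-schost-2 phys-schost-3 &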

Next Steps

If you want to use a quorum server, go to How to Install and Configure Oracle Solaris Cluster Quorum Server Software.

If your cluster nodes support the mirroring of internal hard drives and you want to configure internal disk mirroring, go to How to Configure Internal Disk Mirroring.

SPARC: If you want to install Oracle VM Server for SPARC, go to SPARC: How to Install Oracle VM Server for SPARC Software and Create Domains.

Otherwise, install the Oracle Solaris Cluster software on the cluster nodes.

How to Install and Configure Oracle Solaris Cluster Quorum Server Software

Perform this procedure to configure a host server as a quorum server.

Before You Begin

Perform the following tasks:

  1. Assume the root role on the machine on which you want to install the Oracle Solaris Cluster Quorum Server software.
  2. Ensure that the solaris and ha-cluster publishers are valid.
    # pkg publisher
    PUBLISHER                           TYPE     STATUS   URI
    solaris                             origin   online   solaris-repository
    ha-cluster                          origin   online   ha-cluster-repository

    For information about setting the solaris publisher, see Set the Publisher Origin to the File Repository URI in Copying and Creating Oracle Solaris 11.1 Package Repositories.

  3. Install the Quorum Server group package.
    quorumserver# pkg install ha-cluster-quorum-server-full
  4. (Optional) Add the Oracle Solaris Cluster Quorum Server binary location to your PATH environment variable.
    quorumserver# PATH=$PATH:/usr/cluster/bin
  5. Configure the quorum server by adding the following entry to the /etc/scqsd/scqsd.conf file.

    Identify the quorum server by specifying the port number and optionally the instance name.

    • If you provide an instance name, that name must be unique among your quorum servers.

    • If you do not provide an instance name, always refer to this quorum server by the port on which it listens.

    The format for the entry is as follows:

    /usr/cluster/lib/sc/scqsd [-d quorum-directory] [-i instance-name] -p port
    -d quorum-directory

    The path to the directory where the quorum server can store quorum data.

    The quorum server process creates one file per cluster in this directory to store cluster-specific quorum information.

    By default, the value of this option is /var/scqsd. This directory must be unique for each quorum server that you configure.

    -i instance-name

    A unique name that you choose for the quorum-server instance.

    -p port

    The port number on which the quorum server listens for requests from the cluster.

  6. (Optional) To serve more than one cluster but use a different port number or instance name, configure an additional entry for each additional instance of the quorum server that you need.
  7. Save and close the /etc/scqsd/scqsd.conf file.
  8. Start the newly configured quorum server.
    quorumserver# /usr/cluster/bin/clquorumserver start quorum-server
    quorum-server

    Identifies the quorum server. You can use the port number on which the quorum server listens. If you provided an instance name in the configuration file, you can use that name instead.

    • To start a single quorum server, provide either the instance name or the port number.

    • To start all quorum servers when you have multiple quorum servers configured, use the + operand.
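    For example, assuming the hypothetical port 9000 from the sketch above, or the + operand to start every configured instance:

    quorumserver# /usr/cluster/bin/clquorumserver start 9000
    quorumserver# /usr/cluster/bin/clquorumserver start +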

Troubleshooting

Oracle Solaris Cluster Quorum Server software consists of the following packages:

These packages are contained in the ha-cluster/group-package/ha-cluster-quorum-server-full and ha-cluster/group-package/ha-cluster-quorum-server-l10n group packages.

The installation of these packages adds software to the /usr/cluster and /etc/scqsd directories. You cannot modify the location of the Oracle Solaris Cluster Quorum Server software.

If you receive an installation error message regarding the Oracle Solaris Cluster Quorum Server software, verify that the packages were properly installed.

Next Steps

If your cluster nodes support the mirroring of internal hard drives and you want to configure internal disk mirroring, go to How to Configure Internal Disk Mirroring.

SPARC: If you want to install Oracle VM Server for SPARC, go to SPARC: How to Install Oracle VM Server for SPARC Software and Create Domains.

Otherwise, install the Oracle Solaris Cluster software on the cluster nodes.

How to Configure Internal Disk Mirroring

Perform this procedure on each node of the global cluster to configure internal hardware RAID disk mirroring to mirror the system disk. This procedure is optional.


Note - Do not perform this procedure under either of the following circumstances:

Instead, perform Mirroring Internal Disks on Servers that Use Internal Hardware Disk Mirroring or Integrated Mirroring in Oracle Solaris Cluster 4.1 Hardware Administration Manual.


Before You Begin

Ensure that the Oracle Solaris operating system and any necessary updates are installed.

  1. Assume the root role.
  2. Configure an internal mirror.
    phys-schost# raidctl -c c0t0d0 c0t1d0
    -c c0t0d0 c0t1d0

    Creates a mirror of the primary disk on the mirror disk. Provide the name of your primary disk as the first argument and the name of the mirror disk as the second argument.

    For specifics about how to configure your server's internal disk mirroring, refer to the documents that shipped with your server and the raidctl(1M) man page.

Next Steps

SPARC: If you want to install Oracle VM Server for SPARC, go to SPARC: How to Install Oracle VM Server for SPARC Software and Create Domains.

Otherwise, install the Oracle Solaris Cluster software on the cluster nodes.

SPARC: How to Install Oracle VM Server for SPARC Software and Create Domains

Perform this procedure to install Oracle VM Server for SPARC software on a physically clustered machine and to create I/O and guest domains.

Before You Begin

Perform the following tasks:

  1. Assume the root role on the machine.
  2. Install Oracle VM Server for SPARC software and configure domains by following the procedures in Chapter 2, Installing and Enabling Software, in Oracle VM Server for SPARC 2.1 Administration Guide.

    Observe the following special instructions:

    • If you create guest domains, adhere to the Oracle Solaris Cluster guidelines for creating guest domains in a cluster.

    • Use the mode=sc option for all virtual switch devices that connect the virtual network devices that are used as the cluster interconnect.

    • For shared storage, map only the full SCSI disks into the guest domains.

Next Steps

If your server supports the mirroring of internal hard drives and you want to configure internal disk mirroring, go to How to Configure Internal Disk Mirroring.

Otherwise, install the Oracle Solaris Cluster software packages. Go to How to Install Oracle Solaris Cluster Framework and Data Service Software Packages.

How to Install Oracle Solaris Cluster Framework and Data Service Software Packages

Follow this procedure to perform one or more of the following installation tasks:


Note - You cannot add or remove individual packages that are part of the ha-cluster-minimal group package except by complete reinstallation or uninstallation. See How to Unconfigure Oracle Solaris Cluster Software to Correct Installation Problems and How to Uninstall Oracle Solaris Cluster Software From a Cluster Node in Oracle Solaris Cluster System Administration Guide for procedures to remove the cluster framework packages.

However, you can add or remove other, optional packages without removing the ha-cluster-minimal group package.


Before You Begin

Perform the following tasks:

  1. If you are using a cluster administrative console, display a console screen for each node in the cluster.
    • If pconsole software is installed and configured on your administrative console, use the pconsole utility to display the individual console screens.

      As the root role, use the following command to start the pconsole utility:

      adminconsole# pconsole host[:port] […]  &

      The pconsole utility also opens a master window from which you can send your input to all individual console windows at the same time.

    • If you do not use the pconsole utility, connect to the consoles of each node individually.
  2. Restore external access to remote procedure call (RPC) communication.

    During the installation of the Oracle Solaris OS, a restricted network profile is used that disables external access for certain network services. The restricted services include the RPC communication service, which is required for cluster communication.

    Perform the following commands to restore external access to RPC communication.

    # svccfg
    svc:> select network/rpc/bind
    svc:/network/rpc/bind> setprop config/local_only=false
    svc:/network/rpc/bind> quit
    # svcadm refresh network/rpc/bind:default
    # svcprop network/rpc/bind:default | grep local_only

    The output of the last command should show that the local_only property is now set to false.
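    For example, the check might return output similar to the following:

    config/local_only boolean false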

  3. Assume the root role on the cluster node to install.

    Alternatively, if your user account is assigned the System Administrator profile, issue commands as nonroot through a profile shell, or prefix the command with the pfexec command.

  4. Disable Network Auto-Magic (NWAM).

    NWAM activates a single network interface and disables all others. For this reason, NWAM cannot coexist with the Oracle Solaris Cluster software and you must disable it before you configure or run your cluster. To disable NWAM, you enable the defaultfixed profile.

    # netadm enable -p ncp defaultfixed
    # netadm list -p ncp defaultfixed
  5. Set up the repository for the Oracle Solaris Cluster software packages.
    • If the cluster nodes have direct access or web proxy access to the Internet, perform the following steps.
      1. Go to http://pkg-register.oracle.com.
      2. Choose Oracle Solaris Cluster software.
      3. Accept the license.
      4. Request a new certificate by choosing Oracle Solaris Cluster software and submitting a request.

        The certification page is displayed with download buttons for the key and the certificate.

      5. Download the key and certificate files and install them as described in the returned certification page.
      6. Configure the ha-cluster publisher with the downloaded SSL keys and set the location of the Oracle Solaris Cluster 4.1 repository.

        In the following example the repository name is https://pkg.oracle.com/repository-location/.

        # pkg set-publisher \
        -k /var/pkg/ssl/Oracle_Solaris_Cluster_4.0.key.pem \
        -c /var/pkg/ssl/Oracle_Solaris_Cluster_4.0.certificate.pem \
        -O https://pkg.oracle.com/repository-location/ ha-cluster
        -k /var/pkg/ssl/Oracle_Solaris_Cluster_4.0.key.pem

        Specifies the full path to the downloaded SSL key file.

        -c /var/pkg/ssl/Oracle_Solaris_Cluster_4.0.certificate.pem

        Specifies the full path to the downloaded certificate file.

        -O https://pkg.oracle.com/repository-location/

        Specifies the URL to the Oracle Solaris Cluster 4.1 package repository.

        For more information, see the pkg(1) man page.

    • If you are using an ISO image of the software, perform the following steps.
      1. Download the Oracle Solaris Cluster 4.1 ISO image from Oracle Software Delivery Cloud at http://edelivery.oracle.com/.

        Note - A valid Oracle license is required to access Oracle Software Delivery Cloud.


        Oracle Solaris Cluster software is part of the Oracle Solaris Product Pack. Follow online instructions to complete selection of the media pack and download the software.

      2. Make the Oracle Solaris Cluster 4.1 ISO image available.
        # lofiadm -a path-to-iso-image 
        /dev/lofi/N
        # mount -F hsfs /dev/lofi/N /mnt
        -a path-to-iso-image

        Specifies the full path and file name of the ISO image.

      3. Set the location of the Oracle Solaris Cluster 4.1 package repository.
        # pkg set-publisher -g file:///mnt/repo ha-cluster
  6. Ensure that the solaris and ha-cluster publishers are valid.
    # pkg publisher
    PUBLISHER                           TYPE     STATUS   URI
    solaris                             origin   online   solaris-repository
    ha-cluster                          origin   online   ha-cluster-repository

    For information about setting the solaris publisher, see Set the Publisher Origin to the File Repository URI in Copying and Creating Oracle Solaris 11.1 Package Repositories.

  7. Install the Oracle Solaris Cluster 4.1 software.
    # /usr/bin/pkg install package
  8. Verify that the package installed successfully.
    $ pkg info -r package

    Package installation succeeded if the state is Installed.
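    For example, assuming you want the complete installation (the ha-cluster-full group package; substitute the group or individual packages that match your configuration):

    # /usr/bin/pkg install ha-cluster-full
    # pkg info -r ha-cluster-full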

  9. Perform any necessary updates to the Oracle Solaris Cluster software.

    See Chapter 11, Updating Your Software, in Oracle Solaris Cluster System Administration Guide for installation instructions.

Next Steps

If you want to use the Availability Suite feature of Oracle Solaris software, install the Availability Suite software. Go to How to Install the Availability Suite Feature of Oracle Solaris 11.

Otherwise, to set up the root user environment, go to How to Set Up the Root Environment.

How to Install the Availability Suite Feature of Oracle Solaris 11

Before You Begin

Ensure that a minimum of Oracle Solaris 11 SRU 1 is installed.

  1. Assume the root role.
  2. Ensure that the solaris publisher is valid.
    # pkg publisher
    PUBLISHER                           TYPE     STATUS   URI
    solaris                             origin   online   solaris-repository

    For information about setting the solaris publisher, see Set the Publisher Origin to the File Repository URI in Copying and Creating Oracle Solaris 11.1 Package Repositories.

  3. Install the IPS package for the Availability Suite feature of the Oracle Solaris 11 software.
    # /usr/bin/pkg install storage/avs
  4. Configure the Availability Suite feature.

    For details, see Initial Configuration Settings in Sun StorageTek Availability Suite 4.0 Software Installation and Configuration Guide.

Next Steps

To set up the root user environment, go to How to Set Up the Root Environment.

How to Set Up the Root Environment


Note - In an Oracle Solaris Cluster configuration, user initialization files for the various shells must verify that they are run from an interactive shell. The files must verify this before they attempt to output to the terminal. Otherwise, unexpected behavior or interference with data services might occur. See Customizing a User’s Work Environment in Managing User Accounts and User Environments in Oracle Solaris 11.1 for more information.


Perform this procedure on each node in the global cluster.

  1. Assume the root role on a cluster node.
  2. Add /usr/cluster/bin/ and /usr/sbin/ to the PATH.

    Note - Always make /usr/cluster/bin the first entry in the PATH. This placement ensures that Oracle Solaris Cluster commands take precedence over any other binaries that have the same name, thus avoiding unexpected behavior.


    See your Oracle Solaris OS documentation, volume manager documentation, and other application documentation for additional file paths to set.
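    A minimal sketch of the corresponding additions to the root user's .profile, with /usr/cluster/bin placed first and terminal output guarded by an interactive-shell test as the earlier note requires (the test and message are illustrative assumptions):

    PATH=/usr/cluster/bin:/usr/sbin:$PATH
    export PATH

    # Write to the terminal only when running interactively
    if [ -t 0 ]; then
        echo "Oracle Solaris Cluster environment loaded"
    fi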

  3. (Optional) For ease of administration, set the same root password on each node, if you have not already done so.

Next Steps

If you want to use the IP Filter feature of Oracle Solaris, go to How to Configure IP Filter.

Otherwise, configure Oracle Solaris Cluster software on the cluster nodes. Go to Establishing a New Global Cluster or New Global-Cluster Node.

How to Configure IP Filter

Perform this procedure to configure the IP Filter feature of Oracle Solaris software on the global cluster.


Note - Only use IP Filter with failover data services. The use of IP Filter with scalable data services is not supported.


For more information about the IP Filter feature, see Chapter 4, IP Filter in Oracle Solaris (Overview), in Securing the Network in Oracle Solaris 11.1.

Before You Begin

Read the guidelines and restrictions to follow when you configure IP Filter in a cluster. See the “IP Filter” bullet item in Oracle Solaris OS Feature Restrictions.

  1. Assume the root role.
  2. Add filter rules to the /etc/ipf/ipf.conf file on all affected nodes.

    Observe the following guidelines and requirements when you add filter rules to Oracle Solaris Cluster nodes.

    • In the ipf.conf file on each node, add rules to explicitly allow cluster interconnect traffic to pass unfiltered. Rules that are not interface specific are applied to all interfaces, including cluster interconnects. Ensure that traffic on these interfaces is not blocked mistakenly. If interconnect traffic is blocked, the IP Filter configuration interferes with cluster handshakes and infrastructure operations.

      For example, suppose the following rules are currently used:

      # Default block TCP/UDP unless some later rule overrides
      block return-rst in proto tcp/udp from any to any
      
      # Default block ping unless some later rule overrides
      block return-rst in proto icmp all

      To unblock cluster interconnect traffic, add the following rules. The subnets used are for example only. Derive the subnets to use by using the ipadm show-addr | grep interface command.

      # Unblock cluster traffic on 172.16.0.128/25 subnet (physical interconnect)
      pass in quick proto tcp/udp from 172.16.0.128/25 to any
      pass out quick proto tcp/udp from 172.16.0.128/25 to any
      
      # Unblock cluster traffic on 172.16.1.0/25 subnet (physical interconnect)
      pass in quick proto tcp/udp from 172.16.1.0/25 to any
      pass out quick proto tcp/udp from 172.16.1.0/25 to any
      
      # Unblock cluster traffic on 172.16.4.0/23 (clprivnet0 subnet)
      pass in quick proto tcp/udp from 172.16.4.0/23 to any
      pass out quick proto tcp/udp from 172.16.4.0/23 to any
    • You can specify either the adapter name or the IP address for a cluster private network. For example, the following rule specifies a cluster private network by its adapter's name:

      # Allow all traffic on cluster private networks.
      pass in quick on net1 all
      …
    • Oracle Solaris Cluster software fails over network addresses from node to node. No special procedure or code is needed at the time of failover.

    • All filtering rules that reference IP addresses of logical hostname and shared address resources must be identical on all cluster nodes.

    • Rules on a standby node reference an IP address that does not yet exist on that node. These rules are still part of the IP Filter active rule set and take effect when the node receives the address after a failover.

    • All filtering rules must be the same for all NICs in the same IPMP group. In other words, if a rule is interface-specific, the same rule must also exist for all other interfaces in the same IPMP group.

    For more information about IP Filter rules, see the ipf(4) man page.

  3. Enable the ipfilter SMF service.
    phys-schost# svcadm enable /network/ipfilter:default

Next Steps

Configure Oracle Solaris Cluster software on the cluster nodes. Go to Establishing a New Global Cluster or New Global-Cluster Node.