How to Configure Packet Filter

Read the guidelines and restrictions to follow when you configure the Packet Filter (PF) feature in a cluster. See the "Packet Filter (PF) Feature" bullet item in Oracle Solaris OS Feature Requirements and Restrictions.

Perform this procedure to configure the Packet Filter (PF) feature of Oracle Solaris software on the global cluster.

Note:

Only use PF with failover data services. The use of PF with scalable data services is not supported.

For more information about the PF feature, see Oracle Solaris Firewall in Securing the Network in Oracle Solaris 11.4.

  1. Assume the root role.
  2. Add filter rules to the /etc/firewall/pf.conf file on all affected nodes.

    Observe the following guidelines and requirements when you add filter rules to Oracle Solaris Cluster nodes.

    • In the pf.conf file on each node, add rules to explicitly allow cluster interconnect traffic to pass unfiltered. Rules that are not interface specific are applied to all interfaces, including cluster interconnects. Ensure that traffic on these interfaces is not mistakenly blocked. If interconnect traffic is blocked, the PF configuration will interfere with cluster membership and infrastructure operations.

      For example, suppose the following rules are currently used:

      # Default block { tcp, udp } unless some later rule overrides
      block return in proto { tcp, udp } from any to any
      
      # Default block ping unless some later rule overrides
      block return-icmp in proto icmp all
      
      # Allow traffic on localhost
      pass in quick to localhost
      pass out quick from localhost

      To unblock cluster interconnect traffic, add the following rules to the beginning of the pf.conf file. The subnets shown are examples only; determine the subnets to use by running the ipadm show-addr | grep interface command.

      # clintr status
      === Cluster Transport Paths ===
      
          Endpoint1          Endpoint2          Status
          ---------          ---------          ------
          node1:net1         node2:net1         Path online
          node1:net2         node2:net2         Path online 
      # ipadm show-addr | egrep "net1|net2|clprivnet"
      net1/?            static   ok           172.16.0.65/26
      net2/?            static   ok           172.16.0.129/26
      clprivnet0/?      static   ok           172.16.2.1/24

      The first interconnect net1 adapter is on subnet 172.16.0.64/26.

      The second interconnect net2 adapter is on subnet 172.16.0.128/26.

      The private network interface clprivnet0 is on subnet 172.16.2.0/24.
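      The mapping from each address to its subnet can be double-checked with a quick calculation. The following sketch uses the python3 standard ipaddress module purely as a convenience for the arithmetic; it is not part of the cluster procedure, and the addresses are the example values shown above.

      ```shell
      # Compute the network that contains each address reported by
      # ipadm show-addr (example addresses from the output above).
      for addr in 172.16.0.65/26 172.16.0.129/26 172.16.2.1/24; do
        python3 -c "import ipaddress; print(ipaddress.ip_interface('$addr').network)"
      done
      # Prints:
      # 172.16.0.64/26
      # 172.16.0.128/26
      # 172.16.2.0/24
      ```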

      The PF rules that correspond to the derived networks are:

      # Unblock cluster traffic on 172.16.0.64/26 subnet (physical interconnect)
      pass in quick proto { tcp, udp } from 172.16.0.64/26 to any flags any
      pass out quick proto { tcp, udp } from 172.16.0.64/26 to any flags any
      
      # Unblock cluster traffic on 172.16.0.128/26 subnet (physical interconnect)
      pass in quick proto { tcp, udp } from 172.16.0.128/26 to any flags any
      pass out quick proto { tcp, udp } from 172.16.0.128/26 to any flags any
      
      # Unblock cluster traffic on 172.16.2.0/24 (clprivnet0 subnet)
      pass in quick proto { tcp, udp } from 172.16.2.0/24 to any flags any
      pass out quick proto { tcp, udp } from 172.16.2.0/24 to any flags any
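      Because these rules use the quick keyword, they must appear before any block rules to take effect. As a sketch of one way to stage the change (the file path is the standard Solaris PF location; the pfctl and svcadm steps are shown commented out because they require root on a cluster node), the new pass rules can be prepended to the existing configuration:

      ```shell
      # Stage the cluster pass rules in a temporary file
      # (example subnet from the output above).
      cat > /tmp/cluster-rules.conf <<'EOF'
      # Unblock cluster traffic on 172.16.0.64/26 subnet (physical interconnect)
      pass in quick proto { tcp, udp } from 172.16.0.64/26 to any flags any
      pass out quick proto { tcp, udp } from 172.16.0.64/26 to any flags any
      EOF

      # Prepend the staged rules to the existing configuration.
      cat /tmp/cluster-rules.conf /etc/firewall/pf.conf > /tmp/pf.conf.new 2>/dev/null || true

      # Review /tmp/pf.conf.new, then (as root) activate it:
      # mv /tmp/pf.conf.new /etc/firewall/pf.conf
      # pfctl -nf /etc/firewall/pf.conf                # syntax check only
      # svcadm refresh svc:/network/firewall:default   # reload the rules
      ```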
    • You can specify either the adapter name or the IP address for a cluster private network. For example, the following rule specifies a cluster private network by its adapter's name:

      # Allow all traffic on cluster private networks.
      pass in quick on net1 all flags any
      pass in quick on net2 all flags any
      pass in quick on clprivnet0 all flags any
    • Oracle Solaris Cluster software fails over network addresses from node to node. No special procedure or code is needed at the time of failover.

    • All filtering rules that reference IP addresses of logical hostname and shared address resources must be identical on all cluster nodes.

    • On a standby node, such rules reference an IP address that does not currently exist on that node. These rules are still part of the PF active rule set and become effective when the node receives the address after a failover.

    • All filtering rules must be the same for all NICs in the same IPMP group. In other words, if a rule is interface-specific, the same rule must also exist for all other interfaces in the same IPMP group.
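      To illustrate, suppose an IPMP group has two member interfaces, hypothetically named net3 and net4 (these names are not from the example cluster above). An interface-specific rule written for one member must be repeated for the other:

      ```
      # Hypothetical IPMP group with members net3 and net4:
      # an interface-specific rule must exist for every member.
      pass in quick on net3 proto tcp from any to any port 22
      pass in quick on net4 proto tcp from any to any port 22
      ```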

    • If PF is already enabled on the cluster nodes, add all cluster-specific filtering rules before you configure Oracle Solaris Cluster software.

    • Open all ports that are used by your Oracle Solaris Cluster configuration. For instance, if you run the Oracle Solaris Cluster disaster recovery framework, ensure that you open all ports that are used by the framework. See the ports described in Configuring Firewalls in Installing and Configuring the Disaster Recovery Framework for Oracle Solaris Cluster 4.4.
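      As one hedged sketch of how such ports could be opened, a pf.conf macro can collect the port list (the ports below are illustrative placeholders, not the actual port list of any Oracle Solaris Cluster component; consult the referenced documentation for the real ports):

      ```
      # Illustrative port list only -- substitute the ports documented
      # for your Oracle Solaris Cluster configuration.
      cluster_ports = "{ 2084, 11161, 11162 }"
      pass in quick proto tcp from any to any port $cluster_ports
      ```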

    • When you enable a firewall inside an exclusive IP zone cluster, perform this procedure on each zone cluster node.

    • A shared IP zone cluster shares the private interconnect with the global zone. Add the clprivnet private network interface that is used in the shared IP zone cluster node to the packet filter rules on the global zone.
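      For example, using the clprivnet0 interface name from the earlier output, the pf.conf file in the global zone would carry a rule such as:

      ```
      # Global zone: allow traffic on the clprivnet interface that the
      # shared IP zone cluster uses.
      pass in quick on clprivnet0 all flags any
      ```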

    For more information about PF rules, see the pf.conf(7) man page.

Example 2-1 Using an Exclusive IP Zone Cluster

This example shows how to configure an exclusive IP zone cluster.

# clnode status -m
--- Node Public Network Status ---

Node Name      PNM Object Name   Status   Adapter         Status
---------      ---------------   ------   -------         ------
node1          sc_ipmp0          Online   scld02zc2pub1   Online
node2          sc_ipmp0          Online   scld02zc2pub1   Online

# clintr status
=== Cluster Transport Paths ===

Endpoint1              Endpoint2              Status
---------              ---------              ------
node2:scld02zc2priv2   node1:scld02zc2priv2   Path online
node2:scld02zc2priv1   node1:scld02zc2priv1   Path online

# ipadm show-addr | egrep "scld02zc2priv1|scld02zc2priv2|clprivnet2"
scld02zc2priv1/?  static   ok           172.18.4.66/26
scld02zc2priv2/?  static   ok           172.18.4.130/26
clprivnet2/?      static   ok           172.18.4.2/26
# grep -v ^# /etc/firewall/pf.conf | grep -v ^$
set reassemble yes no-df
set skip on lo0
ext_if="sc_ipmp0"
client_out="{22, 2084, 5201, 111, 8059, 8060, 8061, 8062, 6499, 11161, 11162, 11163, 11164, 11165}"
pass in quick on scld02zc2priv1 all flags any
pass in quick on scld02zc2priv2 all flags any
pass in quick on clprivnet2 all flags any
block in log quick on egress proto tcp to port { 22 }
block return log all
pass in log proto tcp from any to any port 22 <> 23
pass out log proto tcp from any to any
pass in log proto udp from any to any
pass out inet proto icmp all icmp-type echoreq keep state
pass in log proto icmp from any to any
pass out on $ext_if proto udp all
pass out
#

Next Steps

Configure Oracle Solaris Cluster software on the cluster nodes. Go to Establishing a New Global Cluster or New Global-Cluster Node.