

Network IP MultiPathing (IPMP)

IP MultiPathing groups are used to provide IP addresses that will remain available in the event of an IP interface failure (such as a physical wire disconnection or a failure of the connection between a network device and its switch) or in the event of a path failure between the system and its network gateways. The system detects failures by monitoring the IP interface's underlying datalink for link-up and link-down notifications, and optionally by probing using test addresses that can be assigned to each IP interface in the group, described below. Any number of IP interfaces can be placed into an IPMP group so long as they are all on the same link (LAN, IB partition, or VLAN), and any number of highly-available addresses can be assigned to an IPMP group.
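
The membership rules above can be pictured with a small model: all member interfaces must sit on the same link, and the highly-available addresses belong to the group rather than to any one interface. The Python sketch below is purely illustrative; the class, field, and interface names are invented for this example and are not part of the appliance software.

    from dataclasses import dataclass, field

    @dataclass
    class IpmpGroup:
        name: str
        link: str = None                               # LAN, IB partition, or VLAN shared by all members
        members: list = field(default_factory=list)    # (interface, role) pairs; role is "active" or "standby"
        addresses: list = field(default_factory=list)  # highly-available addresses owned by the group

        def add_member(self, interface, link, role):
            # Every IP interface placed in the group must be on the same link.
            if self.link is None:
                self.link = link
            elif link != self.link:
                raise ValueError(f"{interface} is on {link}, but the group uses {self.link}")
            self.members.append((interface, role))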

Each IP interface in an IPMP group is designated either active or standby: an active interface carries traffic whenever IPMP considers it functional, while a standby interface carries traffic only if an active interface (or a previously activated standby) fails.

Multiple active and standby IP interfaces can be configured, but each IPMP group must be configured with at least one active IP interface. IPMP will strive to activate as many standbys as necessary to preserve the configured number of active interfaces. For example, if an IPMP group is configured with two active interfaces and two standby interfaces and all interfaces are functioning correctly, only the two active interfaces will be used to send and receive data. If an active interface fails, one of the standby interfaces will be activated. If the other active interface fails (or the activated standby fails), the second standby interface will be activated. If the active interfaces are subsequently repaired, the standby interfaces will again be deactivated.
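
As a rough illustration of this activation policy, the sketch below keeps as many interfaces carrying traffic as there are configured active interfaces, promoting standbys only when needed. The interface names and the select_carriers helper are hypothetical and only mirror the behavior described above; they are not appliance code.

    def select_carriers(active, standby, healthy):
        """Return the interfaces that should carry traffic right now."""
        target = len(active)                           # preserve the configured number of active interfaces
        carriers = [i for i in active if i in healthy]
        for s in standby:                              # activate standbys only as needed
            if len(carriers) >= target:
                break
            if s in healthy:
                carriers.append(s)
        return carriers

    # Two active and two standby interfaces; one active has failed, so one standby is activated.
    print(select_carriers(["igb0", "igb1"], ["igb2", "igb3"],
                          healthy={"igb1", "igb2", "igb3"}))    # ['igb1', 'igb2']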

IP interface failures can be discovered either by link-based detection or by probe-based detection (which requires a test address to be configured on the interface).

If probe-based failure detection is enabled on an IP interface, the system will dynamically determine which target systems to probe. First, the routing table will be scanned for gateways (routers) on the same subnet as the IP interface's test address, and up to five will be selected. If no gateways on the same subnet are found, the system will send a multicast ICMP probe (to 224.0.0.1 for IPv4 or ff02::1 for IPv6) and select the first five systems on the same subnet that respond. Therefore, for network failure detection and repair using IPMP, make sure that at least one neighbor on each link or the default gateway responds to ICMP echo requests. IPMP works with both IPv4 and IPv6 address configurations; in the case of IPv6, the interface's link-local address is used as the test address.


Note -  Do not use probe-based failure detection when there are no systems (other than the cluster peer) on the same subnet as the IPMP test addresses that are configured to answer ICMP echo requests.
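
Putting the selection rules together, the procedure can be paraphrased as in the sketch below. The gateways_on_test_subnet and multicast_responders arguments are stand-ins for the routing-table scan and the multicast ICMP probe described above; they are illustrative only, not appliance interfaces.

    MAX_TARGETS = 5

    def select_probe_targets(gateways_on_test_subnet, multicast_responders):
        """Choose probe targets for an IP interface with probe-based detection enabled.

        gateways_on_test_subnet -- routers from the routing table on the test address's subnet
        multicast_responders    -- hosts that answered the multicast ICMP probe
                                   (224.0.0.1 for IPv4, ff02::1 for IPv6)
        """
        if gateways_on_test_subnet:
            return gateways_on_test_subnet[:MAX_TARGETS]   # prefer gateways, up to five
        return multicast_responders[:MAX_TARGETS]          # otherwise the first five responders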

The system will probe the selected target systems in round-robin fashion. If five consecutive probes are unanswered, the IP interface will be considered failed. Conversely, if ten consecutive probes are answered, the system will consider a previously failed IP interface repaired. You can set the system's IPMP probe failure detection time from the IPMP screen. This time indirectly controls the probing rate and the repair interval: for instance, a failure detection time of 10 seconds means that the system will send probes at roughly two-second intervals and will need 20 seconds to detect a probe-based interface repair. You cannot directly control which target systems are selected, although the selection can be influenced indirectly through the routing table.
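
The arithmetic behind that example, assuming the five-probe failure and ten-probe repair thresholds described above, works out as follows:

    FAIL_PROBES = 5       # consecutive unanswered probes before an interface is considered failed
    REPAIR_PROBES = 10    # consecutive answered probes before it is considered repaired

    def probe_timing(failure_detection_time):
        interval = failure_detection_time / FAIL_PROBES    # approximate interval between probes
        repair_time = interval * REPAIR_PROBES             # time needed to detect a repair
        return interval, repair_time

    # A 10-second failure detection time gives ~2-second probes and ~20 seconds to detect a repair.
    print(probe_timing(10.0))    # (2.0, 20.0)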

The system will monitor the routing table and automatically adjust its selected target systems as necessary. For instance, if the system is using multicast-discovered targets but a route is subsequently added that has a gateway on the same subnet as the IP interface's test address, the system will automatically switch to probing the gateway. Similarly, if multicast-discovered targets are being probed, the system will periodically refresh its set of chosen targets (for example, because some previously selected targets have become unresponsive).

For step-by-step instructions on building IPMP groups, see Network Configuration Tasks Using the BUI.

For information about private local interfaces, see Chapter 10, Cluster Configuration.