Oracle® ZFS Storage Appliance Administration Guide

Configuring Clustering

  1. Connect power and at least one Ethernet cable to each appliance.
  2. Cable together the cluster interconnect controllers as described below under Node Cabling. You can also proceed with cluster setup and add these cables dynamically during the setup process.
  3. Cable together the HBAs to the shared JBOD(s) as shown in the JBOD Cabling diagrams in the setup poster that came with your appliance.
  4. Power on both appliances, but do not begin configuration. Select one of the two appliances from which you will perform configuration; the choice is arbitrary. This appliance is referred to as the primary appliance for configuration purposes. Connect to the serial console of that appliance and perform the initial tty-based configuration in the same manner as for a standalone appliance. Note: Do not perform the initial tty-based configuration on the secondary appliance; it is automatically configured during cluster setup.
  5. On the primary appliance, enter either the BUI or CLI to begin cluster setup (a CLI sketch follows this procedure). Cluster setup can be selected as part of initial setup if the cluster interconnect controller has been installed. Alternatively, you can perform standalone configuration now and defer cluster setup until later. In the latter case, perform the cluster configuration task by clicking the Setup button on the Configuration > Cluster screen.
  6. At the first step of cluster setup, you will be shown a diagram of the active cluster links: you should see three solid blue wires on the screen, one for each connection. If you don't, add the missing cables now. Once you see all three wires, you are ready to proceed by clicking the Commit button.
  7. Enter the appliance name and initial root password for the second appliance (this is equivalent to performing the initial serial console setup for the new appliance). When you click the Commit button, progress bars will appear as the second appliance is configured.
  8. If you are setting up clustering as part of initial setup of the primary appliance, you will now be prompted to perform initial configuration as you would in the single-appliance case. All configuration changes you make are propagated automatically to the other appliance. Proceed with initial configuration, taking into consideration the following restrictions and caveats (a CLI sketch of static addressing and routing follows this procedure):
     - Network interfaces configured via DHCP cannot be failed over between heads and therefore cannot be used by clients to access storage. Be sure to assign static IP addresses to any network interfaces that clients will use to access storage. If you selected a DHCP-configured network interface during tty-based initial configuration and you wish to use that interface for client access, change its address type to Static before proceeding.
     - Best practice is to configure and assign a private network interface for administration to each head, which enables administration over the network (BUI or CLI) via either head regardless of the cluster state.
     - If routes are needed, be sure to create a route on an interface that will be assigned to each head. See the previous section for a specific example.
  9. Proceed with initial configuration until you reach the storage pool step. Each storage pool, along with the network interfaces clients use to reach it, can be taken over by the cluster peer when takeover occurs. If you create two storage pools, each head will normally provide clients with access to the pool assigned to it; if one of the heads fails, the other will provide clients with access to both pools. If you create a single pool, the head that is not assigned a pool will provide service to clients only when its peer has failed. Storage pools are assigned to heads at the time you create them; the storage configuration dialog offers the option of creating a pool assigned to each head independently. The smallest unit of storage that may be assigned to a pool is one disk. If you create multiple pools, they are not required to be the same size. Note that fewer pools with more disks per pool are preferred because they simplify management and provide a higher percentage of overall usable capacity. It is recommended that each pool include a minimum of 8 disks, and ideally more, across all JBODs.
  10. After completing basic configuration, you will have an opportunity to assign resources to each head. Typically, you will need to assign only network interfaces; storage pools were automatically assigned during the storage configuration step.
  11. Commit the resource assignments and perform the initial failback from the Cluster User Interface, described below (a CLI sketch of resource assignment and failback also follows this procedure). If you are still executing initial setup of the primary appliance, this screen appears as the last in the setup sequence. If you are executing cluster setup manually after an initial setup, go to the Configuration > Cluster screen to perform these tasks. Refer to Cluster User Interface below for details.
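
The following sketch shows how cluster setup might be started from the CLI instead of the BUI (see step 5). The hostname node-a is a placeholder, and the exact prompts and output vary by software release; see Configuring Clustering Using the CLI for the authoritative procedure. The setup command walks through the same stages as the BUI wizard: verifying the three cluster I/O links, then naming the peer appliance and setting its initial root password.

    node-a:> configuration cluster
    node-a:configuration cluster> setup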
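
The next sketch shows one way the static-addressing and routing caveats in step 8 could be handled from the CLI. The hostname, the igb1 interface name, and the example addresses are placeholders, and the v4dhcp, v4addrs, and routing properties shown here are assumptions to verify against the Network Configuration chapter for your software release. The two fragments are separate command sequences, each started from the top-level prompt.

    node-a:> configuration net interfaces select igb1
    node-a:configuration net interfaces igb1> set v4dhcp=false
    node-a:configuration net interfaces igb1> set v4addrs=192.0.2.10/24
    node-a:configuration net interfaces igb1> commit

    node-a:> configuration net routing create
    node-a:configuration net routing (uncommitted)> set family=IPv4
    node-a:configuration net routing (uncommitted)> set destination=0.0.0.0
    node-a:configuration net routing (uncommitted)> set mask=0
    node-a:configuration net routing (uncommitted)> set interface=igb1
    node-a:configuration net routing (uncommitted)> set gateway=192.0.2.1
    node-a:configuration net routing (uncommitted)> commit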
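
Finally, a sketch of how the resource assignment and initial failback in steps 10 and 11 might look from the CLI. The net/igb1 resource name and the node-b owner value are placeholders; list the actual resources with show, and see Configuring Clustering Using the CLI and Cluster Takeover and Failback for the commands supported in your release.

    node-a:> configuration cluster resources show
    node-a:> configuration cluster resources select net/igb1
    node-a:configuration cluster resources net/igb1> set owner=node-b
    node-a:configuration cluster resources net/igb1> commit

    node-a:> configuration cluster failback
    node-a:> configuration cluster show

Failback hands the resources assigned to the peer head back to that head, so that each head serves the pool and interfaces assigned to it; the final show command displays the resulting cluster state.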