StorageTek Virtual Library Extension Planning Guide

E41530-04
3 VLE Planning

This chapter provides information about VLE planning topics.

Satisfying Mainframe Host Software Requirements

For ELS 7.2, support for VLE 1.4 is included in the base level. For ELS 7.0 and 7.1, SMP/E RECEIVE the latest HOLDDATA and the PTFs described in Table 3-1, then SMP/E APPLY them with GROUPEXTEND.

Table 3-1 ELS Supporting PTFs for VLE

ELS 7.0: L1H16C1, L1H1672

ELS 7.1: L1H16J6, L1H1674
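
For instance, the SMP/E steps might look like the following sketch. This is an illustration rather than text from the ELS installation documentation: the target zone name (ELS7TGT) is a placeholder for your site's zone, and you can equally use SELECT with the specific PTF numbers from Table 3-1 instead of PTFS.

SET     BDY(GLOBAL) .
RECEIVE SYSMODS HOLDDATA .
SET     BDY(ELS7TGT) .      /* site-specific target zone */
APPLY   PTFS GROUPEXTEND CHECK .
APPLY   PTFS GROUPEXTEND .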


Satisfying Network Infrastructure Requirements

If possible, complete any configuration of IP addresses, network switches for VLANs, and other setup (running cables, and so forth) before the VLE arrives to minimize installation time. Ensure that the network is ready for connection to the VLE as follows:

  • Gigabit Ethernet protocol is required on all network switches and routers that are directly attached to VSM5 IFF cards. The IFF card negotiates speed only to 1 Gb.

  • Switches and routers should support jumbo (MTU=9000) frames for best performance. If the network cannot handle jumbo frames, turn off this capability at the VTSS.


    Note:

    If jumbo frames are enabled then all switches, hubs, or patch panels (including the VLAN and the port channel) between the VLE and its target component must also have jumbo frames enabled.

  • Check that you are using the proper (customer-supplied) 1GigE Ethernet cables:

    • CAT5 cables and below are not acceptable for GigE transmission.

    • CAT5E cable: 90 meters is acceptable if run through a patch panel, 100 meters if straight cable.

    • CAT6 cable: 100 meters is acceptable regardless of patch panel configuration.

  • If a switch or router is used, StorageTek recommends at least two switches or routers at each location so that the loss of one unit does not bring down the whole configuration.

  • Only one TCP/IP connection is required between a VTSS and a VLE. However, for redundancy, StorageTek strongly recommends a total of four connections between the VTSS and VLE, with each VTSS connection using a separate IP address. Each TCP/IP connection from a specific VTSS to a specific VLE should go to a separate VLE interface. If you connect all the VTSS connections to the same VLE interface, that interface becomes a single point of failure.

    In a VLE multi-node system, the VTSS connections should be spread evenly across all nodes. For example, in a two-node VLE, place two VTSS connections on node 1 and the other two on node 2. On a four-node VLE, one VTSS connection to each node is recommended. If a switch sits between the VTSS and the VLE, all four connections can reach each node of a four-node VLE. Because each VTSS connection represents four drives, one drive from each connection goes to each node, for a total of four drives per node on a four-node VLE.

    IP addresses, however, must never be duplicated on separate nodes in the VLE for UUI or VTSS. For example, if you have a UUI connection of 192.168.1.1 going to node 1, then do not make a UUI connection on another node using 192.168.1.1 as the IP address! Additionally, if possible, you should never have two interfaces on the same node within the same subnet when configuring IP addresses.

  • Similarly, only one UUI connection is required between a VLE and the host, but two are recommended for redundancy, preferably using two independent network paths. Note that these network paths are separate from the connections to the VTSS. For VLE multi-node configurations, if there are multiple UUI connections, make them from separate nodes in the VLE.
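
To illustrate the UUI redundancy just described, the host definitions for two UUI paths to a two-node VLE might look like the following sketch. This is an assumption-laden example, not text from the ELS reference: the server names, IP addresses, and port value are illustrative, and the exact SMC SERVER operand spellings (IP, PORT) should be confirmed in the ELS Command, Control Statement, and Utility Reference.

STORMNGR NAME(VLESERV1)
SERVER   NAME(VLE1NOD1) STORMNGR(VLESERV1) IP(192.168.1.10) PORT(60000)
SERVER   NAME(VLE1NOD2) STORMNGR(VLESERV1) IP(192.168.2.10) PORT(60000)

Defining the two SERVER paths against different VLE nodes (and, ideally, different network paths) means the loss of one node or link does not remove UUI access to the VLE.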

Satisfying Oracle Switch Hardware Requirements

The Oracle switch is required for VLEs with three or more nodes, and can be used for two-node VLEs.


Note:

Your VLE is shipped with the following components:

  • The dual-port 10GigE NIC cards shown in Figure 3-4. These cards have the pluggable fiber transceivers installed.

  • VLEs are shipped with a 1M fiber optic cable connecting ixgbe0 to ixgbe2. For single-node systems, leave this cable connected. VLEs are also shipped with two 25M fiber optic cables connected to ixgbe1 and ixgbe3 (the free ends are affixed to the rack). For single-node systems, leave the free ends of the 25M fiber optic cables affixed to the rack. If you are going to use ixgbe1 and ixgbe3 for VLE-to-VLE connections, remove the 25M cables from ixgbe1 and ixgbe3 to make these ports available. If you are making multi-node connections, remove the 1M cable from ixgbe0 and ixgbe2 and the 25M cables from ixgbe1 and ixgbe3 to make these ports available. You can then use the 25M cables for node-to-node connections.

Order the switch, part X2074A-R, to install in a Sun Rack II cabinet.

For each of the first four VLE Nodes that connect through the switch, order two each of the following:

  • SFP #X2129A-N

  • The appropriate length of LC/LC fibre optic cable, which must be OM3, 850nm, multi-mode, with a maximum length (including patch panels) of 35M. Two cables per VLE are required to connect to the switch.

For VLE nodes 5 through 7 in the network, in addition to the above, order two total (not two per node) of the following:

  • X2124A-N QSFP parallel fiber optics short wave transceivers.

  • X2127A-10M QSFP optical cable splitters. Each splitter is a single 10M cable that splits after 9M into four one-meter legs, each with an LC connector on the end. For the fifth VLE node, you can plug one of these legs directly into the node. Because the overall cable length is 10M, that VLE node must be closer to the switch than VLE nodes connected by a 35M (or shorter) cable.

For each of nodes 6-7 (or if you need more cable length for node 5), ensure that you have two of the following:

  • LC/LC couplers 10800160-N Spare: LC DUPLEX COUPLING RECEPTACLES.

  • LC/LC fibre optic cables no more than 25 meters in length, OM3, 850nm, multimode. The limit is 25 meters because the cables must connect to the QSFP cable, which is 10M, using the couplers.

Satisfying Serviceability Requirements

The VLE product uses a standard Oracle service strategy common with other Oracle products. The VLE uses Auto Service Request (ASR) as the outgoing event notification interface to notify Oracle Support that an event has occurred on the VLE and the system may require service. Additionally, in combination with ASR, the VLE sends an outgoing email containing details about the ASR event and a Support File Bundle with the VLE log information needed to investigate it.

The advantages of ASR functionality are well documented in the ASR FAQ available on the My Oracle Support site (https://support.oracle.com/CSP/ui/flash.html) in Knowledge Article Doc ID 1285574.1.

Oracle's expectation is that the VLE will be configured to allow outgoing ASR and email communication with Oracle Support. To support VLE outgoing ASR notifications, the customer will need to supply the information in Table 3-2 to the installing Oracle Field Engineer.

Table 3-2 CAM Configuration Information

Configuration Value: Example

General Configuration - Site Information

  Company Name: Company Inc
  Site Name: Site A
  City: AnyTown

General Configuration - Contact Information

  First Name: Joe
  Last Name: Companyperson
  Contact email: joecompanyperson@company.com

Auto Service Request (ASR) Setup - Oracle Online Account Information

  Customer Oracle CSI Login Name: joecompanyperson@company.com
  Customer Oracle CSI Login Password: ********

Auto Service Request (ASR) Setup - Internet Connection Settings (Optional)

  Proxy Host Name: web-proxy.company.com
  Proxy Port: 8080
  Proxy Authentication - User Name
  Proxy Authentication - Password


Note:

In Table 3-2, some fields are not required if a proxy server is not being used or if it does not require an ID and password. If the customer will not provide the CSI email ID and password, then the customer can enter it directly during the install process. ASR registration takes place during the CAM configuration portion of the VLE install. During this part of the install, the VLE will register itself on the Oracle servers as an ASR qualified product.

The customer is then required to log into My Oracle Support (MOS) and approve the registration of the VLE. Until this approval is completed by the customer, the VLE is not capable of auto-generating cases through MOS.


For email notification of event and log information, the customer must also supply the information in Table 3-3. If the email server does not require a user name and password, these fields can remain blank.

Table 3-3 Notification Setup - Email Configuration Options / ConfCollectStatus

Configuration Value: Example

  Email Configuration - SMTP Server Name: SMTP.company.com
  Email Configuration - SMTP Server User Name
  Email Configuration - SMTP Server User Password
  Email Recipients: vle@invisiblestorage.com and others as needed


In cases where outgoing communication steps are not completed at the time of installation, or are not allowed at all, Oracle's options for timely response to events that require support from the Oracle Service team are greatly reduced. The VLE can be configured to send email containing event and log information directly to a designated customer internal email address. A recipient of this email can then initiate a service request directly with Oracle and forward any emails received from the VLE to Oracle Support. In this case, the customer must supply the email address where VLE emails are sent, and this address must be able to accept emails of up to 5 MB.

ASR Configuration

By default, the VLE sends ASRs through the igb0 port. The site's mail server is used to send the ASR alerts and the VLE support file bundles. When configuring CAM to send ASRs, the customer either provides the Oracle CSI email address and password in advance or enters this information directly into the CAM GUI at the time the CAM Configuration Procedure is performed.

Determining VLE Configuration Values

The following sections tell how to determine configuration values for the VLE.


Note:

As noted in the following sections, several software configuration values must match values initially set during configuration of the VLE. Use the IP_and_VMVC_Configuration.xls worksheet to record these values so you can pass them on to the personnel who will configure the VLE and the host software.

Determining Values for the Configuration Scripts

To configure the network for VLE, you run the configure_vle script on each node in a multi-node system (or the only node in a single-node system).

In Figure 3-1:

1 - VLE name from configure_vle installation script run on each node

2 - Node name entered as "hostname" for this node in the configure_vle installation script

Figure 3-1 VLE Name, VLE Number and Node Name


VLE Name and VLE Number

Each VLE node (connected through the same internal network) has a common VLE name and VLE number (callout 1 in Figure 3-1). The VLE name and number must be the same on each node in a multi-node VLE; the node name (callout 2) is specific to each node.

The VLE Name must be unique and should not be the hostname of any of the servers. The default VLE Name is VLE-NAME. You can reset the VLE Name when you run the setup_vle_node script. The value must be 1 to 8 characters in length, alphanumeric, uppercase. The name can contain a - (dash) but not at the beginning or the end.

Valid values for the VLE-number are 1-9.

In Figure 3-1, the VLE-Name and VLE-Number combination is DVTGRID8.

To the host software, the VLE-Name and VLE-Number combination is known as the subsystem name, and is specified in the following:

  • The STORMNGR parameter value on the VTCS CONFIG TAPEPLEX statement for the TapePlex that connects to the VLE or the NAME parameter on the CONFIG STORMNGR statement (ELS 7.1 and above).

  • The STORMNGR parameter value on the VTCS CONFIG RTD statement for the VLE.

  • The NAME parameter value on the SMC STORMNGR command that defines the VLE to SMC.

  • The STORMNGR parameter value on the SMC SERVER command for the VLE.

  • The STORMNGR parameter value on the HSC STORCLAS statement.
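
As an illustration of where the subsystem name appears, the following sketch shows the statements listed above using the name VLESERV1 (matching Example 3-1 later in this chapter). Treat it as an assumption-laden fragment rather than ELS reference material: the TapePlex name, RTD name, IPIF value, IP address, and port are placeholders, and the exact operand syntax (including which of the TAPEPLEX or STORMNGR forms applies at your ELS level) should be confirmed in the ELS Command, Control Statement, and Utility Reference.

VTCS CONFIG deck (fragment):

TAPEPLEX THISPLEX=TMVSA STORMNGR=VLESERV1
STORMNGR NAME=VLESERV1
RTD NAME=VLE1RTD1 STORMNGR=VLESERV1 IPIF=0A:0

SMC commands:

STORMNGR NAME(VLESERV1)
SERVER NAME(VLESERV1) STORMNGR(VLESERV1) IP(192.168.1.10) PORT(60000)

HSC STORCLAS statement:

STOR NAME(VLOCAL) STORMNGR(VLESERV1)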

Host Name for the Node

As shown in Figure 3-1, the Host Name for the Node, which is entered on the configure_vle script, appears as:

  • The Port's Host Name for the igb0 interface ID for the node.

  • The Host Name for the node selected in the node navigation tree.

In Figure 3-1, the Host Name for the node is dvtvle1.

Characters can be alpha-numeric (A-Z, a-z, 0-9) or "." or "-". The first and last characters of the string cannot be "." or "-". The name cannot be all-numeric. The name can be up to 512 characters long, though Internet standards and CAM limitations require that the host portion (not including the domain component) be limited to a maximum of 24 characters.

Determining Values for configure_vle

Required values for the configure_vle script include the following:

  • Hostname for the node; see "Host Name for the Node"

  • VLE static IP address for port igb0

  • Network number, which is the base address of the customer subnet

  • Netmask

  • The default router IP address (Gateway address)

  • The network domain name

  • The Name Server IP addresses

  • Network search names

  • NTP server/client setup (server or client, IP addresses of servers) and date/time values

Determining Values for setup_vle_node

Required values for the setup_vle_node script include the following:

  • VLE number and name; see "VLE Name and VLE Number".

  • Server Node number (SSN). For multi-node VLEs, each node requires a unique SSN. Valid values for SSN are 1 to 64.

  • Server time and date values.

Determining Values for Port Card Configuration

To configure the VLE Ethernet ports, you use the Connectivity View, Port Card Configuration tab shown in Figure 3-2. The following sections tell how to determine port card configuration values.

In Figure 3-2:

1 - Selected interface.

2 - Destination Routes panel to define remote VLE connections and static routes.

3 - Type of route shown by icons.

4 - Clear Netmask field by selecting blank item at top of drop down list.

5 - Content of bottom pane is filtered by interface selected in top pane. Click this button to show all routes for node.

Figure 3-2 VLE GUI Port Card Configuration Tab


Interface IDs

The Interface IDs identify the port. You can correlate this ID to ports using the Solaris command status_vle_ips. These identifiers are established before the VLE hardware is delivered, and cannot be modified.

Figure 3-3 shows the 1GigE Ethernet ports (igb4 to igb19) on the rear of the server.

1 - igb4, igb5, igb6, igb7 (from top to bottom)

2 - igb8, igb9, igb10, igb11 (from top to bottom)

3 - igb16, igb17, igb18, igb19 (from top to bottom)

4 - igb12, igb13, igb14, igb15 (from top to bottom)

Figure 3-3 VLE 1GigE Ethernet Data Ports


The 1GigE Ethernet ports are general purpose ports that can be used for UUI connection, Replication (VLE to VTSS data exchange), Remote Listener (VLE to VLE data exchange) or any combination of all three types.

VLE servers include two dual-port 10GigE NIC cards per server as shown in Figure 3-4.

1 - ixgbe0, ixgbe1 (from top to bottom)

2 - ixgbe2, ixgbe3 (from top to bottom)

Figure 3-4 VLE Dual-port 10GigE NIC cards


As Figure 3-4 shows, the 10GigE ports (in the red boxes) have interface IDs of ixgbe0-ixgbe3.


Note:

VLEs are shipped with a 1M fiber optic cable connecting ixgbe0 to ixgbe2. For single-node systems, leave the cable connected. VLEs are also shipped with two 25M fiber optic cables, which you can use to connect multi-node systems.

Ports ixgbe0 and ixgbe2 are reserved for:

  • Connections to the Oracle switch for three-node or greater configurations.

  • Direct connections to another node in two-node configurations. To connect two nodes, you can do one of the following:

    • Directly connect ixgbe0 on one node to ixgbe0 on the second node, and ixgbe2 on one node to ixgbe2 on the second node.

    • Connect the nodes through the Oracle switch.

Ports ixgbe1 and ixgbe3 are general purpose ports that can be used for UUI connection, Replication (VLE to VTSS data exchange), Remote Listener (VLE to VLE data exchange) or any combination of all three types.

VLE Ethernet Management Ports

The management ports are marked NET0-NET3 on the back of the case, as shown in Figure 3-5.

Figure 3-5 VLE Ethernet Management Ports


As shown in Figure 3-5:

  • The management ports (igb0 through igb3) can be on a network segment that is private or public and are typically used as follows:

    • igb0 (NET0) - Reserved for connection to the network for ASR traffic and managing the VLE software.

    • igb1 (NET1) - General purpose port, typically used for connection to the network for UUI (control path) traffic.

    • igb2 (NET2) - General purpose port, typically used for redundant UUI connection, or if you want separate ports for separate network segments for the host network and for the sending of ASR alerts.

    • igb3 (NET3) - Reserved as a dedicated port for service. Note that this port can use a single cable and function as both the service port and the ILOM port. Do not connect this port to the network. igb3 must remain free and open as an Ethernet port with a known access configuration so that it is always available for service. The preconfigured default IP addresses for igb3 are:

      • 10.0.0.10 for use as a service port. You use igb3 as a service port to access the VLE CLI.

      • 10.0.0.1 for use as an ILOM port.

Port's Host Name

The value is the machine (host) name for each IP address to be connected to a VTSS or another VLE. Characters can be alpha-numeric (A-Z, a-z, 0-9) or "." or "-". The first and last characters of the string cannot be "." or "-". The name cannot be all-numeric. The name can be up to 512 characters long, though Internet standards and CAM limitations require that the host portion (not including the domain component) be limited to a maximum of 24 characters. Note that the Port's Host Names for igb0 and igb3 are established during installation and cannot be changed at the GUI.

IP Address

The IP address assigned to the port, which must be a valid IPv4 address in the form "192.68.122.0". Each byte must be 0-255; there must be four bytes, numeric only except for the decimal points.

Netmask

The network mask for the port, which must be a valid IPv4 netmask in the form "255.255.255.0". Each byte must be 0-255; there must be four bytes, numeric only except for the decimal points.

Replication

Select the check box for each port that will be used for VLE to VTSS data exchange.

UUI

Select the check box for each port that will be used for UUI activity. This port is usually the one used for product configuration and monitoring (including the port used by the GUI browser connection).


Note:

Each VLE must have at least one UUI connection, and two or more are recommended for redundancy. If you have two or more in a multi-node VLE, spread the UUI connections out over different nodes.

Remote

This check box identifies the port as a "Listener" destination for a VLE-to-VLE data exchange. For VLE to VLE data transfers, the two unused 10GigE connections (ixgbe1 and ixgbe3) or any unused 1GigE connection can be used from any node in a VLE. If each VLE has two or more nodes, StorageTek recommends a minimum of one connection from each node to the other VLE. You can run more than one connection from a VLE node to another VLE's node, but you should never run multiple connections from a VLE node to a single port on the other VLE. If both VLEs have more than one node, StorageTek recommends spreading the VLE to VLE connections across all nodes in each VLE.

For example, VLE1 node 1 has a connection from 192.168.1.1 to VLE2 node 1 at 192.168.1.2. If a second connection is made from VLE1 node 1, it should not go to VLE2 at 192.168.1.2.

For VLE to VLE data transfers, each VLE requires a UUI connection and a VTSS connection. This ensures that VTCS can migrate and recall VTVs from either VLE.

Determining VMVC Range Configuration Values

Ensure that you assign VMVC names and ranges to fit within the site's naming scheme. VMVC names and ranges are set by the CSE during configuration, so it is best to have them assigned before configuration.

As shown in Figure 3-6, you use the VLE GUI's Create New VMVC dialog box (from the VMVC View with a specific node selected in the navigation tree) to specify volser ranges of new VMVCs.

Figure 3-6 VLE GUI Create New VMVC dialog box


You determine values for each of the fields in Figure 3-6 as follows:

  • Each of the fields allows 0-6 alpha-numeric characters, with the "assembly" limitations below.

  • Alphabetic characters are automatically converted to upper case; leading and trailing spaces in all fields are automatically removed.

  • Any of the fields can be empty, allowing the incremental value to be first, last, or in the middle of the volser range name.

  • Any of the fields can be either alphabetic or numeric, with field validations to restrict their usage where necessary. For instance, embedded spaces and special characters are not allowed. Invalid field entries are shown with a red box around the field, and selecting the OK button will display an error warning.

  • The "Incremental" range fields (prefix and suffix) can be either alphabetic or numeric. Field validations ensure that alphabetic and numeric characters are not mixed in either field, that the first value is less than the last value, and that maximum range limits are not exceeded.

  • The length of the entire volser name range is constructed by assembly of each field – the length of the prefix + length of ranges + length of the suffix.

    For example, you could enter a prefix of AB, a first of range of 001, a last of range of 500 and a suffix of X to build the volser name range of AB001X - AB500X. Similar combinations can be built. But the length of the entire assembly must add up to exactly six characters.

  • If the built-up name exceeds the valid 6-character volser name length (like AB0001XY - AB1500XY), clicking the OK button displays a warning dialog and does not allow the entry.

  • As the range is being built by editing fields, the resulting range is displayed on a line of the dialog just above the OK and Cancel buttons. The count of VMVCs in the range being built is also displayed in parentheses with the range. If the count exceeds the maximum allowed for the VLE node (shown in the "VMVC Counts" fields as Max), the text is displayed in bold orange. When the OK button is pressed, the current Available count is checked, and if the range exceeds this amount, an error dialog is displayed.

  • The suffix string must begin with a different character type (alphabetic, not numeric) than the incremental range strings. This is for compatibility with VTCS volser name range entry. If the range contains the same character type as the beginning of the suffix, the beginning characters of the suffix would be incremented in a range before those in the range fields; that is, VTCS volser name processing is based on character type, not on field-entry of ranges. For example, a GUI entry of 1000 for the First of range, 1094 for the Last of range, and a suffix of 55 would make a range of 100055-109455. On VTCS, this would expand to 100055, 100056, 100057…109455 rather than 100055, 100155, 100255…109455. Because it would be difficult for you to match the latter expansion in VTCS volser name range entry, this construction is prohibited in the GUI.

  • If you attempt to define overlapping ranges, only new VMVCs in the range will be added to any already-existing VMVCs (existing VMVCs will not be overwritten or cleared).

  • VMVCs have a nominal size of 250 GB (to the host software) and an effective size on the VLE of 1TB (assuming 4:1 compression). Table 3-4 shows the maximum VMVCs you can define for each VLE node capacity.

Table 3-4 VLE Effective Capacities - Maximum VMVCs Per Node

VLE Effective Capacity: Maximum VMVCs

  200 TB: 200
  400 TB: 400
  800 TB: 800
  1600 TB: 1600


  • The VMVC volser ranges you specify in the VLE GUI must match the volser ranges defined to VTCS!
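
For instance, if the Create New VMVC dialog box was used to build the AB001X-AB500X range described earlier, the matching host definition might resemble the following sketch. This is an assumption, not text from the ELS documentation: it presumes the site defines MVC pools with POOLPARM/VOLPARM statements, the pool name is a placeholder, and the statement syntax for your ELS level should be confirmed in the ELS reference before use.

POOLPARM NAME(VLEPOOL1) TYPE(MVC)
VOLPARM VOLSER(AB001X-AB500X)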

Planning for Encryption

VLE 1.1 and above provides encryption of VMVCs written to the VLE system. If a VTV is recalled to the VTSS, it is decrypted at the VLE before recall; therefore, the MVS host software has no knowledge of encryption.


Note:

  • The encryption algorithm used is AES-256-CCM. The access key is a 256-bit file.

  • A FIPS 140-2 certification request has been filed with NIST and is in progress.


Note that encryption is enabled, disabled, and managed at the VLE GUI by a StorageTek CSE or other QSP. Encryption is enabled on a per-node basis through an encryption key stored on the node and backed up on a USB device. You can mix encrypting and non-encrypting nodes in a multi-node VLE because the VLE decrypts VTVs, if required, regardless of where they reside on a multi-node VLE. If, however, you want to encrypt all VTVs on a multi-node VLE, then encryption must be enabled on all nodes.

Some implementation notes:

  • Before encryption is enabled, there must be no VMVCs on the node. Additionally, the USB key backup must be inserted in the node's USB port, and must be writeable and mounted by the operating system.

  • Similarly, before encryption is disabled, recall VTVs that you want to keep to the VTSS, then delete all VMVCs from the node.

  • Encryption keys do not expire, so do not generate a new key unless you must (for example, to meet security audit requirements). Before you assign a new key:

    • The USB key backup must be inserted in the node's USB port, and must be writeable and mounted by the operating system.

    • If you are certain you want to generate a new key, ignore the warning and overwrite the old key.

Planning for Deduplication

Deduplication eliminates redundant data in a VLE complex. As the deduplication percentage increases, migration performance can correspondingly improve and network use is reduced.

VLE deduplication is performed at the VLE, so the host job and the VTSS are not affected. When a deduplicated VTV is recalled, the VTV is "rehydrated" (reconstituted) at the VLE before it is recalled to the VTSS. Deduplication occurs on a tape block level within each node, and small blocks (less than 4K after compression) are not deduplicated.

Deduplication, which is controlled by the STORCLAS DEDUP parameter, increases the effective VLE capacity and is performed by the VLE before the VTV is written to a VMVC. For example, Example 3-1 shows deduplication enabled for two Storage Classes.

Example 3-1 Deduplication Enabled for Local and Remote Storage Classes

STOR NAME(VLOCAL) STORMNGR(VLESERV1) DEDUP(YES)
STOR NAME(VREMOTE) STORMNGR(VLESERV2) DEDUP(YES)

The STORCLAS statements in Example 3-1 specify deduplication for a "local" Storage Class (VLOCAL) on the VLE VLESERV1 and a "remote" Storage Class (VREMOTE) on the VLE VLESERV2.

Example 3-2 shows a Management Class that performs deduplication on the Storage Classes in Example 3-1. Any jobs that specify the DEDUP2 Management Class enable deduplication for the referenced Storage Classes.

Example 3-2 Management Class for Deduplication

MGMT NAME(DEDUP2) MIGPOL(VLOCAL,VREMOTE)

Note:

Deduplication occurs only after the DEDUP(YES) policy is set; that is, there is no retroactive deduplication.

Deduplication Guidelines

That's a quick "how to" for deduplication; now, what are the guidelines for what data should and should not be deduplicated? Many sources of mainframe data, such as syslogs, do not benefit from deduplication. Generally, data streams that contain timestamps (where every record is different) will not benefit from deduplication. Backup data streams (where the same records may be written multiple times) typically will benefit from deduplication.
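
To separate the two kinds of data, you might define one Storage Class with deduplication and one without, and route each workload through its own Management Class. The sketch below is an assumption built on the DEDUP parameter shown in Example 3-1: it presumes DEDUP(NO) is the complementary setting (omitting DEDUP should have the same effect, since deduplication occurs only when DEDUP(YES) is specified), and the class names are placeholders.

STOR NAME(VNODEDUP) STORMNGR(VLESERV1) DEDUP(NO)
MGMT NAME(NODEDUP) MIGPOL(VNODEDUP)

Jobs that write timestamp-heavy data (such as syslogs) would then use the NODEDUP Management Class, while backup workloads continue to use a deduplication-enabled class such as DEDUP2 from Example 3-2.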

Using the SCRPT Report

After deduplication is enabled, how do you know how well it is working? You can monitor the results with the SCRPT report, as shown in the example in Figure 3-7.

Figure 3-7 SCRPT Report


In Figure 3-7, the approximate reduction ratio for the data is Uncompressed Gb divided by Used Gb. The Reduction Ratio, therefore, includes both VTSS compression and VLE deduplication. A larger reduction ratio indicates more effective compression and deduplication.

For example, the VTSS receives 16 Mb of data, compresses it to 4 Mb, and writes the compressed data to a VTV. VLE subsequently deduplicates the VTV to 2 Mb and writes it to a VMVC. Thus, the reduction ratio is 16 Mb divided by 2 Mb, or 8.0:1.

Because the calculation is done using Mb, it is possible to see 0 Gb in the Used or Uncompressed fields yet see a reduction ratio other than 1.0:1.

Using the MEDVERIFY Utility

You can run the MEDVERify utility to verify that VTV data can be read on VMVCs (ELS 7.1 and VLE 1.2 and above only). For VLE, MEDVERify ensures that deduplicated VMVCs can be "rehydrated" (reconstituted) when recalled to the VTSS. MEDVERify reports on VMVCs that pass or fail verification and also produces XML output.

For example, to verify VTVs on the VMVCs defined in Example 3-1, enter:

MEDVER STOR(VLOCAL)
MEDVER STOR(VREMOTE)

In this example:

  • MEDVERify selects VMVCs in Storage Classes VLOCAL and VREMOTE.

  • MAXMVC defaults to 99.

  • CONMVC defaults to 1 so only a single VMVC is processed at a time.

  • No time out is specified.
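
If the defaults do not fit, the same utility can be run with explicit limits. The following is a sketch based on the parameters named above (MAXMVC and CONMVC); the values are illustrative, and the exact keyword spellings, including any time-out keyword, should be confirmed in the ELS Command, Control Statement, and Utility Reference.

MEDVER STOR(VLOCAL) MAXMVC(20) CONMVC(2)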

Reduced Replication

VLE 1.3 and above offers Reduced Replication, which, through VLE-to-VLE replication, allows VTVs to be copied in deduplicated format. The only data copied is data that did not reside on the destination VLE when the copy began. Reduced replication, therefore, reduces the amount of data copied, which lowers network use and copy times. To optimize Reduced Replication, ensure that deduplication is enabled for both the source and target Storage Class. Otherwise:

  • If deduplication is enabled for the source but not the destination Storage Class, then VTVs are "rehydrated" (reconstituted) before being copied.

  • If deduplication is enabled for the destination but not the source Storage Class, then VTVs are deduplicated when received at the destination.

For example, Example 3-3 shows a Management Class that performs Reduced Replication using the Storage Classes in Example 3-1.

Example 3-3 Management Class for Reduced Replication

MGMT NAME(REDREP) MIGPOL(VLOCAL,VREMOTE)

In Example 3-3 both Storage Classes are enabled for deduplication. Because the corresponding VLEs are connected and configured for VLE-to-VLE replication, any jobs that specify the REDREP Management Class produce Reduced Replication.

Planning for Link Aggregation

Link aggregation is available for IP configuration with VLE 1.4. A link aggregation consists of multiple interfaces on a VLE node that are configured together as a single, logical unit and share a common IP address. Figure 3-8 shows the Connectivity View, Port Aggregations tab, which you use to view the pre-defined "internal" aggregation port (such as AggrNode1) and its associated interfaces. You can also define and modify new custom aggregations using this tab.

In Figure 3-8:

1 - Currently selected aggregation.

2 - Drag up or down to resize panes.

3 - Drop down selection list of options.

4 - Pool of port interfaces available for aggregations.

5 - Interfaces in currently selected aggregation.

6 - Ports greyed out if wrong speed for aggregation.

7 - Move interfaces into and out of aggregations with arrow buttons.

Figure 3-8 VLE GUI Connectivity View, Port Aggregations Tab


Benefits of Link Aggregation

Link aggregation provides the following benefits:

  • Less complexity, simpler administration. Aggregations can simplify VLE configurations by reducing the number of IP addresses required to configure a VLE node, which also prevents drain on the customer address pool. Without link aggregation, more than twenty IP addresses can be required for a fully populated VLE node. Link aggregation can reduce the number of IP addresses to 2, 3, or 4 depending on whether the node has unique Replication, UUI, and/or remote VLE IP requirements.

  • Fault tolerance. With link aggregation, a link can fail and the traffic will switch to the remaining links, thus preventing an outage or job failure.

  • Load balancing and Bandwidth optimization. The load is balanced by distributing the load of both inbound and outbound traffic across all links in the aggregation. Using all links as one effectively increases bandwidth because traffic is spread evenly across the aggregated links. You can also increase effective bandwidth by increasing the number of links in the aggregation.

For examples of link aggregation, see Appendix B, "VLE Link Aggregation Examples".

Link Aggregation Requirements

  • All links in an aggregation must be the same speed. That is, you cannot configure a 1GigE and a 10GigE port in the same aggregation (the VLE GUI does not allow different port speeds in an aggregation).

  • The MTU (Maximum Transmission Unit) is configured for the entire aggregation by the Jumbo Frames check box of the Port Card Configuration tab (checking this box sets the MTU value to 9000 for the aggregation). The switch must support the MTU size, and have it enabled, for all ports within the channel group of the switch.

  • An aggregation can consist of a maximum of eight links, which is enforced by the VLE GUI.

  • In a switched environment, the first switch from the VLE must support Link Aggregation Control Protocol (LACP) IEEE 802.3ad and be configured for the aggregation mode. This switch is typically part of the customer network and is administered by a customer network administrator, who must configure it for the VLE aggregation. Ensure that you provide the details of the configuration to that administrator.

Switch Configuration

Note that the terms in the following sections vary between switch vendors. The terms and discussion below are based on CISCO Ethernet switches. Oracle switch terminology is very similar and can be found at:

http://docs.oracle.com/cd/E19934-01/html/E21709/z40016b9165586.html#scrolltoc

Channel Groups

A channel group is formed in the first switch that is directly connected to the VLE aggregation ports. Other switches or hops in the IP path need not be aware of the existence of the aggregation. The first switch is responsible for handling the traffic flow to and from the aggregation links. Each channel group is the logical grouping of an aggregation: a channel group is created for each aggregation and contains only the ports of the aggregation. The channel group ties the ports of an aggregation together so the switch can direct traffic to and from the aggregation. Because all ports connected to a channel group are treated as part of the aggregation, do not connect ports to a channel group that are not part of the aggregation. Each channel group has parameters defined for the type of LACP and so forth, and contains the rules for the aggregation.

VLANs

A typical switch configuration can consist of several VLANs (Virtual LANs) that connect the VLE to the system components, such as a VTSS or another VLE. A VLAN is a logical grouping of ports in the switch that appears externally as its own isolated switch. The VLAN typically comprises one or more channel groups created for an aggregation, along with the ports of the destination or target components, such as the VTSS or another switch in a multi-hop environment.

Jumbo Frames

The MTU (Maximum Transmission Unit) is configured for the entire aggregation by the Jumbo Frames check box of the Port Card Configuration tab (checking this box sets the MTU value to 9000 for the aggregation). If jumbo frames are enabled, then all switches between the VLE and its target components must also have jumbo frames enabled, as must all the ports of the VLAN.

LACP Mode

You can select one of the following LACP modes in the Aggregation Table of the Port Aggregations tab:

  • Off - Sometimes referred to as manual mode; indicates that LACP datagrams (LACPDUs) are not sent. Off is the only valid mode without a switch, and the non-switched configuration is valid only for VLE to VLE configurations. When using a switch with Off mode, LACP is not enabled in the channel group, and the switch must be configured to support the aggregation.

  • Passive – In Passive mode, datagrams are only sent when the switch requests one.

  • Active – Datagrams are sent to the switch at regular intervals. The timer default of short is used with VLE and is not adjustable with the VLE GUI or CLI.

Policies

P3 is the default VLE policy and is not adjustable through the VLE GUI or CLI.

10GigE Port Aggregations

The 10GigE links can be aggregated for VLE to VTSS, UUI, or VLE to VLE connections. Because UUI traffic is minimal, 10GigE aggregations used only for UUI provide minimal benefit. 10GigE aggregations that include all three types of connections, however, can prove beneficial. Note that for VLE to VTSS configurations, the switch environment typically has both 10GigE and 1GigE connections. In these configurations, the 1GigE VLE ports connect to the switch's 1GigE ports and the 10GigE VLE ports connect to the switch's 10GigE ports. The 10GigE ports would be in a channel group and part of a VLAN that contains both the 1GigE and 10GigE ports.

Monitoring Aggregations

Ensure that you regularly monitor aggregations. If an aggregated link fails, the other links in the aggregation continue to function, so VLE does not detect the failed link and does not generate an ASR. You cannot monitor the status of the individual links of the aggregation. To display the status of an aggregation, go to the Connectivity View - Port Status tab of a VLE node.

Note that if a link goes down, an entry is logged in /var/adm/messages. The message file is part of the nightly bundle so the log can be scanned regularly for failed links. The message in the logs looks like the following example:

Sep 4 08:30:16 dvtvle3 mac: [ID 486395 kern.info] NOTICE: igb12 link down

Types of VLE Aggregations

VLE supports three types of connections, each of which can be aggregated as described in the following sections:

VLE to VTSS Aggregations

Best Practices

  • Configure a minimum of two aggregations for each VTSS to prevent a total outage if an aggregation fails.

  • You can connect multiple VTSSs to the same aggregations. For example, for a VSM5 you can connect IFF0 from each VTSS to one aggregation and connect IFF2 from each VTSS to a second aggregation and so forth. If you are using only two aggregations, then you can connect IFF0 and IFF1 from each VTSS to the first aggregation and so forth.

  • Configure links to an aggregation horizontally across the VLE (igb4, igb8, igb12, igb16) to prevent an outage to an aggregation if a network adapter fails.

For examples of VLE to VTSS link aggregation, see Appendix B, "VLE Link Aggregation Examples".

VLE to VLE Aggregations

You can aggregate VLE to VLE connections as follows:

  • Non-switched - In a non-switched configuration, the same interfaces from two VLEs form the connection. The non-switched environment works the same as the internal network of a two-node VLE without a switch. Non-switched environments are limited to point-to-point configurations only.

  • Switched - A switched configuration is similar to the configuration described in "VLE to VTSS Aggregations". A channel group is formed in the switch for each aggregation and both channel groups reside in the same VLAN.

    With multi-node VLE, a single aggregation from one node can be connected to multiple nodes of another VLE or multiple VLEs in a switched environment.

VLE UUI Aggregations

Typically, you use ports igb1 and igb2 to make UUI connections. In this configuration, aggregate igb1 and igb2 to create a fault-tolerant configuration: if one of the links fails, the remaining link still provides the UUI connection. For additional redundancy on multi-node VLEs, aggregate two UUI connections on a second node.