
Pillar Axiom MaxRep Replication for SAN

Document Number: E35263-01

Document Title: Customer Release Notes

Revision History

Rev Description    Rev Date      Effective Date

Release 2.0        2012-03-07


1 Terms and Conditions of Use

All systems are subject to the terms and conditions of the software licensing agreements and relevant copyright, patent, and trademark laws. Refer to those documents for more information.

2 Purpose

This document describes new features, capacities, configuration requirements, operating constraints, known issues and their workarounds, and other items for release 2.0 of Oracle’s Pillar Axiom MaxRep Replication for SAN. The document covers hardware, firmware, software, cabling, and documentation. The information provided is accurate at the time of printing. Newer information may be available from your Oracle authorized representative.

3 Product Release Information

Release 2.0 is a feature release of the software for the Pillar Axiom MaxRep Replication for SAN.

3.1 System Enhancements

This update provides enhancements and quality improvements to Pillar Axiom MaxRep Replication for SAN.

LUNs displayed on the Pillar Axiom MaxRep Replication for SAN graphical user interface (GUI) will include the Pillar Axiom LUN name as well as the Axiom LUID.

PCLI errors that occur during normal operations are visible in the GUI.

LUNs that are used for snapshots are listed by LUN name for efficient searches.

For the list of defects that this release resolves, see Table 8 Resolved Pillar Axiom MaxRep Replication for SAN issues.

3.1.1 Software Enhancements

Pillar Axiom MaxRep allows you to manage the source, target, home, or retention LUNs, including the ability to map, unmap, and detect whether a LUN was resized. You can also discover, format, mount, and extend filesystems of locally mounted LUNs using the graphical user interface.

Write splits from the Pillar Axiom Replication Engine are now identified by the Replication Engine IP address, not the GUID of the Replication Engine.

Pillar Axiom MaxRep Replication for SAN provides a daemon process that monitors the system for LUNs that are mounted as read-only (RO) and then attempts to remount these LUNs as read-write. If any LUNs are RO, the daemon process stops all MaxRep agents and tries to remount the read-only LUN as read-write (RW). After the LUN is mounted as RW, the daemon starts all agent processes and remounts all virtual snapshots (vSnaps), if any exist. If the RO-to-RW mount process fails, the daemon process attempts to mount the LUN every 5 minutes.
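As a minimal illustration of the remount logic described above (this is not the actual daemon implementation; the /proc/mounts parsing and the mount invocation are assumptions for the sketch), the RO check and RW remount might look like:

```python
import subprocess

RETRY_INTERVAL = 5 * 60  # seconds; the daemon retries a failed remount every 5 minutes

def is_read_only(mounts_text: str, mount_point: str) -> bool:
    """Return True if mount_point appears with the 'ro' option in /proc/mounts-style text."""
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[1] == mount_point:
            return "ro" in fields[3].split(",")
    return False

def remount_rw(mount_point: str) -> bool:
    """Attempt to remount a read-only LUN read-write; True on success.
    The daemon would stop all MaxRep agents before this call and restart
    them (and remount any vSnaps) after it succeeds."""
    result = subprocess.run(["mount", "-o", "remount,rw", mount_point],
                            capture_output=True)
    return result.returncode == 0
```

On failure, the daemon described above simply sleeps for RETRY_INTERVAL and tries again.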

3.1.2 Replication Performance Enhancements

The default protection plan replication settings have been tuned for use with the Pillar Axiom system for better replication performance.

3.2 Changes to How the Pillar Axiom MaxRep Replication for SAN Operates

3.2.1 iSCSI Replication Support

Pillar Axiom MaxRep Replication for SAN provides support to replicate Pillar Axiom LUNs using iSCSI connectivity between the MaxRep Engine and the Pillar Axiom system.

In a mixed Fibre Channel (FC) and iSCSI environment, if the FC data path is not available, the Slammer Write Splitter uses the iSCSI data paths. Pillar Axiom MaxRep Replication for SAN R2.00 supports replication of Pillar Axiom storage arrays that are FC only, iSCSI only, or a combination of FC and iSCSI.

3.2.2 Importing Axiom SAN Hosts as MaxRep ACG

After Pillar Axiom MaxRep discovers and registers the Pillar Axiom system, the MaxRep GUI displays any connected SAN hosts as Access Control Groups (ACGs). You can use these ACGs to export your virtual snapshots (vSnaps). The Pillar Axiom MaxRep port configuration determines whether the vSnaps are exported over an FC or iSCSI network.

Pillar Axiom MaxRep allows you to view and delete imported ACGs. You can also view, edit, and delete the ACGs that are manually created. Imported SAN Host ACGs cannot be edited.

3.2.3 Multi-Hop Replication Support

Pillar Axiom MaxRep 2.0 supports true multi-hop replication. A full explanation of the multi-hop configuration is provided in the Pillar Axiom MaxRep Replication for SAN User’s Guide.

3.2.4 Heartbeat Monitoring

The Replication Engine registers Pillar Axiom systems using the Management Interface URL, which communicates directly with the Axiom Pilot. This communication link allows Pillar Axiom MaxRep to use a system heartbeat to collect Replication Engine status, coordinate active and standby Replication Engine logs, and generate exception alerts.

If a heartbeat to a Replication Engine is lost, the status of the Pillar Axiom system changes to a Warning state.
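The heartbeat check above reduces to a timeout comparison. The sketch below is illustrative only; the 120-second timeout is a hypothetical value, not a documented MaxRep setting:

```python
def axiom_status(last_heartbeat: float, now: float, timeout: float = 120.0) -> str:
    """Report 'Normal' while heartbeats arrive within the timeout, and
    'Warning' once a heartbeat from the Replication Engine is considered
    lost. Times are seconds since an arbitrary epoch; the timeout value
    is an assumption for this sketch."""
    return "Normal" if (now - last_heartbeat) <= timeout else "Warning"
```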

3.2.5 MaxRep Replication Agent Support

Pillar Axiom MaxRep supports host agents for a variety of operating systems, including Windows, Solaris, and Linux. An agent is also available for InMage Systems vContinuum running in a virtual environment.

3.2.6 Oracle Branding of the User Interface

The Pillar Axiom MaxRep GUI is branded with the Oracle logo and color scheme.

3.2.7 Improved Online Help System

The Pillar Axiom MaxRep interface contains context-sensitive help that provides relevant information about the currently displayed page. The help system is fully indexed and searchable. Each help topic contains links to related topics.

3.3 Pillar Axiom MaxRep Replication for SAN Software Update

3.3.1 Software Update Packages

Files include:

3.3.2 Release 2.00.04.00 Update Requirements

Pillar Axiom systems must be on R5.3.1 or higher.

Pillar Axiom Replication Engines must be on R1.00.05 or higher.

3.3.3 Release 2.00.03.00 Update Process

Contact the Oracle support center to schedule an upgrade.

4 Support

Various levels of customer service are provided on a contract basis for Oracle’s Pillar Axiom MaxRep Replication for SAN systems. If you have purchased a service contract from Oracle or Pillar Data Systems, authorized support personnel will perform support and repair according to the terms and conditions of that agreement.

Table 1 Contact information

For help with…     Contact…

Support            https://support.oracle.com

Training           https://education.oracle.com

Documentation      • Oracle Technology Network:
                     http://www.oracle.com/technetwork/indexes/documentation/index.html#storage
                   • From the Pillar Axiom Storage Services Manager (GUI):
                     Support > Documentation
                   • From Pillar Axiom HTTP access:
                     http://system-name-ip/documentation.php
                     where system-name-ip is the name or the public IP address of your system

Contact Oracle     http://www.oracle.com/us/corporate/contact/index.html

4.1 Supported Hardware Components in a Pillar Axiom MaxRep Replication for SAN System

Only Oracle-supplied parts for Pillar Axiom systems are supported. Hardware that does not conform to Pillar Axiom specifications or is not an Oracle-supplied part voids the warranty and might compromise data integrity.

4.1.1 Pillar Axiom Hardware Requirements

The following are the requirements for a Pillar Axiom system to be used as a source or target replication array:

All source and target Pillar Axiom systems must be Pillar Axiom 500 or Pillar Axiom 600 and running Pillar Axiom Storage Services Manager release 5.3.1, or higher.

Brick capacity must be sized properly to account for the additional capacity required for the replication solution. The Brick spindle count must be sized properly to account for the performance requirements for the replication solution.

4.1.2 Pillar Axiom Replication Engine Hardware Requirements

The following are the hardware requirements for the MaxRep for SAN Engine:

4.2 Access to Pillar Axiom MaxRep Replication for SAN Systems

Administrators have access to certain features of the product based on their administrative role. The following table outlines the role types:

Table 2 MaxRep user interface administrator roles

Major UI areas      Administrator role    Monitor role

Protect Context     Full Access           No Access
Monitor Context     Full Access           Full Access
Recover Context     Full Access           Limited Access (read-only view)
Settings Context    Full Access           Limited Access

4.3 Download Software Updates

Prerequisites:

Before attempting to download firmware or system software, contact Oracle Pillar Customer Support and open a Service Request (SR) for a software update.

Note: When the Support Center has verified that your system meets the prerequisites for the update, you will be sent a password that enables you to download the Axiom software and the firmware or software update will be made available to you.

Have the password on hand before you download the software. This password is valid only for seven days.

Tip: After signing in to My Oracle Support (MOS) in Step 1 below, you can view the current information about Pillar Axiom firmware and patches. To view this information, enter 1424495.1 in the Search Knowledge Base field in the upper right corner of the screen.

  1. When the software is available to you, point your browser to My Oracle Support (https://support.oracle.com/CSP/ui/flash.html) and sign in.

  2. On the top menu bar, click Patches & Updates.

  3. In the Patch Search frame, click Product or Family (Advanced).

  4. In the Product is list box, enter your system model.

     Tip: As you begin entering characters, appropriate items appear in the dropdown list. Choose the model that corresponds to your system.

  5. In the Release is list box, click to expand the Pillar Axiom model, select the Axiom software release, and then click Close.

  6. (Optional) For platform-dependent software such as Axiom Path Manager, in the list box to the right of Platform is, select the operating system appropriate for the client host.

  7. Click Search.

     Results: The Patch Search Results window displays.

     Note: Check the file size of the download and be sure your local system has sufficient space.

     Important! If you intend to use this local system to stage the software, ensure this system has free capacity that is at least 2.2 times the size of the file download.

  8. (Optional) To view the patch release notes, in the Patch Search Results window, click Read Me.

  9. To download the software package, click Download.

  10. To begin the download, click the name of the software archive.

      Results: A dialog opens requesting a password.

  11. Enter the password that Oracle Pillar Customer Support sent to you and then click Unlock.

  12. Browse to the location on your local system where you want to save the software update package.

      Tip: Record this location for later use. You will need this information when you stage the software to the Pillar Axiom system.

  13. Extract the contents of the downloaded zip file.

      Important! Be sure to preserve the original file names and extensions of the contents, because renaming might prevent successful staging.
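The free-capacity rule in the Important! note above can be expressed as a simple check. This is an illustrative sketch, not an Oracle tool; the function name and the use of /tmp as a staging location are assumptions:

```python
import shutil

STAGING_FACTOR = 2.2  # free capacity must be at least 2.2x the download size

def staging_space_ok(download_bytes: int, free_bytes: int) -> bool:
    """Check whether a staging system has enough free capacity for the update."""
    return free_bytes >= STAGING_FACTOR * download_bytes

# Example usage: check the filesystem that will hold the download
# free = shutil.disk_usage("/tmp").free
# print(staging_space_ok(download_size, free))
```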

4.4 Configuration Documentation

For information on the connectivity and interoperability of Pillar Axiom systems with various third-party software and hardware, see your Oracle Account Representative.

For information regarding the primary features of a Pillar Axiom MaxRep Replication system and how to configure them, see the Pillar Axiom Administrator’s Guide.

All other documentation can be obtained from the Oracle Technology Network:

http://www.oracle.com/technetwork/indexes/documentation/index.html#storage

5 Pillar Axiom MaxRep Replication for SAN System Limits

This version of the Pillar Axiom MaxRep Replication for SAN system operates within the supported limits listed below.

Important! Use care when operating a system that has been configured to run at or near the system operating limits. The system may exhibit anomalies when all limits are exercised concurrently. Also, starting a Pillar Axiom MaxRep Replication for SAN system from a powered-off or shutdown state takes longer, and the GUI is less responsive, under the following conditions:

Consult with Oracle OCS in North America (ACS elsewhere) to plan your Pillar Axiom MaxRep Replication for SAN system configuration prior to actual installation and configuration.

5.1 Pillar Axiom MaxRep Replication for SAN System Operating Limits

For detailed information on system limits, refer to the online help or to the Pillar Axiom MaxRep Replication for SAN User’s Guide PDF file (search for Ranges for Field Definitions).

5.1.1 Pillar Axiom MaxRep Replication for SAN Product Limits

The limits of the Pillar Axiom MaxRep Replication for SAN R2.0 are listed in the following table.

Table 3 Product limits for all Pillar Axiom MaxRep Replication for SAN systems

Specification                                                               Limit

Replication Engines per Configuration                                       8
Replication Engines per Axiom                                               8
Axioms per Replication Engine                                               8
Replicated LUNs per Replication Engine (Synchronous)                        120
Replicated LUNs per Replication Engine (Asynchronous)                       120
Replicated LUNs per Replication Engine (Asynchronous Multi-hop)             120
LUNs per Protection Plan                                                    120
Protection Plans per Replication Engine                                     Unlimited
Number of Retention LUNs per Replication Engine                             255
Max Capacity of each Retention LUN                                          2.0 TB
Daily Change Rate Limit per LUN per Replication Engine (FC)                 2.0 TB per day
Daily Change Rate Limit per LUN per Replication Engine (iSCSI)              1.2 TB per day
Daily Change Rate Limit per Protection Plan per Replication Engine (FC)     2.0 TB per day
Daily Change Rate Limit per Protection Plan per Replication Engine (iSCSI)  1.2 TB per day
Daily Change Rate Limit per Replication Engine (FC)                         2.0 TB per day
Daily Change Rate Limit per Replication Engine (iSCSI)                      1.2 TB per day
Application Consistency Agents per Replication Engine                       96
Virtual Snapshots per Replication Engine                                    2048
Physical Replication Copies per Replication Engine                          255

5.1.2 Replication Engines for each Configuration

The number of Replication Engines depends on the configuration. Up to four Replication Engines have been tested in non-high-availability (HA) solutions, and up to eight in HA solutions. Multi-hop supports up to three Replication Engines in non-HA configurations and six Replication Engines in HA configurations.

5.1.3 Replication Engines for each Axiom System

The maximum number of Replication Engines for each Pillar Axiom system is eight. Configure each Replication Engine in a separate Pillar Axiom MaxRep system.

5.1.4 Axioms for each Replication Engine

The maximum number of Pillar Axiom systems for each Replication Engine is eight.

5.1.5 Replicated LUNs for each Replication Engine

The tested limit for the maximum number of replicated LUNs is 120. A theoretical limit of 240 LUNs can be achieved for synchronous and asynchronous replication. However, exceeding the 120 LUN tested limit may result in significant performance penalties.

5.1.6 LUNs for each Protection Plan

The tested limit for the number of LUNs per protection plan is 120. The theoretical limit is 250.

5.1.7 Protection Plans for each Replication Engine

If only volume replication is being performed, the practical limit is the same as the number of replicated LUNs for each Replication Engine because each protection plan will have at least one LUN. There is no logical limit within the Pillar Axiom MaxRep software.

5.1.8 Number of Retention LUNs for each Replication Engine

The tested limit of Retention LUNs for each Replication Engine is four. The practical limit is less than 10, because each Retention LUN reduces the number of Replicated LUNs available for that Replication Engine.

5.1.9 Max Capacity of each Retention LUN

Larger filesystems are possible by using a larger block size; however, serviceability may become an issue. The recommended limit is 2.0 TB.

5.1.10 Daily Change Rate Limits

Daily change rate limits vary depending on block sizes, the write access patterns of the source LUN data, the performance capabilities of the target storage, and the available bandwidth between the source and target storage. Listed performance targets may not be met under highly random small-block I/O, or when target storage or bandwidth availability cannot meet the demand of the solution.
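As an illustration only, a measured workload can be compared against the per-engine daily change rate limits listed in Table 3. The function name and the decimal-terabyte unit are assumptions for this sketch, not part of the product:

```python
TB = 1000 ** 4  # decimal terabyte, matching the units used in Table 3

# Daily change rate limits per Replication Engine, from Table 3
DAILY_CHANGE_LIMITS = {"fc": 2.0 * TB, "iscsi": 1.2 * TB}

def within_daily_change_limit(connectivity: str, change_bytes_per_day: float) -> bool:
    """True when the workload's daily change rate fits the listed limit
    for the given connectivity ('fc' or 'iscsi')."""
    return change_bytes_per_day <= DAILY_CHANGE_LIMITS[connectivity]
```

Note that, per the text above, staying under these limits does not guarantee the targets are met under highly random small-block I/O.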

5.1.11 Application Consistency Agents for each Replication Engine

The limit for Application Consistency agents registered to a Replication Engine is 96.

5.1.12 Virtual Snapshots for each Replication Engine

The 2048 virtual snapshot limit is a tested limit.

5.1.13 Physical Replication Copies for each Replication Engine

The number of physical copies mapped from the Pillar Axiom system limits the number of replication LUNs available for that Replication Engine. A more practical limit is 128.

5.2 Pillar Axiom System Operating Limits

Consult the Pillar Axiom 500 and 600 Customer Release Notes, Release 5.3 for the operating limit requirements.

6 System Requirements

6.1.1 Pillar Axiom Storage System Management (ASSM) Requirements

ASSM on a source or target Pillar Axiom system must be at release 05.03.00 or higher.

6.1.2 Browser Requirements

Microsoft Internet Explorer 5.5 or later

Mozilla Firefox 3.5 or later

Screen resolution of 1024 x 768 pixels

Adobe Flash Player 10 or later

6.1.3 Network Requirements

Each Replication Engine that uses FC-only connectivity to a primary or secondary Pillar Axiom system requires two Ethernet connections: one Gigabit Ethernet (1 GbE) RJ45 connection for management, and one 100BT RJ45 connection for console access by technical support. To support IP bonding for the management interface, one additional 1 GbE RJ45 Ethernet port is required.

Each Replication Engine that uses iSCSI-only connectivity, or a combination of FC and iSCSI connectivity, to a primary or secondary Pillar Axiom system requires five Ethernet connections: four 1 GbE RJ45 connections for management and replication data flow, and one 100BT RJ45 connection for console access by Customer Support. Optional IP bonding is not available for iSCSI-connected Replication Engines.
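For planning, the connection counts above can be summarized in a small helper. This is an illustrative sketch with hypothetical names, not an Oracle sizing tool:

```python
def ethernet_connections(connectivity: str, ip_bonding: bool = False) -> int:
    """Ethernet connections required per Replication Engine.

    'fc': one 1 GbE management port plus one 100BT console port; IP bonding
    for the management interface adds one more 1 GbE port.
    'iscsi' (iSCSI-only or mixed FC/iSCSI): four 1 GbE ports for management
    and replication data plus one 100BT console port; bonding is unavailable.
    """
    if connectivity == "fc":
        return 2 + (1 if ip_bonding else 0)
    return 5
```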

Connectivity between metro area sites for synchronous replication must include an extension of the local SAN fabric to the remote site using dense wavelength division multiplexing (DWDM) over dark fibre, which is the network system that consists of fibre optic cables between the primary and secondary locations. Sufficient bandwidth must be available to accommodate the change rate of the source data as well as the target Pillar Axiom system writes and journaling.

Connectivity between sites for remote asynchronous replication must include sufficient WAN bandwidth to accommodate the change rate of the source data.

6.1.4 Pillar Axiom Replication Engine Network Requirements

Each Replication Engine that uses FC connectivity to a primary or secondary Pillar Axiom system requires eight connections. These connections are 4 Gb/s (FC) and are provided through LC connectors on the back of the Replication Engine.

7 Known Issues

The Pillar Axiom MaxRep Replication for SAN issues listed in Table 7 are known at the time of this release. They are planned for resolution in upcoming releases. When available, Oracle will provide updated software or hardware, as appropriate.

For additional information or help on any of the issues below, please contact your Oracle authorized representative (see Table 1 Contact information).

Table 7 Known Pillar Axiom MaxRep Replication for SAN issues

Issue

Workaround

Push installation of agents is not supported on RHEL 6 U2 (32/64-bit) or CentOS 6 U2 (32/64-bit).

Manual installation of agents is required for these operating systems. Refer to the agent documentation for manual installation instructions.

In certain conditions, iSCSI IQN ports are not reported after upgrade from R1.00.xx.

If iSCSI LUNs are to be protected, perform an Axiom rediscovery and iSCSI login manually after the upgrade.

  1. Log into the Pillar Axiom MaxRep Replication for SAN UI.

  2. From the Settings tab, choose Axioms > Manage Axioms.

  3. In the “Registered Axioms” table, locate the Axiom system to be discovered. In the Action column, select Re-Discover.

  4. In the Action column, select History to monitor the completion of the discovery process. When the discovery is complete, the status will indicate Success.

  5. From the Settings tab, choose Axioms > Toolkit for MaxRep.

  6. In the “Proceed With” table, select iSCSI Login and click Next.

  7. Select the Pillar Axiom system to perform the iSCSI login and click Submit.

  8. From the prompt, “Do you really want to Login to Axiom’s iSCSI portals?” click OK.
    The status of the Login displays as Pending.

  9. From the Settings tab, choose Axioms > Toolkit for MaxRep.

  10. In the “Proceed With” table, select iSCSI Login and click Show History.

  11. Continue to monitor the Login history for a Success status.

During upgrade from R1.00.xx, vSnaps which are exported over iSCSI will be unexported.

Manually export vSnaps after the upgrade.

After detecting a LUN resize from Toolkit for MaxRep, the updated information is not immediately available.

In some conditions, it may take up to 15 minutes for the updated size to be fully imported into the MaxRep Replication for SAN configuration.

Wait 15 minutes after detecting a LUN resize before making changes to a protection plan using that LUN.

Performing “Set Active” on a Standby MaxRep Engine requires additional steps.

After performing "Set Active" on a Standby MaxRep Engine, the fabric agent services need to be restarted.

Please contact Oracle Pillar Customer Support (see Table 1 Contact information) for completion of these steps.

If an Axiom is in discovery pending while upgrading from R1.00.xx, the Engine heartbeat to that Axiom fails to be enabled.

Prior to upgrading from R1.00.xx, check the status of each Axiom using the following process:

  1. Log into the Pillar Axiom MaxRep Replication for SAN UI.

  2. From the Settings tab, choose Axioms > Manage Axioms.

  3. In the Actions column of the Registered Axioms table, for each Axiom registered to the Replication Engine being upgraded, select History.

  4. Confirm that all Axiom Discovery processes are complete, and that none are in a pending state.

  5. If a discovery is in a pending state, allow time for that discovery to complete prior to performing the upgrade.

The MySQL service on the Pillar Axiom Replication Engine may fail to start after a power on reset of both the Pillar Axiom system and the Replication Engine.

In the event of performing a power on reset of the Pillar Axiom system and the Pillar Axiom Replication Engine:

  1. Power off the Pillar Axiom system and the Pillar Axiom Replication Engine using normal procedures.

  2. Remove power cables from the back of the Replication Engine.

  3. Power up the Axiom system.

  4. Ensure the Axiom system is back to a Normal status.

  5. Power on the Replication Engine by reinserting the power cables in the back of the Engine.

Replication pairs that are in “Resync” status during upgrade from R1.00.xx may remain stuck in “Resync” status after the upgrade.

This is due to differences in how the data comparison is performed during Resync Phase I between R1.00.xx and R2.00.xx. This issue can be avoided by waiting for all protection plans to be in Differential Sync status prior to upgrading from R1.00.xx.

After performing an upgrade to R2.00.xx, verify that all protection plans are in Differential Sync. This can be done as follows:

  1. Log into the Pillar Axiom MaxRep Replication for SAN UI.

  2. From the Monitor tab, choose Protection Status > Volume Protection.

     The system displays all of the active protection plans.

  3. From the Volume Protection table, check the Status column for any protection plans listed as Resync Phase I.

  4. If any protection plans have a status of Resync Phase I, monitor the protection plan for progress. If the resync status is not progressing, restart the resync for all pairs in the protection plan as follows:

  5. Log into the Pillar Axiom MaxRep Replication for SAN UI.

  6. From the Protect tab, choose Axioms > Manage Protection Plan.

  7. In the Action column of the Protection table, select Modify for the protection plan that needs to be resynchronized.

  8. Next to the Restart Resync label, select Click Here.

  9. Select the pairs in the protection plan that need to be resynchronized, and then click Restart Resync.

  10. The system displays a warning about pausing the scheduled snapshots. Click OK.

Running a Retention LUN near full capacity may result in one or more protection plans remaining in Resync Phase II.

A portion of each retention LUN is set aside to prevent data integrity issues. By default, this free space is 20 GB. If the amount of free space in the retention LUN falls below this threshold, pairs in protection plans using that retention LUN may stall in Resync Phase II.

Several options are available to prevent this:

  • Provision additional retention LUNs to accommodate the additional space needed.

  • Balance retention data across existing retention LUNs.

  • Use sparse retention to retain older data without using CDP.

  • Modify the retention policy of larger protection plans to purge older retention logs if insufficient storage space is encountered. Note that this option might impact the ability of that protection plan to meet the configured retention policy.
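The 20 GB reserve described above amounts to a simple free-space check. The sketch below is illustrative only; the function name and unit constant are assumptions, not part of MaxRep:

```python
GB = 1000 ** 3
DEFAULT_RESERVE = 20 * GB  # default free-space reserve on each retention LUN

def retention_lun_at_risk(free_bytes: int, reserve: int = DEFAULT_RESERVE) -> bool:
    """True once free space has dropped below the reserve; protection plans
    using this retention LUN may then stall in Resync Phase II."""
    return free_bytes < reserve
```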

In certain conditions, converting an HBA port from an Initiator to a Target port may take longer than expected.

Converting HBA ports from Initiator to Target ports is only performed during initial implementation. Wait for the conversion process to complete prior to configuring protection plans.

When a virtual snapshot is exported to a Windows host, Dummy_LUN_ZERO SCSI disk drives are seen.

Dummy LUN Zero entries appear on any initiator that is zoned with MaxRep Appliance Target (AT) ports. The number of these Dummy LUNs depends on the number of AT ports zoned to the initiators on the host. These LUNs can be ignored.

In a protection plan for 1-to-N configuration, the compression option is inactive.

In protection plans for 1-to-N system configurations, the compression option is automatic and based on the following criteria:

For a primary scenario with synchronous pairs:

  • Compression is disabled by default and the administrator cannot enable it.

For a primary scenario with asynchronous pairs:

  • Compression is enabled by default and the administrator can disable and enable it.

For any secondary scenario with synchronous pairs:

  • If the compression of the primary scenario is enabled, then the compression of the secondary scenario is also enabled and the administrator cannot disable it.

  • If the compression of the primary scenario is disabled, then the compression of the secondary scenario is also disabled and the administrator cannot enable it.

For any secondary scenario with asynchronous pairs:

  • If the compression of the primary scenario is disabled, then the compression of the secondary scenario is also disabled and the administrator can enable and disable it.

  • If the compression of the primary scenario is enabled, then the compression of the secondary scenario is also enabled and the administrator can enable and disable it.
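The compression rules above form a small decision table. The following sketch encodes them for illustration; the function name and arguments are hypothetical, not part of the MaxRep API:

```python
def compression_defaults(scenario: str, pairs: str, primary_enabled: bool = False):
    """Return (compression_enabled, admin_can_change) for a 1-to-N scenario.

    scenario: 'primary' or 'secondary'; pairs: 'synchronous' or 'asynchronous'.
    primary_enabled is the primary scenario's compression state and only
    matters for secondary scenarios, which inherit it.
    """
    if scenario == "primary":
        # Synchronous: disabled and locked; asynchronous: enabled and adjustable.
        return (False, False) if pairs == "synchronous" else (True, True)
    if pairs == "synchronous":
        return (primary_enabled, False)  # inherited from primary, locked
    return (primary_enabled, True)       # inherited from primary, adjustable
```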

If a source and target LUN in a replication pair need to be resized, a specific procedure must be followed to prevent the pair from going into a Resync Required status.

If source and target LUNs must be resized, resize the target LUN first. Resizing the source LUN first might cause the replication to fail because the system will attempt to replicate the data beyond the capacity of the target LUN.

To resize the replication pair use the following procedure:

  1. Follow standard Pillar Axiom procedures to resize the target LUN of the replication pair.

  2. Log into the Pillar Axiom MaxRep Replication for SAN UI.

  3. From the Support tab, choose Axiom > Toolkit for MaxRep.

  4. From the Proceed With table, select Detect Resize, and then click Next.

  5. In the Select Axiom table, select the Axiom system that contains the target LUN for the replication pair to be resized.

  6. From the LUN navigation tree, select the target LUN of the replication pair. Browse through the LUN tree by selecting the [+] symbol next to the target Pillar Axiom Replication Engine and navigate through the list of replication pair LUNs. Click Next.

  7. Confirm that the correct LUN is selected, and then click Submit.

  8. Choose Axiom > Toolkit for MaxRep.

  9. From the Proceed With table, click Show History.

  10. Locate the resized LUN and verify a status of Success.

  11. Continue to monitor for the success of the target LUN resize prior to continuing.

  12. Follow standard Pillar Axiom procedures to resize the source LUN of the replication pair.

  13. From the Pillar Axiom MaxRep Replication for SAN user interface Support tab, choose Axioms > Toolkit for MaxRep.

  14. In the Proceed With table, select Detect Resize and click Next.

  15. In the Select Axiom table, select the Axiom that contains the source LUN for the replication pair to be resized.

  16. From the LUN navigation tree, select the source LUN of the replication pair. Browse through the LUN tree by selecting the [+] symbol next to the source Pillar Axiom Replication Engine and navigate through the list of replication pair LUNs. Click Next.

  17. Confirm that the correct LUN is selected, and then click Submit.

  18. Choose Axiom > Toolkit for MaxRep.

  19. From the Proceed With table, click Show History.

  20. Locate the resized LUN and verify a status of Success.

  21. Continue to monitor for the success of the source LUN resize prior to continuing.

Results: The replication pair is successfully resized.

Resync Required events are not forwarded to ARU by the Pillar Axiom storage array.

This will be resolved with the Pillar Axiom Storage System Management (ASSM) release 05.03.02.

Periodically check the Protection Health status from the Monitor tab in the Pillar Axiom MaxRep UI.

Under certain conditions, the Pillar Axiom overall health status will indicate a Warning state even though all subcomponents are healthy.

This can occur on large implementations under certain circumstances that may prevent the heartbeat from the Pillar Axiom Replication Engine from reaching the Pillar Axiom system in a timely fashion.

Pressing Ctrl-Alt-R in the Pillar Axiom UI refreshes the overall status.



8 Resolved Issues

A number of issues, some previously undocumented, have been resolved in this release. Items that were documented as known issues in the previous release and are resolved with this release are described below. These items are no longer product issues.

Tip: For a complete listing of resolved issues, please contact Oracle Pillar Customer Support (see Table 1 Contact information).

Table 8 Resolved Pillar Axiom MaxRep Replication for SAN issues

Defect Number

Description

Bug 15878

Physical snapshots fail to get mounted

Bug 16127

Exchange 2007 consistency job failed.

Bug 16131

The vSnap is not mounted even though specified the mount point during configuration.

Bug 16296

VSS crashed

Bug 16628

Pair was deleted, but not reflected in Audit Log

Bug 17319

Bandwidth reports are not generated for primary appliance.

Bug 17527

UI issues during Resync Step-II.

Bug 17980

Incorrect warning message when deleting consistency jobs.

Bug 18252 / PDS69874

Resync Step 1 in MaxRep dashboard shows Resync Data in Transit (MB) in the Step 2 display area of the GUI.

Bug 18310

Oracle RAC- Reverse replication pairs struck at "Readiness check in Progress" state after Failover.

Bug 18424

During installation, bonded IP's are not listed at "The following master/free NICs were detected as active on this system:".

Bug 18499

ACG creation is not completed for iSCSI export after upgrade

Bug 18513

/home & /Retention are not being remounted as RW.

Bug 18555

Some ports are showing "0.0.0.0" during any action through "Toolkit for MaxRep".

Bug 18591

MaxRep toolkit is not listing the unused mapped and mounted LUNs for resize

Bug 18618

Installation of MaxRep build is stuck at "Executing the mkinitrd image now..."

Bug 18631

After Axiom rediscovery, modified LUNs are not updated in the UI.

Bug 18637

Prepare target failed for reverse replication pair.

Bug 18651

Axiom is not deregistered during uninstallation of the secondary appliance.

Bug 18652

Axiom deregistration does not clean up the nports table.

Bug 18679

iSCSI log-out sessions need to be suppressed at the time of uninstallation

Bug 18778

Although the resize is not detected by the Engine, a successful status is shown in "Policy History of Detect resize".

Bug 18797

Uninstallation exits on the second pass if the first pass was stuck due to issues such as network problems.

Bug 18798

Health issues are not properly updated for multiple pairs in a single plan.

Bug 18801

Consistency jobs are not created after upgrade

Bug 18803

Rollback stuck at "in progress"

Bug 18812

Uninstall script aborts when the CS is not reachable.

Bug 18856

Replication not progressing, data protection not being launched

Bug 18892

Replication Engine Ports Configuration does not allow configuring the iSCSI IQN port after upgrade.

Bug 18941

Fabric agent crashed after source Appliance failover.

Bug 19064

Corrections required to read-only-mount.log

Bug 19067

Resync set to YES for Async Pairs after upgrade.

Bug 19082

The same vSnap can be exported over both iSCSI and FC.

Bug 19104

iSCSI Login and Replication Appliance Heart Beat policies should be inserted for already registered Axioms during upgrade

Bug 19106

Pairs are not inserted after prepare target passed.

Bug 19112

Mapped LUNs are not listed under "unmap" list.

Bug 19141

While taking the physical snapshot "/var/crash" is listed under suggested drives.

Bug 19144

After Upgrade, 'readonlychecksd' service is not running.

Bug 19223

User should not be allowed to export the vSnap to different initiators at the same time (both over FC and iSCSI).

Bug 19249

Ports disappeared after moving a port from the ai port table to the ai_for_target table.

Bug 19251

When mapping LUNs from the Toolkit without providing a mount point, the mount point of another LUN is used.

Bug 19254

After upgrade, 'port type' entry is empty for some FC ports.

Bug 19256

Replication engine entries are still displayed in the Axiom UI after uninstallation of the MaxRep engine.

Bug 19257

Label changed from "Stop Replication" to "Delete Replication".

Bug 19266

Resync operation is not performed after a source LUN resize when restart resync is issued.

Bug 19278

Cluster ownership not taken by Active node.

Bug 19280

/home unmounted after Axiom reboot.

Bug 19332

Remove "Fabric name" field in recovery page.

Bug 19353

vSnap-related pop-ups should be more specific.

Bug 19375

Axiom deregister operation does not update the "registration status".

Bug 19390

readonlychecksd does not start after upgrading the engine.

Bug 19406

Remove "mount point is not specified" message from UI in recovery page.

Bug 19416

Not able to collect the logs using either CLI or UI.

Bug 19505 / PDS71307

After Active node failover on MaxRep HA appliance setup, the retention LUNs are not being mounted to either replication engine. Replication is stuck in "LUN Reconfiguration Pending" state.

Bug 19585

Certain snapshots not being listed under "Monitor scheduled snapshots" column.

Bug 19609

Vsnaps are not being unexported (unexport operation is not honored)

Bug 19622

"Custom Bandwidth" report redirects to Network traffic rates when "Complete Host Report" option is selected.

Bug 19623

vCon Resync required set to YES for the Failback Protections.

Bug 19649

VX Services need to be started if upgrade fails

Bug 19651

Upgrade failed due to vsnaps exported over iSCSI.

Bug 19652

Upgrade failure occurred during starting of the iSCSI target service.

Bug 19653

Exported vsnaps are deleted from the UI after upgrade.

Bug 19654

After Upgrade, iSCSI iqn ports are not reported.

Bug 19656

When moving the cursor over a LUN while configuring replication pairs, the tooltip is displayed in the wrong format.

Bug 19657

For certain imported ACGs, clicking "view" shows "Access Control Ports" as empty.

Bug 19659

Axiom Agent upgrade prompts for a mandatory reboot in order to complete the upgrade.

Bug 19660

vSnaps exported over iSCSI were unexported after upgrade.

Bug 19662

Pairs stuck at "Configuring LUN protection" state.

Bug 19666

Protection direction showing extra "()" [ "MAXREPCOS57 >MAXREP COS139()"] on 'Activation status' page.

Bug 19667

Agent not responding after activating the remote Rep Engine

Bug 19668

Recovery vSnapshot stuck at 0% after activating the remote Replication Engine.

Bug 19669

After upgrade, all pairs are stuck at "Reconfiguring LUN upon Reboot/Failover/Resize" state.

Bug 19673

NIC configuration does not update the new IQN and IP address for the AT port.

Bug 19695

Pairs which are deactivated and activated again after upgrade are stuck at Resync I.

Bug 19696

Resync flag set to Yes after restart resync for LUN resize.

Bug 19699

Time interval of Fx job configuration does not match consistency policy.

Bug 19720

After rebooting the secondary PS, all pairs associated with the secondary PS are stuck at "Reconfiguring LUN upon Reboot/Failover/Resize".

Bug 19723

RPO is not increasing if pairs are stuck at resync-II with data changes from resync-I.

Bug 19726

File system is not extended for the Home & Retention LUNs when detect resize performed from the Toolkit.

Bug 19742 / PDS71450

After upgrading MaxRep Engine, GUI issue seen

Bug 19743 / PDS71451

"iSCSI Port Registration Failed" alert is incorrect.

Bug 19744 / PDS71452

iSCSI displayed incorrectly in MaxRep GUI

Bug 19746 / PDS71456

After deselecting the export checkbox, the backup scenario can still be exported.

Bug 19747 / PDS71455

Not able to run backup scenario after upgrade

Bug 19760 / PDS71331

Axioms attached as part of MaxRep HA setup are showing intermittent heartbeat loss.

Bug 19770 / PDS71477

"Hourly data rate changes" graph not displayed after performing upgrade

Bug 19771 / PDS71483

Incorrect warning message after passing duplicate Trap Listener IP

Bug 19773 / PDS71484

Update mismatch alert seen when MaxRep engines were upgraded

Bug 19786

Consistency policy configuration page does not open after clicking the "Add consistency" button.

Bug 19792

Errors in sparse policy display in Review page of protection plan

Bug 19794

Wrong pop-up message displayed after configuring retention policy

Bug 19795

Able to activate consistency policy even without replication pairs

Bug 19798

Need to improve formatting of rows/columns in the review page of plan with sparse protection

Bug 19800

Protection type is not consistent across pages

Bug 19803

DB sync jobs are not triggered once it fails with exit code '23'.

Bug 19805

Consistency policy names are not listed in the monitor page after reactivating the consistency policies.

Bug 19808 / PDS71500

Data log Path in vsnap creation is displayed incorrectly

Bug 19810 / PDS71505

Audit logs are stating an Axiom is registered after Axiom registration failed.

Bug 19819 / PDS71542

Replication Options values remained unchanged after upgrade

Bug 19837 / PDS71548

During deactivation of plan protection, status showing 122

Bug 19838 / PDS71552

Rollback stuck in Target under Rollback.

Bug 19839 / PDS71553

Reporting issues seen after making Remote engine active.

Bug 19840 / PDS71557

MaxRep engine heartbeat no longer occurs after releasing and then re-applying the MaxRep license on either the secondary or the primary Axiom; the heartbeat never returns.

Bug 19841 / PDS71551

After performing deactivation and activation multiple times, pairs are stuck in "Configuring LUN protection".

Bug 19850

Used Retention LUNs are listed under Unmap list.

Bug 19867 / PDS71565

Add button for adding initiator iSCSI port for ACG creation does not work in Internet Explorer

Bug 19868 / PDS71572

Engine shows both nodes as active when one node is shut down.

Bug 19893

Incorrect port name is shown in SanHost ACG.

Bug 19900 / PDS71579

After deregistering Axioms from the MaxRep HA cluster, the engine is still seen in the Axiom.

Bug 19902 / PDS71609

Replication pairs are stuck at 0% of Resync I following power cycle of host and engine.

Bug 19903 / PDS71611

Plan stuck in "replication pending".

Bug 19922

Old toolkit map Show History is being overwritten

Bug 19929 / PDS71632

"Cumulative retention space usage" graph is blank

Bug 19930 / PDS71634

"Protection Differential sync reached" alerts showing as warning.

Bug 19931

Agent Heartbeat showing as red even though services are running fine.

Bug 19940 / PDS71645

At GUI page "Protect -> Manage Protected Disks/Volumes" the box title color for "Cleanup Replication Options" should match the tab color for "Protect"

Bug 19965

Export snapshot table name showing as "Export snapshot as FC" instead of "Export snapshot as FC/iSCSI"

Bug 19969 / PDS71661

No MaxRep errors occur with the Axiom powered down; the source LUN should be shown as inaccessible.

Bug 19970 / PDS71669

Call-home logs are no longer being generated for the RPO threshold exceeded alert.

Bug 19988

Converting ports stuck at Transient Pending

Bug 20030 / PDS71716

Mounted Retention LUNs are not getting resized from MaxRep Toolkit

Bug 20031 / PDS71727

Export failed when the ACG information listed the Access Control Ports with capital letters

Bug 20044

LUNs which are used as target LUNs are listed under lunmap list.

Bug 20054 / PDS71748

Unmap stuck in pending.

Bug 20080

Introduction of trap time interval in the SNMP trap settings

Bug 20406 / PDS71976

Axiom goes to warning state due to heartbeat loss when 100 pairs resync simultaneously

Bug 20506 / PDS72018

Axiom registration is failing.

Bug 20509 / PDS72060

Pairs not proceeding; stuck in "Configuring LUN protection".

Bug 20589

Spelling mistake in log file name collected from UI.

Bug 20719

App service utilized 100% of the CPU.

Bug 20764

Resync flag is not being set to yes even though protected source LUN was deleted from the Axiom

Bug 20765

Log rotation is not happening for a few of the logs.

Bug 20901

Resync flag set to "yes" after changing the source LUN name

Bug 20922

False alert messages indicating low free space are shown in host logs due to stale files in the retention folder.

Bug 20993

Not getting Traps for CS Node Failover.

PDS68024

Target port conversion stuck in Transient Pending

PDS68299

DB sync job fails after Axiom NDU due to /home being read only

PDS68880

UI doesn’t update the latest rollback success status.

PDS69236

Update the online help for unsupported traps

PDS69298

Uninstallation was successful; however, re-installation of the license caused continued call-home logs to be created stating that the license file was missing.

PDS69527

"No volume available on secondary server for storing retention logs" warning message seen for new pairs after starting 200 pairs successfully.

PDS69638

Pairs stuck in [Process Service/Target cleanup pending] after primary engine reboot.

PDS69702

Able to delete a registered Axiom while pairs were progressing.

PDS69782

In HA setup, export options are given in Rollback Scenario.

PDS69890

GUI response time is very slow

PDS69918

Getting an error while creating a consistency tag for Oracle LUNs.

PDS69943

"HA_DB_SYNC" - "Table './svsdb1/frbStatus' is marked as crashed and should be repaired"

PDS70221

MaxRep Dashboard is displaying the same Rollback Scenario Status twice on the "Manage Backup/Rollback Scenarios".

PDS70298

"EXT3-fs warning (device dm-37): ext3_dx_add_entry: Directory index full!"

PDS70299

On a Protection Plan, if a source LUN is removed, the GUI displays a gap in the remaining associations, and LUN sizes are wrong.

PDS70369

Cannot choose a target LUN with MaxRep on Asynch/IP protocol

PDS70459

Data held in the target cache; the engine pushes the data to the target LUNs slowly.

PDS70506

Sync pairs stuck in Resyncing (Step II)

PDS70556

Installation does not show the creation of eth3 file in ifaces directory

PDS70559

When configuring iSCSI NICs during installation, eth3 is assigned as the default instead of eth0

PDS70581

Users are unable to undo converted ports if the HBA port configuration is created incorrectly.

PDS70595

When registering an Axiom using an incorrect IP address, the MaxRep engine does not allow the user to halt registration, and it takes an excessive amount of time before the Axiom registration fails.

PDS70627

"Create Recovery Snapshots" page no longer displays items that were previously displayed correctly

PDS70702

GUI displays an "iSCSI Port Registration" alert for Axiom even though Axiom doesn't have any iSCSI ports

PDS70822

MaxRep: Unable to start mysqld service after moving /home to a storage LUN

PDS70840

MaxRep issue - SAS Controller not initialized during POST after power issue

PDS70852

Multiple defects related to pausing / stopping host based replication pairs

PDS70995

At "Settings > Axioms > Toolkit for MaxRep > Policy History for LUN Mapping", an incorrect request to map devices still shows "In Progress" and never fails.

Additional Notes

For items in this section that refer to inserting or removing field replaceable units (FRUs), refer to the Pillar Axiom Service Guide for more information.

For items in this section that refer to provisioning or configuring a Pillar Axiom system, refer to the Pillar Axiom Administrator's Guide for more information.

7 Technical Documentation Errata

The following sections describe topics in the technical documentation that could not be corrected in time for the current release of the Pillar Axiom MaxRep Replication for SAN.

Select the IP address of the Replication Engine that will serve as the Control Service Replication Engine for this Pillar Axiom system.

8 Online Help Known Issues

8.1.1 Navigation pane

(Internet Explorer and Firefox) The left navigation pane width is fixed and cannot be adjusted to view the text of long text entries.

8.1.2 Full text search

(Firefox) Search results text highlighting is disabled.





2 Throughout this document, all references to release 2.x apply to the Pillar Axiom MaxRep Replication for SAN product.