Pillar Axiom MaxRep Replication for SAN
Document Number: E35263-02
Document Title: Customer Release Notes

Revision History
Rev Description | Rev Date | Effective Date
Release 2.0 | 2015-09-15 | 2015-09-15
All systems are subject to the terms and conditions of the software licensing agreements and relevant copyright, patent, and trademark laws. Refer to those documents for more information.
This document describes new features, capacities, configuration requirements, operating constraints, known issues and their workarounds, and other items for release 2.0 of Oracle’s Pillar Axiom MaxRep Replication for SAN. The document covers hardware, firmware, software, cabling, and documentation. The information provided is accurate at the time of printing. Newer information may be available from your Oracle authorized representative.
Release 2.0 is a feature release of the software for the Pillar Axiom MaxRep Replication for SAN.
This update provides enhancements and quality improvements to Pillar Axiom MaxRep Replication for SAN.
Display LUN Name with LUN LUID
LUNs displayed on the Pillar Axiom MaxRep Replication for SAN graphical user interface (GUI) will include the Pillar Axiom LUN name as well as the Axiom LUID.
Reporting Plan Status
PCLI errors that occur during normal operations are visible in the GUI.
Snapshot Device Filtering
LUNs that are used for snapshots are listed by LUN name for efficient searches.
For the list of defects that this release resolves, see Table 8 Resolved Pillar Axiom MaxRep Replication for SAN issues.
LUN Management Enhancements
Pillar Axiom MaxRep allows you to manage the source, target, home, or retention LUNs, including the ability to map, unmap, and detect whether a LUN was resized. You can also discover, format, mount, and extend filesystems of locally mounted LUNs using the graphical user interface.
Write Split Information
Write splits from the Pillar Axiom Replication Engine are now identified by the Replication Engine IP address, not the GUID of the Replication Engine.
Automatic Correction of Read-only Mounts
Pillar Axiom MaxRep Replication for SAN provides a daemon process that monitors the system for LUNs that are mounted as read-only (RO). If any LUNs are RO, the daemon stops all MaxRep agents and attempts to remount each read-only LUN as read-write (RW). After the LUN is mounted as RW, the daemon starts all agent processes and remounts all virtual snapshots (vSnaps), if any exist. If the RO-to-RW remount fails, the daemon retries every 5 minutes.
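The detection step the daemon performs can be illustrated with a short shell sketch. The device names and mounts table below are hypothetical, and the actual daemon implementation is not published; this only shows the read-only detection logic under the assumption that mounts follow the standard /proc/mounts format.

```shell
# Hypothetical sketch of the daemon's read-only detection step, using the
# /proc/mounts format (device, mount point, fstype, options, dump, pass).
# A sample mounts table is used here so the logic runs without root access.
MOUNTS=$(mktemp)
cat > "$MOUNTS" <<'EOF'
/dev/sda1 /home ext3 rw,relatime 0 0
/dev/sdb1 /Retention ext3 ro,relatime 0 0
EOF
# Print each device whose mount options include "ro"; for each such LUN
# the daemon would stop the agents, run mount -o remount,rw <device>,
# and retry every 5 minutes if the remount fails.
awk '$4 ~ /(^|,)ro(,|$)/ {print $1}' "$MOUNTS"     # prints /dev/sdb1
```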
The default protection plan replication settings have been tuned for use with the Pillar Axiom system for better replication performance.
Pillar Axiom MaxRep Replication for SAN provides support to replicate Pillar Axiom LUNs using iSCSI connectivity between the MaxRep Engine and the Pillar Axiom system.
In a mixed Fibre Channel (FC) and iSCSI environment, if the FC data path is not available, the Slammer Write Splitter uses the iSCSI data paths. Pillar Axiom MaxRep Replication for SAN R2.0 supports replication for Pillar Axiom storage arrays that are FC only, iSCSI only, or a combination of FC and iSCSI.
After Pillar Axiom MaxRep discovers and registers the Pillar Axiom system, the MaxRep GUI displays any connected SAN hosts as Access Control Groups (ACGs). You can use these ACGs to export your virtual snapshots (vSnaps). The Pillar Axiom MaxRep port configuration determines whether the vSnaps are exported over an FC or iSCSI network.
Pillar Axiom MaxRep allows you to view and delete imported ACGs. You can also view, edit, and delete the ACGs that are manually created. Imported SAN Host ACGs cannot be edited.
Pillar Axiom MaxRep 2.0 supports true multi-hop replication. A full explanation of the multi-hop configuration is provided in the Pillar Axiom MaxRep Replication for SAN User’s Guide.
The Replication Engine registers Pillar Axiom systems using the Management Interface URL, which communicates directly with the Axiom Pilot. This communication link allows Pillar Axiom MaxRep to use a system heartbeat to collect Replication Engine status, coordinate active and standby Replication Engine logs, and generate exception alerts.
If a heartbeat to a Replication Engine is lost, the status of the Pillar Axiom system changes to a Warning state.
Pillar Axiom MaxRep supports host agents for a variety of operating systems, including Windows, Solaris, and Linux. An agent is also available for InMage Systems vContinuum running in a virtual environment.
The Pillar Axiom MaxRep GUI is branded with the Oracle logo and color scheme.
The Pillar Axiom MaxRep interface contains context-sensitive help that provides relevant information about the currently displayed page. The help system is fully indexed and searchable. Each help topic contains links to related topics.
Files include:
AxiomONE-MaxRep_ReplicationEngine_2.00.01.00_RHEL5U5-64_GA_27Dec2011_release.tar.gz
AxiomONE-MaxRep_ReplicationEngine_2.00.04.00_GA_4_21011_04APR12.tar.gz
md5sum.txt
MaxRep_Customer_Release_Notes_R02_00.pdf
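The md5sum.txt file in the list above can be used to verify the integrity of the downloaded archives. A minimal sketch follows; the archive name is a stand-in created so the commands run standalone, and in practice you would run the check in the directory containing the actual downloaded files.

```shell
# Verify downloaded archives against the supplied md5sum.txt, which lists
# one "<md5>  <filename>" pair per line (standard md5sum format).
# A sample archive and checksum file are created here for illustration only.
DIR=$(mktemp -d)
printf 'release payload' > "$DIR/engine.tar.gz"      # stand-in for a real download
( cd "$DIR" && md5sum engine.tar.gz > md5sum.txt )   # in practice, vendor-supplied
( cd "$DIR" && md5sum -c md5sum.txt )                # prints "engine.tar.gz: OK"
```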
The Pillar Axiom system must be running release 5.3.1 or higher.
Pillar Axiom Replication Engines must be running release 1.00.05 or higher.
Contact the Oracle support center to schedule an upgrade.
Various levels of customer service are provided on a contract basis for Oracle’s Pillar Axiom MaxRep Replication for SAN systems. If you have purchased a service contract from Oracle or Pillar Data Systems, authorized support personnel will perform support and repair according to the terms and conditions of that agreement.
Table 1 Contact information
For help with… | Contact…
Support |
Training |
Documentation | http://www.oracle.com/pls/topic/lookup?ctx=pillardocs
Support > Documentation
http://system-name-ip/documentation.php, where system-name-ip is the name or the public IP address of your system |
Contact Oracle |
Only Oracle-supplied parts for Pillar Axiom systems are supported. Hardware that does not conform to Pillar Axiom specifications or is not an Oracle-supplied part voids the warranty and might compromise data integrity.
The following are the requirements for a Pillar Axiom system to be used as a source or target replication array:
All source and target Pillar Axiom systems must be Pillar Axiom 500 or Pillar Axiom 600 systems running Pillar Axiom Storage Services Manager release 5.3.1 or higher.
For Fibre Channel (FC) only: The Pillar Axiom system must have FC SAN fabric connectivity.
For iSCSI only: The Pillar Axiom system must have iSCSI connectivity.
For FC and iSCSI: The Pillar Axiom system must support both FC SAN fabric and iSCSI connectivity.
Brick capacity must be sized properly to account for the additional capacity required for the replication solution. The Brick spindle count must be sized properly to account for the performance requirements for the replication solution.
The following are the hardware requirements for the MaxRep for SAN Engine:
For Fibre Channel (FC) support: The Replication Engine must have FC SAN fabric connectivity.
For iSCSI support: The Replication Engine must have Ethernet LAN connectivity.
Data change rates of more than 2 TB/day for FC or more than 1.2 TB/day for iSCSI require multiple Replication Engines. Performance targets may vary depending upon the data replicated, the host access patterns to that data, and the SAN, network, and target storage resources available to the Replication Engines included in the solution.
Up to eight Replication Engines can be registered to a single Pillar Axiom system.
All communications to primary or secondary Pillar Axiom Pilots use a 1 Gb Ethernet port.
Administrators have access to certain features of the product based on their administrative role. The following table outlines the role types:
Table 2 MaxRep user interface administrator roles
Major UI areas | Administrator role | Monitor role
Protect Context | Full Access | No Access
Monitor Context | Full Access | Full Access
Recover Context | Full Access | Limited Access (read-only view)
Settings Context | Full Access | Limited Access
Prerequisites:
Before attempting to download firmware or system software, contact Oracle Pillar Customer Support and open a Service Request (SR) for a software update.
Note: When the Support Center has verified that your system meets the prerequisites for the update, you will be sent a password that enables you to download the Axiom software, and the firmware or software update will be made available to you.
Have the password on hand before you download the software. This password is valid only for seven days.
Tip: After signing in to My Oracle Support (MOS) in Step 1 below, you can view the current information about Pillar Axiom firmware and patches. To view this information, search the knowledge base for article 1424495.1: in the Search Knowledge Base field in the upper right corner of the screen, enter 1424495.1.
When the software is available to you, point your browser to My Oracle Support (https://support.oracle.com/CSP/ui/flash.html) and sign in.
On the top menu bar, click Patches & Updates.
In the Patch Search frame, click Product or Family (Advanced).
In the Product is list box, enter your system model.
Tip: As you begin entering characters, appropriate items appear in the dropdown list. Choose the model that corresponds to your system.
In the Release is list box, click to expand the Pillar Axiom model, select the Axiom software release, and then click Close.
(Optional) For platform-dependent software such as Axiom Path Manager, in the list box to the right of Platform is, select the operating system appropriate for the client host.
Click Search.
Results: The Patch Search Results window displays.
Note: Check the file size of the download and be sure your local system has sufficient space.
Important! If you intend to use this local system to stage the software, ensure this system has free capacity that is at least 2.2 times the size of the file download.
(Optional) To view the patch release notes, in the Patch Search Results window, click Read Me.
To download the software package, click Download.
To begin the download, click the name of the software archive.
Results: A dialog opens requesting a password.
Enter the password that Oracle Pillar Customer Support sent to you and then click Unlock.
Browse to the location on your local system where you want to save the software update package.
Tip: Record this location for later use. You will need this information when you stage the software to the Pillar Axiom system.
Extract the contents of the downloaded zip file.
Important! Be sure to preserve the original file names and extensions of the contents, because renaming might prevent successful staging.
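The extraction step above can be sketched as follows. The document's package is a zip; the same principle applies to the .tar.gz archives it contains, which is what is shown here, and the sample archive is a stand-in created so the commands run standalone, since real package names vary by release.

```shell
# Extract the update package without renaming its contents; tar keeps the
# archived file names and extensions as-is, which is what successful
# staging requires. The sample archive stands in for the downloaded package.
SRC=$(mktemp -d)
STAGE=$(mktemp -d)
printf 'sample' > "$SRC/md5sum.txt"
tar -czf "$STAGE/pkg.tar.gz" -C "$SRC" md5sum.txt    # stand-in for the download
tar -xzf "$STAGE/pkg.tar.gz" -C "$STAGE"             # names preserved on extract
ls "$STAGE"                                          # original file names intact
```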
For information on the connectivity and interoperability of Pillar Axiom systems with various third-party software and hardware, see your Oracle Account Representative.
For information regarding the primary features of a Pillar Axiom MaxRep Replication system and how to configure them:
Navigate through the Pillar Axiom Storage Services Manager GUI.
Read the Pillar Axiom Administrator’s Guide PDF.
Read the Pillar Axiom MaxRep Replication for SAN User’s Guide PDF.
Read the Pillar Axiom MaxRep Replication for SAN Hardware Guide PDF.
Read the online help in the Pillar Axiom MaxRep Replication for SAN GUI.
The Pillar Axiom Administrator’s Guide can be obtained in any one of the following ways:
Point your browser to http://system-name-IP/documentation.php, where system-name-IP is the name or the public IP address of your system.
In the Pillar Axiom Storage Services Manager GUI, navigate to Support > Documents.
All other documentation can be obtained from the Oracle Technical Network:
http://www.oracle.com/technetwork/indexes/documentation/index.html#storage
This version of the Pillar Axiom MaxRep Replication for SAN system operates within the supported limits listed below.
Important! Use care when operating a system that has been configured to run at or near the system operating limits. The system may exhibit anomalies when all limits are exercised concurrently. Also, the time to start Pillar Axiom MaxRep Replication for SAN systems from a powered-off or shutdown state and the responsiveness of the GUI are extended under the following conditions:
You configure a system near one or more of its limits.
You increase the number of customer-defined system objects, such as protection plans, LUNs, clones, and so on.
Consult with Oracle OCS in North America (ACS elsewhere) to plan your Pillar Axiom MaxRep Replication for SAN system configuration prior to actual installation and configuration.
For detailed information on system limits, refer to the online help or to the Pillar Axiom MaxRep Replication for SAN User’s Guide PDF file (search for Ranges for Field Definitions).
The limits of the Pillar Axiom MaxRep Replication for SAN R2.0 are listed in the following table.
Table 3 Product limits for all Pillar Axiom MaxRep Replication for SAN systems
Specification | Limit
Replication Engines per Configuration | 8
Replication Engines per Axiom | 8
Axioms per Replication Engine | 8
Replicated LUNs per Replication Engine (Synchronous) | 120
Replicated LUNs per Replication Engine (Asynchronous) | 120
Replicated LUNs per Replication Engine (Asynchronous Multi-hop) | 120
LUNs per Protection Plan | 120
Protection Plans per Replication Engine | Unlimited
Number of Retention LUNs per Replication Engine | 255
Max Capacity of each Retention LUN | 2.0 TB
Daily Change Rate Limit per LUN per Replication Engine – FC | 2.0 TB per day
Daily Change Rate Limit per LUN per Replication Engine – iSCSI | 1.2 TB per day
Daily Change Rate Limit per Protection Plan per Replication Engine – FC | 2.0 TB per day
Daily Change Rate Limit per Protection Plan per Replication Engine – iSCSI | 1.2 TB per day
Daily Change Rate Limit per Replication Engine – FC | 2.0 TB per day
Daily Change Rate Limit per Replication Engine – iSCSI | 1.2 TB per day
Application Consistency Agents per Replication Engine | 96
Virtual Snapshots per Replication Engine | 2048
Physical Replication Copies per Replication Engine | 255
The limit on the number of Replication Engines depends on the configuration. Up to four process service engines have been tested in non-high availability (HA) solutions, and up to eight in HA solutions. Multi-hop supports up to three Replication Engines in non-HA and six Replication Engines in HA.
The maximum number of Replication Engines for each Pillar Axiom system is eight. Configure each Replication Engine in a separate Pillar Axiom MaxRep system.
The maximum number of Pillar Axiom systems for each Replication Engine is eight.
The tested limit for the maximum number of replicated LUNs is 120. A theoretical limit of 240 LUNs can be achieved for synchronous and asynchronous replication. However, exceeding the 120 LUN tested limit may result in significant performance penalties.
The tested limit for the number of LUNs per protection plan is 120. The theoretical limit is 250.
If only volume replication is being performed, the practical limit is the same as the number of replicated LUNs for each Replication Engine because each protection plan will have at least one LUN. There is no logical limit within the Pillar Axiom MaxRep software.
The tested limit of Retention LUNs for each Replication Engine is four. The practical limit is less than 10 because each Retention LUN reduces the available Replicated LUNs for that Replication Engine.
Larger filesystems are possible by using a larger block size; however, serviceability may be an issue. The recommended limit is 2.0 TB.
Daily change rate limits will vary depending upon block sizes and the write access patterns to the source LUN data, the performance capabilities of the target storage, and the available bandwidth between the source and target storage. Listed performance targets may not be met under conditions of highly random small block IO, or in cases where target storage or bandwidth availability cannot meet the demand of the solution.
The limit for Application Consistency agents registered to a Replication Engine is 96.
The 2048 virtual snapshot limit is a tested limit.
The number of physical copies mapped from the Pillar Axiom system limits the number of replication LUNs available for that Replication Engine. A more practical limit is 128.
Consult the Pillar Axiom 500 and 600 Customer Release Notes, Release 5.3 for the operating limit requirements.
Pillar Axiom Storage System Management (ASSM) on a source or target Pillar Axiom system must be at release 05.03.00 or higher.
Microsoft Internet Explorer 5.5 or later
Mozilla Firefox v24 through v31
Chrome v22.0.0.1229.0
Screen resolution of 1024 x 768 pixels
Adobe Flash Player 10 or later
Ethernet Ports
Each Replication Engine that uses FC only connectivity to a primary or secondary Pillar Axiom system requires two Ethernet connections: One Gigabit Ethernet (1 GbE) RJ45 connection for management, and one 100BT RJ45 connection for console access by technical support. To support IP bonding for the management interface, one additional 1 GbE RJ45 Ethernet port is required.
Each Replication Engine that uses iSCSI-only connectivity, or a combination of FC and iSCSI connectivity, to a primary or secondary Pillar Axiom system requires five Ethernet connections: four 1 GbE RJ45 connections for management and replication data flow, and one 100BT RJ45 connection for console access by Customer Support. Optional IP bonding is not available for iSCSI-connected Replication Engines.
Environment
Connectivity between metro area sites for synchronous replication must include an extension of the local SAN fabric to the remote site using dense wavelength division multiplexing (DWDM) over dark fibre, which is the network system that consists of fibre optic cables between the primary and secondary locations. Sufficient bandwidth must be available to accommodate the change rate of the source data as well as the target Pillar Axiom system writes and journaling.
Connectivity between sites for remote asynchronous replication must include sufficient WAN bandwidth to accommodate the change rate of the source data.
SAN Ports
Each Replication Engine that uses FC connectivity to a primary or secondary Pillar Axiom system requires eight connections. These connections are 4 Gb/s (FC) and are provided through LC connectors on the back of the Replication Engine.
The Pillar Axiom MaxRep Replication for SAN issues listed in Table 7 are known at the time of this release. They are planned for resolution in upcoming releases. When available, Oracle will provide updated software or hardware, as appropriate.
For additional information or help on any of the issues below, please contact your Oracle authorized representative (see Table 1 Contact information).
Table 7 Known Pillar Axiom MaxRep Replication for SAN issues
Issue |
Workaround |
When creating a new protection plan in MaxRep 2.00.05, certain browsers might not produce results as expected. After selecting the primary Pillar Axiom and a primary LUN and then clicking Next, a pop-up prompt displays the message, Select Primary Replication Engine. When the pop-up displays, you cannot make the selection and continue to the next step when using Mozilla Firefox v33.1.1 or Chrome 36.0.1985.143. |
When creating a new protection plan in MaxRep 2.00.05, the following browser versions can be used:
|
Push installation of agents is not supported on RHEL6-U2 32/64-bit or CentOS6-U2 32/64-bit. |
Manual installation of agents is required for these operating systems. Refer to the agent documentation for manual installation instructions. |
In certain conditions, iSCSI IQN ports are not reported after upgrade from R1.00.xx. |
If iSCSI LUNs are to be protected, perform an Axiom rediscovery and iSCSI login manually after the upgrade.
|
During upgrade from R1.00.xx, vSnaps which are exported over iSCSI will be unexported. |
Manually export vSnaps after the upgrade. |
After detecting a LUN resize from Toolkit for MaxRep, the updated information is not immediately available. |
In some conditions, it may take up to 15 minutes for the updated size to be fully imported into the MaxRep Replication for SAN configuration. Wait 15 minutes after detecting a LUN resize before making changes to a protection plan using that LUN. |
Performing “Set Active” on a Standby MaxRep Engine requires additional steps. |
After performing "Set Active" on a Standby MaxRep Engine, the fabric agent services need to be restarted. Contact Oracle Pillar Customer Support (see Table 1 Contact information) to complete these steps. |
If an Axiom is in discovery pending while upgrading from R1.00.xx, the Engine heartbeat to that Axiom fails to be enabled. |
Prior to upgrading from R1.00.xx, check the status of each Axiom using the following process:
|
The MySQL service on the Pillar Axiom Replication Engine may fail to start after a power on reset of both the Pillar Axiom system and the Replication Engine. |
In the event of performing a power on reset of the Pillar Axiom system and the Pillar Axiom Replication Engine:
|
Replication pairs that are in “Resync” status during upgrade from R1.00.xx may remain stuck in “Resync” status after the upgrade. |
This is due to differences in how the data comparison is performed during Resync Phase I between R1.00.xx and R2.00.xx. This issue can be avoided by waiting for all protection plans to be in Differential Sync status prior to upgrading from R1.00.xx. After performing an upgrade to R2.00.xx, verify that all protection plans are in Differential Sync. This can be done as follows:
|
Running a Retention LUN near full capacity may result in one or more protection plans remaining in Resync Phase II. |
A portion of each retention LUN is set aside to prevent data integrity issues. By default, this free space is 20 GB. If the amount of free space in the retention LUN drops below this threshold, pairs in protection plans using that retention LUN may stall in Resync Phase II. Several options are available to prevent this:
|
In certain conditions, converting an HBA port from an Initiator to a Target port may take longer than expected. |
Converting HBA ports from Initiator to Target ports is only performed during initial implementation. Wait for the conversion process to complete prior to configuring protection plans. |
When a virtual snapshot is exported to a Windows host, Dummy_LUN_ZERO SCSI disk drives are seen.
|
Dummy LUN Zero will be seen on any initiator that is zoned with MaxRep Appliance Target (AT) ports. The number of these Dummy LUNs depends on the number of AT ports zoned with the initiators on the host. These LUNs can be ignored. |
In a protection plan for 1-to-N configuration, the compression option is inactive. |
For protection plans in 1-to-N system configurations, the compression option is automatic and based on the following criteria: For a primary scenario with synchronous pairs:
For any secondary scenario with synchronous pairs:
For any secondary scenario with asynchronous pairs:
|
If a source and target LUN in a replication pair need to be resized, a specific procedure must be followed to prevent the pair from going into a Resync Required status. |
If the source and target LUNs must be resized, resize the target LUN first. Resizing the source LUN first might cause the replication to fail because the system attempts to replicate data beyond the capacity of the target LUN. To resize the replication pair, use the following procedure:
The replication pair is successfully resized. |
Resync Required events are not forwarded to ARU by the Pillar Axiom storage array. |
This will be resolved with the Pillar Axiom Storage System Management (ASSM) release 05.03.02. Periodically check the Protection Health status from the Monitor tab in the Pillar Axiom MaxRep UI. |
Under certain conditions, the Pillar Axiom overall health status will indicate a Warning state even though all subcomponents are healthy. |
This can occur on large implementations under certain circumstances that may prevent the heartbeat from the Pillar Axiom Replication Engine from reaching the Pillar Axiom system in a timely fashion. Pressing Ctrl-Alt-R in the Pillar Axiom UI refreshes the overall status.
A number of issues, some previously undocumented, have been resolved in this release. Items that were documented as known issues in the previous release and are resolved with this release are described below. These items are no longer product issues.
Tip: For a complete listing of resolved issues, please contact Oracle Pillar Customer Support (see Table 1 Contact information).
Table 8 Resolved Pillar Axiom MaxRep Replication for SAN issues
Defect Number |
Description |
Bug 15878 |
Physical snapshots fail to get mounted. |
Bug 16127 |
Exchange 2007 consistency job failed. |
Bug 16131 |
The vSnap is not mounted even though the mount point was specified during configuration. |
Bug 16296 |
VSS crashed. |
Bug 16628 |
Pair was deleted, but not reflected in Audit Log. |
Bug 17319 |
Bandwidth reports are not generated for primary appliance. |
Bug 17527 |
UI issues during Resync Step-II. |
Bug 17980 |
Incorrect warning message when deleting consistency jobs. |
Bug 18252 / PDS69874 |
Resync Step 1 in MaxRep dashboard shows Resync Data in Transit (MB) in the Step 2 display area of the GUI. |
Bug 18310 |
Oracle RAC: Reverse replication pairs stuck at "Readiness check in Progress" state after failover.
Bug 18424 |
During installation, bonded IPs are not listed at "The following master/free NICs were detected as active on this system:".
Bug 18499 |
ACG creation is not completed for iSCSI export after upgrade. |
Bug 18513 |
/home & /Retention are not being remounted as RW. |
Bug 18555 |
Some ports are showing "0.0.0.0" during any action through "Toolkit for MaxRep". |
Bug 18591 |
MaxRep toolkit is not listing the unused mapped and mounted LUNs for resize.
Bug 18618 |
Installation of MaxRep build is stuck at "Executing the mkinitrd image now...". |
Bug 18631 |
After Axiom rediscovery is modified, LUNs are not updated in UI. |
Bug 18637 |
Prepare target failed for reverse replication pair. |
Bug 18651 |
Axiom is not deregistered upon uninstallation of the secondary appliance.
Bug 18652 |
Axiom deregister does not clean up the nports table.
Bug 18679 |
iSCSI log-out sessions need to be suppressed at the time of uninstallation. |
Bug 18778 |
Although the resize is not detected by the Engine, a successful status is shown in "Policy History of Detect resize".
Bug 18797 |
Uninstallation exits on the second pass if the first pass was stuck due to issues such as network failures.
Bug 18798 |
Health issues are not properly updated for multiple pairs in a single plan. |
Bug 18801 |
Consistency jobs are not created after upgrade. |
Bug 18803 |
Rollback stuck at "in progress". |
Bug 18812 |
Uninstall script aborting while CS is not reachable. |
Bug 18856 |
Replication not progressing, data protection not being launched. |
Bug 18892 |
Replication Engine Ports Configuration does not allow configuring the iSCSI IQN port after upgrade.
Bug 18941 |
Fabric agent crashed after source Appliance failover. |
Bug 19064 |
Corrections required to read-only-mount.log. |
Bug 19067 |
Resync set to YES for Async Pairs after upgrade. |
Bug 19082 |
The same vSnap is exported over both iSCSI and FC.
Bug 19104 |
iSCSI Login and Replication Appliance Heart Beat policies should be inserted for already registered Axioms during upgrade. |
Bug 19106 |
Pairs are not inserted after prepare target passed. |
Bug 19112 |
Mapped LUNs are not listed under "unmap" list. |
Bug 19141 |
While taking the physical snapshot "/var/crash" is listed under suggested drives. |
Bug 19144 |
After Upgrade, 'readonlychecksd' service is not running. |
Bug 19223 |
User should not be allowed to export the vsnap to different initiators at the same time (both over FC and iSCSI). |
Bug 19249 |
Ports disappeared after moving the port to ai_for_target table from ai port table. |
Bug 19251 |
When mapping LUNs from the Toolkit without providing a mount point, the mount point of another LUN is used.
Bug 19254 |
After upgrade, 'port type' entry is empty for some FC ports. |
Bug 19256 |
Replication engine entries are still displayed in axiom UI after uninstallation of MaxRep engine. |
Bug 19257 |
Change label from "Stop Replication" to "Delete Replication".
Bug 19266 |
Resync operation not performed after a source LUN resize, even after restarting resync.
Bug 19278 |
Cluster ownership not taken by Active node. |
Bug 19280 |
/home unmounted after Axiom reboot. |
Bug 19332 |
Remove "Fabric name" field in recovery page. |
Bug 19353 |
vSnap related pop ups should be specific. |
Bug 19375 |
Axiom Deregister operation does not update the "registration status".
Bug 19390 |
readonlychecksd is not getting started after upgrading the engine. |
Bug 19406 |
Remove "mount point is not specified" message from UI in recovery page. |
Bug 19416 |
Not able to collect the logs using either CLI or UI. |
Bug 19505 / PDS71307 |
After Active node failover on MaxRep HA appliance setup, the retention LUNs are not being mounted to either replication engine. Replication is stuck in "LUN Reconfiguration Pending" state. |
Bug 19585 |
Certain snapshots not being listed under "Monitor scheduled snapshots" column. |
Bug 19609 |
vSnaps are not being unexported (the unexport operation is not honored).
Bug 19622 |
"Custom Bandwidth" report redirects to Network traffic rates when "Complete Host Report" option is selected. |
Bug 19623 |
+vCon Resync required set to YES for the Failback Protections. |
Bug 19649 |
VX Services need to be started if the upgrade fails.
Bug 19651 |
Upgrade failed due to vsnaps exported over iSCSI. |
Bug 19652 |
Upgrade failure occurred during starting of the iSCSI target service. |
Bug 19653 |
Exported vsnaps are deleted from the UI after upgrade. |
Bug 19654 |
After Upgrade, iSCSI iqn ports are not reported. |
Bug 19656 |
When moving the cursor over a LUN while configuring replication pairs, the cursor tip format is displayed incorrectly.
Bug 19657 |
For certain imported ACGs, when "View" is clicked, "Access Control Ports" is shown empty.
Bug 19659 |
Axiom Agent upgrade prompts for a mandatory reboot in order to complete the upgrade.
Bug 19660 |
vSnaps exported over iSCSI were unexported after upgrade.
Bug 19662 |
Pairs stuck at "Configuring LUN protection" state. |
Bug 19666 |
Protection direction showing extra "()" [ "MAXREPCOS57 >MAXREP COS139()"] on 'Activation status' page. |
Bug 19667 |
Agent not responding after activating the remote Rep Engine. |
Bug 19668 |
Recovery vSnap stuck at 0% after activating the remote Replication Engine.
Bug 19669 |
After upgrade, all pairs are stuck at "Reconfiguring LUN upon Reboot/Failover/Resize" state. |
Bug 19673 |
NIC configuration does not update the new iqn & IP address for AT port. |
Bug 19695 |
Pairs which are deactivated and activated again after upgrade are stuck at Resync I. |
Bug 19696 |
Resync flag set to Yes after restart resync for LUN resize. |
Bug 19699 |
Time interval of Fx job configuration does not match consistency policy. |
Bug 19720 |
After rebooting the secondary PS, all pairs associated with the secondary PS are stuck at "Reconfiguring LUN upon Reboot/Failover/Resize".
Bug 19723 |
RPO is not increasing if pairs are stuck at resync-II with data changes from resync-I. |
Bug 19726 |
File system is not extended for the Home & Retention LUNs when detect resize is performed from the Toolkit.
Bug 19742 / PDS71450 |
After upgrading MaxRep Engine, GUI issue seen. |
Bug 19743 / PDS71451 |
"iSCSI Port Registration Failed" alert is incorrect. |
Bug 19744 / PDS71452 |
iSCSI displayed incorrectly in MaxRep GUI. |
Bug 19746 / PDS71456 |
After deselecting the export checkbox, the backup scenario can still be exported.
Bug 19747 / PDS71455 |
Not able to run backup scenario after upgrade. |
Bug 19760 / PDS71331 |
Axioms attached as part of a MaxRep HA setup show intermittent heartbeat loss. |
Bug 19770 / PDS71477 |
"Hourly data rate changes" graph not displayed after performing upgrade. |
Bug 19771 / PDS71483 |
Incorrect warning message after entering a duplicate Trap Listener IP. |
Bug 19773 / PDS71484 |
Update mismatch alert seen when MaxRep engines were upgraded. |
Bug 19786 |
Consistency policy configuration page does not open after clicking the "Add consistency" button. |
Bug 19792 |
Errors in the sparse policy display on the Review page of a protection plan. |
Bug 19794 |
Wrong pop-up message displayed after configuring retention policy. |
Bug 19795 |
A consistency policy can be activated even without replication pairs. |
Bug 19798 |
Formatting of rows and columns needs improvement on the Review page of a plan with sparse protection. |
Bug 19800 |
Protection type is not consistent across pages. |
Bug 19803 |
DB sync jobs are not triggered after a job fails with exit code '23'. |
Bug 19805 |
Consistency policy names are not listed in the monitor page after reactivating the consistency policies. |
Bug 19808 / PDS71500 |
Data log path in vSnap creation is displayed incorrectly. |
Bug 19810 / PDS71505 |
Audit logs state that an Axiom is registered after the Axiom registration failed. |
Bug 19819 / PDS71542 |
Replication Options values remained unchanged after upgrade. |
Bug 19837 / PDS71548 |
During deactivation of plan protection, the status shows 122. |
Bug 19838 / PDS71552 |
Rollback stuck in Target under Rollback. |
Bug 19839 / PDS71553 |
Reporting issues seen after making the remote engine active. |
Bug 19840 / PDS71557 |
MaxRep engine heartbeat no longer occurs, and never returns, after releasing and then re-applying the MaxRep license on either the secondary or the primary Axiom. |
Bug 19841 / PDS71551 |
After performing deactivation and activation multiple times, pairs are stuck in "Configuring LUN protection". |
Bug 19850 |
Used Retention LUNs are listed in the Unmap list. |
Bug 19867 / PDS71565 |
The Add button for adding an initiator iSCSI port during ACG creation does not work in Internet Explorer. |
Bug 19868 / PDS71572 |
Engine shows both nodes as active when one node is shut down. |
Bug 19893 |
Incorrect port name is shown in SanHost ACG. |
Bug 19900 / PDS71579 |
After deregistering Axioms from the MaxRep HA cluster, the engine is still seen in the Axiom. |
Bug 19902 / PDS71609 |
Replication pairs are stuck at 0% of Resync I following power cycle of host and engine. |
Bug 19903 / PDS71611 |
Plan stuck in "replication pending". |
Bug 19922 |
Old toolkit map Show History is being overwritten. |
Bug 19929 / PDS71632 |
"Cumulative retention space usage" graph is blank. |
Bug 19930 / PDS71634 |
"Protection Differential sync reached" alerts show as warnings. |
Bug 19931 |
Agent heartbeat shows as red even though services are running normally. |
Bug 19940 / PDS71645 |
At GUI page "Protect -> Manage Protected Disks/Volumes" the box title color for "Cleanup Replication Options" should match the tab color for "Protect". |
Bug 19965 |
Export snapshot table name shows as "Export snapshot as FC" instead of "Export snapshot as FC/iSCSI". |
Bug 19969 / PDS71661 |
No MaxRep errors occur with the Axiom powered down; the source LUN should be shown as inaccessible. |
Bug 19970 / PDS71669 |
Call-home logs are no longer being generated for the RPO threshold exceeded alert. |
Bug 19988 |
Converting ports stuck at Transient Pending. |
Bug 20030 / PDS71716 |
Mounted Retention LUNs are not resized from the MaxRep Toolkit. |
Bug 20031 / PDS71727 |
Export failed when the ACG information listed the Access Control Ports in capital letters. |
Bug 20044 |
LUNs that are used as target LUNs are listed in the LUN map list. |
Bug 20054 / PDS71748 |
Unmap stuck in pending. |
Bug 20080 |
Introduction of trap time interval in the SNMP trap settings. |
Bug 20406 / PDS71976 |
Axiom goes to warning state due to heartbeat loss when 100 pairs resync simultaneously. |
Bug 20506 / PDS72018 |
Axiom registration is failing. |
Bug 20509 / PDS72060 |
Pairs not proceeding and stuck in "Configuring LUN protection". |
Bug 20589 |
Spelling mistake in the log file name collected from the UI. |
Bug 20719 |
App service utilized 100% of the CPU. |
Bug 20764 |
Resync flag is not set to Yes even though the protected source LUN was deleted from the Axiom. |
Bug 20765 |
Log rotation is not happening for a few of the logs. |
Bug 20901 |
Resync flag set to "yes" after changing the source LUN name. |
Bug 20922 |
False alert messages indicating low free space are shown in host logs due to stale files in the retention folder. |
Bug 20993 |
Traps are not received for CS node failover. |
PDS68024 |
Target port conversion stuck in Transient Pending. |
PDS68299 |
DB sync job fails after Axiom NDU due to /home being read-only. |
PDS68880 |
UI does not update the latest rollback success status. |
PDS69236 |
Update the online help for unsupported traps. |
PDS69298 |
Uninstallation was successful; however, reinstallation of the license caused continued call-home logs to be created stating that the license file was missing. |
PDS69527 |
"No volume available on secondary server for storing retention logs" warning message seen for new pairs after starting 200 pairs successfully. |
PDS69638 |
Pairs stuck in [Process Service/Target cleanup pending] after primary engine reboot. |
PDS69702 |
The registered Axiom can be deleted while pairs are progressing. |
PDS69782 |
In an HA setup, export options are given in the Rollback Scenario. |
PDS69890 |
GUI response time is very slow. |
PDS69918 |
Error while creating a consistency tag for Oracle LUNs. |
PDS69943 |
"HA_DB_SYNC" - "Table './svsdb1/frbStatus' is marked as crashed and should be repaired". |
PDS70221 |
MaxRep Dashboard displays the same Rollback Scenario status twice on the "Manage Backup/Rollback Scenarios" page. |
PDS70298 |
"EXT3-fs warning (device dm-37): ext3_dx_add_entry: Directory index full!". |
PDS70299 |
On a Protection Plan, if a source LUN is removed, the GUI displays a gap in the remaining associations and the LUN sizes are wrong. |
PDS70369 |
Cannot choose a target LUN with MaxRep on the Async/IP protocol. |
PDS70459 |
Data is held in the target cache, and the engine pushes the data to the target LUNs slowly. |
PDS70506 |
Sync pairs stuck in Resyncing (Step II). |
PDS70556 |
Installation does not show the creation of the eth3 file in the ifaces directory. |
PDS70559 |
When configuring iSCSI NICs during installation, eth3 is assigned as the default instead of eth0. |
PDS70581 |
Users are unable to undo converted ports if the HBA port configuration is created incorrectly. |
PDS70595 |
When registering an Axiom with an incorrect IP address, the MaxRep engine does not allow the user to halt registration, and the engine takes an excessive amount of time before the Axiom registration fails. |
PDS70627 |
"Create Recovery Snapshots" page no longer displays items that were previously displayed correctly. |
PDS70702 |
GUI displays an "iSCSI Port Registration" alert for an Axiom even though the Axiom does not have any iSCSI ports. |
PDS70822 |
MaxRep: Unable to start mysqld service after moving /home to a storage LUN. |
PDS70840 |
MaxRep issue - SAS Controller not initialized during POST after power issue. |
PDS70852 |
Multiple defects related to pausing or stopping host-based replication pairs. |
PDS70995 |
At the GUI page "Settings -> Axioms -> Toolkit for MaxRep -> Policy History for LUN Mapping", an incorrect request to map devices still shows "In Progress" and never fails. |
For items in this section that refer to inserting and/or removing field replaceable units (FRUs), please refer to the Pillar Axiom Service Guide for more information.
For items in this section that refer to provisioning or configuring a Pillar Axiom system, please refer to the Pillar Axiom Administrator’s Guide for more information.
The following sections describe topics in the technical documentation that could not be corrected in time for the current release of the Pillar Axiom MaxRep Replication for SAN.
On page 69, delete step 3.
On page 69, change step 5 to the following:
Select the IP address of the Replication Engine that will serve as the Control Service Replication Engine for this Pillar Axiom system.
(Internet Explorer and Firefox) The left navigation pane width is fixed and cannot be adjusted to view the text of long text entries.
(Firefox) Search results text highlighting is disabled.
Throughout this document, all references to release 2.x apply to the Pillar Axiom MaxRep Replication for SAN product.