The following restrictions and requirements have been added or updated since the Sun Cluster 3.0 U1 release.
If you are using a Sun Enterprise 420R server with a PCI card in slot J4701, the motherboard must be at dash-level 15 or higher (501-5168-15 or higher). To find the motherboard part number and revision level, look at the edge of the board closest to PCI slot 1.
If you use Enclosure-Based Naming of devices (a feature introduced in VxVM version 3.2), ensure that you use consistent device names on all cluster nodes that share the same storage. VxVM does not coordinate these names, so the administrator must ensure that VxVM assigns the same names to the same devices from different nodes. Although inconsistent names do not interfere with correct cluster behavior, they greatly complicate cluster administration and increase the risk of configuration errors, which can lead to data loss.
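One way to check naming consistency is to compare the device names that VxVM reports on each node. The following is a minimal sketch, assuming a two-node cluster whose nodes are named phys-node-1 and phys-node-2; the node names and temporary file paths are examples only.

    # On each node, as superuser, record the device names that VxVM assigns:
    vxdisk list | awk 'NR > 1 {print $1}' | sort > /tmp/vxnames.`hostname`

    # Copy one node's file to the other node, then compare the two lists.
    # If the names are consistent, diff produces no output:
    diff /tmp/vxnames.phys-node-1 /tmp/vxnames.phys-node-2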
If you use VERITAS Volume Manager (VxVM) 3.1.1 or higher, the path to the man pages has changed from /opt/VRTSvxvm/man to /opt/VRTS/man.
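If man pages fail to display after an upgrade, verify that your MANPATH includes the new location. A minimal example, assuming a Bourne-style shell:

    # Add the new VxVM man page directory to the search path:
    MANPATH=/opt/VRTS/man:$MANPATH
    export MANPATH
    man vxassist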
Sun SNDR 3.0 and Sun StorEdge Instant Image 3.0 are not supported with the Sun Cluster 3.0 general release. They are supported only with Sun Cluster 3.0 Update 1 and compatible versions.
The Sun StorEdge 3.0 services software includes Sun StorEdge Network Data Replicator (Sun SNDR) 3.0, Sun StorEdge Instant Image 3.0, and Sun StorEdge Fast Write Cache 3.0. The following configuration restrictions and requirements are unique to Sun Cluster 3.0 Update 1 when you run it with Sun StorEdge 3.0 services software.
For more information about these configuration restrictions and requirements, see the Sun Cluster 3.0 U1 and Sun StorEdge Software 3.0 Integration Guide. Be sure to read the integration guide before you install and administer Sun SNDR 3.0 or Sun StorEdge Instant Image 3.0 with Sun Cluster software.
Hardware Configuration Restrictions and Requirements
The Sun Cluster 3.0 Update 1 and Sun StorEdge 3.0 services software with patches are supported in a two-node cluster environment only.
You cannot use the Sun StorEdge Fast Write Cache (FWC) product (all versions, including the SUNWnvm Version 3.0 software) in any Sun Cluster environment because cached data is inaccessible from other machines in a cluster. To compensate, you can use a caching array such as the Sun StorEdge A3500 disk array.
Software Installation, Configuration, and Administration Restrictions and Requirements
Sun StorEdge 3.0 services software is supported as a cluster-aware product only with Sun Cluster 3.0 Update 1 and compatible versions.
With the Sun StorEdge Instant Image and Sun SNDR software, all constituent volumes of an Instant Image volume set (master, shadow, bitmap, and overflow) and of an SNDR replication set (primary or secondary volume, and bitmap) must be in the same disk device group. This requirement allows all constituent volumes in a set to fail over and switch back in their entirety. A Sun Cluster environment can contain more than one disk device group that holds Sun StorEdge Instant Image and Sun SNDR constituent volumes.
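As an illustration of this requirement, the following sketch enables an independent Instant Image volume set whose master, shadow, and bitmap volumes all reside in a single VxVM disk device group; the device group name mydg and the volume names are hypothetical examples.

    # All three constituent volumes belong to the same disk device group (mydg):
    iiadm -e ind /dev/vx/rdsk/mydg/master-vol \
        /dev/vx/rdsk/mydg/shadow-vol /dev/vx/rdsk/mydg/bitmap-vol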
The resource group created for Sun StorEdge services software is a lightweight resource group that contains only the associated disk device group and a logical failover host. Do not add other resources to this resource group, or the Sun StorEdge services software might not fail over or switch back properly.
The resource group name you specify must consist of the disk device group name appended with -stor-rg. For example, if the disk device group is named mydg, the resource group must be named mydg-stor-rg.
When you add the SUNW.HAStorage resource to the resource group, be sure the AffinityOn property is set to True.
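The following sketch shows how such a lightweight resource group might be created with the Sun Cluster 3.0 commands. It assumes a disk device group named mydg, cluster nodes named phys-node-1 and phys-node-2, a logical hostname lhost-1, and a resource named mydg-stor; all of these names are examples.

    # Create the resource group; its name is the device group name plus -stor-rg:
    scrgadm -a -g mydg-stor-rg -h phys-node-1,phys-node-2

    # Add the logical failover host:
    scrgadm -a -L -g mydg-stor-rg -l lhost-1

    # Register the SUNW.HAStorage resource type (needed once per cluster):
    scrgadm -a -t SUNW.HAStorage

    # Add the SUNW.HAStorage resource with AffinityOn set to True:
    scrgadm -a -j mydg-stor -g mydg-stor-rg -t SUNW.HAStorage \
        -x ServicePaths=mydg -x AffinityOn=True

    # Bring the resource group online:
    scswitch -Z -g mydg-stor-rg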
When you install Sun StorEdge 3.0 services software on a cluster, you must specify on the second cluster node the same configuration volume location that you entered on the first node. A warning message then states that the configuration is already initialized; you can safely ignore this message.
When you shut down and restart cluster nodes, the node that you are restarting might panic. This panic is expected behavior in the cluster and is part of the cluster software's failfast mechanism, which is described in the Sun Cluster 3.0 U1 Concepts manual.
To avoid corrupting the Sun StorEdge services configuration, only one system administrator or root user at a time should create and configure Sun StorEdge volume sets from a single server or node. Two administrators must not write to the Sun StorEdge services configuration at the same time. The operations that access the configuration include, but are not limited to, the following:
- Creating and deleting volume sets
- Adding and removing volume sets from I/O groups
- Assigning new bitmap volumes to a volume set
- Updating the disk device group or resource name
- Any operation that changes the Sun StorEdge services and related volume set configuration
After you synchronize the primary and secondary Sun SNDR volumes, the cluster file system creates a directory named ._ (period underscore). You can ignore or delete this directory. When you unmount the cluster file system, the directory disappears.
In addition to the Sun Cluster commands, you can use the C tag and -C tag options of the sndradm and iiadm commands to control the Sun StorEdge services in a Sun Cluster environment. These options are described in the Sun SNDR and Sun StorEdge Instant Image system administration documentation.
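For example, assuming a disk device group named mydg is used as the cluster tag, a command such as the following might list the Instant Image volume sets that belong to that device group (a sketch only; see the system administration documentation for exact syntax):

    # List the Instant Image volume sets associated with the mydg cluster tag:
    iiadm -C mydg -i all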
Identify requirements for all data services before you begin Solaris and Sun Cluster installation. Failure to do so might result in installation errors that require that you completely reinstall the Solaris and Sun Cluster software.
For example, the Oracle Parallel Fail Safe/Real Application Clusters Guard option of Oracle Parallel Server/Real Application Clusters has special requirements for the hostnames/node names that you use in the cluster. You must accommodate these requirements before you install Sun Cluster software because you cannot change hostnames after the Sun Cluster software is installed. For more information on these special requirements, see the Oracle Parallel Fail Safe/Real Application Clusters Guard documentation.