This section provides information related to new features, functionality, and supported products in Sun Cluster 3.1 10/03 software.
The Cluster Reconfiguration Notification Protocol (CRNP) provides a mechanism for applications to register for, and receive subsequent asynchronous notification of Sun Cluster reconfiguration events. Both data services running on the cluster and applications running outside of the cluster can register for event notification. Notifications include changes in cluster membership, resource groups, and resource state.
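The registration exchange is XML over TCP. The following C sketch illustrates the idea only: the node name is hypothetical, and the default port (9444) and the SC_CALLBACK_REG message form shown here are assumptions that should be verified against the Sun Cluster 3.1 Data Services Developer's Guide.

```c
/* Minimal CRNP registration sketch.  Assumptions: the CRNP server
 * (cl_apid) listens on port 9444, and registration uses an
 * SC_CALLBACK_REG message; verify both against the developer's guide. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>

int
main(void)
{
    const char *reg_msg =
        "<?xml version=\"1.0\"?>"
        "<SC_CALLBACK_REG protocolVersion=\"1.0\" regType=\"ADD_CLIENT\" "
        "port=\"9000\">"                        /* our callback listener */
        "<SC_EVENT_REG class=\"EC_Cluster\"/>"  /* hypothetical class    */
        "</SC_CALLBACK_REG>";
    struct hostent *hp = gethostbyname("clusternode1"); /* hypothetical */
    struct sockaddr_in sin;
    int fd;

    if (hp == NULL) {
        fprintf(stderr, "unknown host\n");
        return (1);
    }
    fd = socket(AF_INET, SOCK_STREAM, 0);
    memset(&sin, 0, sizeof (sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(9444);         /* assumed default CRNP port */
    memcpy(&sin.sin_addr, hp->h_addr_list[0], hp->h_length);
    if (connect(fd, (struct sockaddr *)&sin, sizeof (sin)) < 0) {
        perror("connect");
        return (1);
    }
    /* Send the registration; events are then delivered asynchronously
     * to the callback host and port named in the message. */
    (void) write(fd, reg_msg, strlen(reg_msg));
    (void) close(fd);
    return (0);
}
```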
Disk-Path Monitoring (DPM) informs system administrators of disk-path failures on both primary and secondary paths. The disk-path failure detection mechanism generates an event through the Cluster Event Framework and allows manual intervention.
This feature stripes IP traffic sent to the per-node logical IP addresses across all private interconnects. TCP traffic is striped on a per-connection basis; UDP traffic is striped on a per-packet basis.
The integration of Sun's eRAS knowledge engine with the sccheck(1M) utility greatly increases the power of sccheck to detect “vulnerable” configurations by leveraging many existing eRAS checks. Vulnerability reports are produced for both individual nodes and the cluster as a whole.
This feature enables the use of Role Based Access Control (RBAC) for cluster administration and operation.
This feature extends Sun Cluster functionality to support single-node clusters.
This feature enables developers to use Sun ONE Studio's development environment to create agents.
This feature enhances scinstall(1M) to install all nodes of a new cluster from a single point of control. Additionally, it provides compatibility with the Solaris Web Start installation tool.
Localized Sun Cluster components are now available in five languages and can be installed using the Web Start program. For more information, see Sun Cluster 3.1 10/03 Software Installation Guide.
| Language | Localized Sun Cluster Components |
|---|---|
| French | Installation, Cluster Control Panel (CCP), Sun Cluster software, Sun Cluster data services, Sun Cluster module for Sun Management Center, SunPlex Manager |
| Japanese | Installation, Cluster Control Panel (CCP), Sun Cluster software, Sun Cluster data services, Sun Cluster module for Sun Management Center, SunPlex Manager, Sun Cluster man pages, Cluster Control Panel man pages, Sun Cluster data services man pages |
| Simplified Chinese | Installation, Cluster Control Panel (CCP), Sun Cluster software, Sun Cluster data services, Sun Cluster module for Sun Management Center, SunPlex Manager |
| Traditional Chinese | Installation, Cluster Control Panel (CCP), Sun Cluster software, Sun Cluster data services, Sun Cluster module for Sun Management Center, SunPlex Manager |
| Korean | Installation, Cluster Control Panel (CCP), Sun Cluster software, Sun Cluster data services, Sun Cluster module for Sun Management Center, SunPlex Manager |
For information on data services enhancements, see Sun Cluster 3.1 Data Services 10/03 Release Notes.
This section describes the supported software and memory requirements for Sun Cluster 3.1 10/03 software.
Operating environment and patches – Supported Solaris versions and patches are available at the following URL:
For more details, see Patches and Required Firmware Levels.
Volume managers –
On Solaris 8 – Solstice DiskSuite™ 4.2.1 and VERITAS Volume Manager 3.2 and 3.5.
On Solaris 9 – Solaris Volume Manager and VERITAS Volume Manager 3.5.
If you are upgrading from VERITAS Volume Manager (VxVM) 3.2 to 3.5, the Cluster Volume Manager (CVM) feature will not be available until you install the CVM license key for version 3.5. In VxVM 3.5, the CVM license key for version 3.2 does not enable CVM and must be upgraded to the CVM license key for version 3.5.
File systems –
On Solaris 8 – Solaris UFS and VERITAS File System 3.4 and 3.5.
On Solaris 9 – Solaris UFS and VERITAS File System 3.5.
Data services (agents) – For information on supported data services, see Sun Cluster 3.1 Data Services 10/03 Release Notes.
Sun Cluster 3.0 data services can run on Sun Cluster 3.1 10/03 software, except as noted in Running Sun Cluster HA for Oracle 3.0 on Sun Cluster 3.1 10/03 Software.
Memory Requirements – Sun Cluster 3.1 10/03 software requires extra memory beyond what is configured for a node under a normal workload. The extra memory equals 128 Mbytes plus ten percent of the node's normal memory requirement. For example, if a standalone node normally requires 1 Gbyte of memory, you need an extra 256 Mbytes to meet memory requirements.
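To make the arithmetic concrete, here is a minimal sketch; rounding the result up to the next 128-Mbyte increment is an assumption that reconciles the formula with the 1-Gbyte example (128 + 102 Mbytes rounds up to 256 Mbytes).

```c
/* Extra-memory estimate for a cluster node: 128 Mbytes plus ten percent
 * of the node's normal memory requirement.  Rounding up to the next
 * 128-Mbyte increment is an assumption that matches the 1-Gbyte example. */
#include <stdio.h>

static unsigned int
extra_mbytes(unsigned int normal_mbytes)
{
    unsigned int raw = 128 + normal_mbytes / 10;

    return (((raw + 127) / 128) * 128);
}

int
main(void)
{
    /* A standalone node that normally requires 1 Gbyte (1024 Mbytes). */
    printf("extra memory needed: %u Mbytes\n", extra_mbytes(1024));
    return (0);
}
```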
RSMAPI – Sun Cluster 3.1 10/03 software supports the Remote Shared Memory Application Programming Interface (RSMAPI) on RSM-capable interconnects, such as PCI-SCI.
The following restrictions apply to the Sun Cluster 3.1 10/03 release:
For other known problems or restrictions, see Known Issues and Bugs.
Multihost tape, CD-ROM, and DVD-ROM are not supported.
Alternate Pathing (AP) is not supported.
Storage devices with more than a single path from a given cluster node to the enclosure are not supported, except for the following storage devices:
Sun StorEdge™ A3500, for which two paths are supported to each of two nodes
Any device that supports Sun StorEdge Traffic Manager
EMC storage devices that use EMC PowerPath software
If you are using a Sun Enterprise™ 420R server with a PCI card in slot J4701, the motherboard must be at dash-level 15 or higher (501-5168-15 or higher). To find the motherboard part number and revision level, look at the edge of the board closest to PCI slot 1.
System panics have been observed in clusters when UDWIS I/O cards are used in slot 0 of a board in a Sun Enterprise 10000 server; do not install UDWIS I/O cards in slot 0 of a board in this server.
When you increase or decrease the number of node attachments to a quorum device, the quorum vote count is not automatically recalculated. To reestablish the correct quorum vote, remove all quorum devices and then add them back into the configuration.
SunVTS™ is not supported.
IPv6 is not supported.
Remote Shared Memory (RSM) transport types are mentioned in the documentation, but are not supported. If you use the RSMAPI, specify dlpi as the transport type.
The SBus Scalable Coherent Interface (SCI) is not supported as a cluster interconnect. However, the PCI-SCI interface is supported.
Logical network interfaces are reserved for use by Sun Cluster software.
Client applications that run on cluster nodes should not map to logical IP addresses of an HA data service. During failover, these logical IP addresses might go away, leaving the client without a connection.
In Solstice DiskSuite/Solaris Volume Manager configurations that use mediators, the number of mediator hosts configured for a diskset must be exactly two.
DiskSuite Tool (Solstice DiskSuite metatool) and the Enhanced Storage module of Solaris Management Console (Solaris Volume Manager) are not compatible with Sun Cluster 3.1 10/03 software.
With VxVM 3.2 or later, Dynamic Multipathing (DMP) cannot be disabled with the scvxinstall command during VxVM installation. This procedure is described in the chapter “Installing and Configuring VERITAS Volume Manager” in Sun Cluster 3.1 10/03 Software Installation Guide. The use of VERITAS Dynamic Multipathing is supported in the following configurations:
A single I/O path per node to the cluster's shared storage.
A supported multipathing solution (Sun StorEdge Traffic Manager, EMC PowerPath, Hitachi HDLM) that manages multiple I/O paths per node to the shared cluster storage.
Simple root disk groups (rootdg created on a single slice of the root disk) are not supported as disk types with VxVM on Sun Cluster 3.1 10/03 software.
Software RAID 5 is not supported.
Quotas are not supported on cluster file systems.
Sun Cluster 3.1 10/03 software does not support the use of the loopback file system (LOFS) on cluster nodes.
The umount -f command behaves in the same manner as the umount command without the -f option: forced unmounts are not supported.
The command unlink(1M) is not supported on non-empty directories.
The command lockfs -d is not supported. Use lockfs -n as a workaround.
The cluster file system does not support any of the file-system features of Solaris software by which one would put a communication endpoint in the file-system namespace. Therefore, although you can create a UNIX domain socket whose name is a path name into the cluster file system, the socket does not survive a node failover. In addition, any FIFOs or named pipes that you create on a cluster file system are not globally accessible, nor should you attempt to use fattach from any node other than the local node.
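For illustration, the sketch below binds a UNIX domain socket to a path on a cluster file system (the /global/fs mount point is hypothetical); the bind succeeds on the local node, but per the restriction above the endpoint does not survive a failover and is not usable from other nodes.

```c
/* A UNIX domain socket bound inside a cluster file system is a local
 * communication endpoint only: after a failover, the socket file may
 * still exist, but no process is listening behind it.  The path
 * /global/fs/app.sock is hypothetical. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int
main(void)
{
    struct sockaddr_un addr;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    memset(&addr, 0, sizeof (addr));
    addr.sun_family = AF_UNIX;
    (void) snprintf(addr.sun_path, sizeof (addr.sun_path),
        "/global/fs/app.sock");
    (void) unlink(addr.sun_path);       /* remove any stale socket file */
    if (bind(fd, (struct sockaddr *)&addr, sizeof (addr)) < 0) {
        perror("bind");
        return (1);
    }
    (void) listen(fd, 5);  /* clients on this node only; not failover-safe */
    (void) close(fd);
    return (0);
}
```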
Executing binaries from cluster file systems that are mounted with the forcedirectio mount option is not supported.
You cannot remount a cluster file system with the directio mount option added at remount time.
You cannot set the directio mount option on a single file by using the directio ioctl.
The following VxFS features are not supported in a Sun Cluster 3.1 10/03 configuration.
Quick I/O
Snapshots
Storage checkpoints
Cache advisories (these can be used, but the effect will be observed on the given node only)
VERITAS CFS (requires VERITAS cluster feature and VCS)
All other VxFS features and options that are supported in a cluster configuration are supported by Sun Cluster 3.1 10/03 software. See VxFS documentation and man pages for details about VxFS options that are or are not supported in a cluster configuration.
The following VxFS-specific mount options are not supported in a Sun Cluster 3.1 10/03 configuration.
convosync (Convert O_SYNC)
mincache
qlog, delaylog, tmplog
For information about administering VxFS cluster file systems in a Sun Cluster configuration, see “Administering Cluster File Systems Overview” in Sun Cluster 3.1 10/03 System Administration Guide.
This section identifies restrictions on using IP Network Multipathing that apply only in a Sun Cluster 3.1 10/03 environment or that differ from the information provided in the Solaris documentation for IP Network Multipathing.
IPv6 is not supported.
All public network adapters must be in IP Network Multipathing groups.
In the /etc/default/mpathd file, do not change TRACK_INTERFACES_ONLY_WITH_GROUPS from yes to no.
Most procedures, guidelines, and restrictions that are identified in the Solaris documentation for IP Network Multipathing are the same in a cluster or a noncluster environment. Therefore, see the appropriate Solaris document for additional information about IP Network Multipathing restrictions.
| Operating Environment Release | For Instructions, Go To... |
|---|---|
| Solaris 8 operating environment | IP Network Multipathing Administration Guide |
| Solaris 9 operating environment | “IP Network Multipathing Topics” in System Administration Guide: IP Series |
Do not configure cluster nodes as routers (gateways). If a node that is acting as a router goes down, clients cannot find an alternate router and cannot recover.
Do not configure cluster nodes as NIS or NIS+ servers. However, cluster nodes can be NIS or NIS+ clients.
Do not use a Sun Cluster configuration to provide a highly available boot or installation service on client systems.
Do not use a Sun Cluster configuration to provide an rarpd service.
If you install an RPC service on the cluster, the service must not use the following program numbers: 100141, 100142, and 100248. These numbers are reserved for the Sun Cluster daemons rgmd_receptionist, fed, and pmfd, respectively. If the RPC service you install also uses one of these program numbers, you must change that RPC service to use a different program number.
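As a quick sanity check before registering a service, something like the following hypothetical helper can flag a collision with the reserved program numbers:

```c
/* Reject RPC program numbers reserved by Sun Cluster for
 * rgmd_receptionist (100141), fed (100142), and pmfd (100248). */
#include <stdio.h>
#include <stddef.h>

static const unsigned long reserved[] = { 100141UL, 100142UL, 100248UL };

static int
prognum_is_reserved(unsigned long prognum)
{
    size_t i;

    for (i = 0; i < sizeof (reserved) / sizeof (reserved[0]); i++) {
        if (reserved[i] == prognum)
            return (1);
    }
    return (0);
}

int
main(void)
{
    unsigned long candidate = 100141UL;  /* example program number */

    if (prognum_is_reserved(candidate))
        fprintf(stderr,
            "program number %lu is reserved by Sun Cluster\n", candidate);
    return (0);
}
```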
Currently, Sun StorEdge Network Data Replicator (SNDR) can be used only with HAStorage. This restriction applies only to the lightweight resource group that includes the logical host that SNDR uses for replication. Application resource groups can still use HAStoragePlus with SNDR. You can use a failover file system with HAStoragePlus and SNDR by using HAStorage for the SNDR resource group and HAStoragePlus for the application resource group, where the HAStorage and HAStoragePlus resources point at the same underlying DCS device. A patch is being developed to enable SNDR to work with HAStoragePlus.
Running processes in high-priority scheduling classes on cluster nodes is not supported. Do not run processes in the time-sharing scheduling class at a high priority or processes in the real-time scheduling class on cluster nodes. Sun Cluster software relies on kernel threads that do not run in the real-time scheduling class; time-sharing processes that run at higher-than-normal priority, or real-time processes, can prevent the Sun Cluster kernel threads from acquiring needed CPU cycles.
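A process can verify that it is not running in a real-time class with a standard POSIX check; in this minimal sketch, SCHED_FIFO and SCHED_RR correspond to the real-time scheduling class:

```c
/* Warn if the calling process is in a real-time scheduling class,
 * which can starve Sun Cluster kernel threads of CPU cycles. */
#include <stdio.h>
#include <sched.h>

int
main(void)
{
    int policy = sched_getscheduler(0);   /* 0 = the calling process */

    if (policy == SCHED_FIFO || policy == SCHED_RR)
        fprintf(stderr, "warning: real-time scheduling class; "
            "not supported on cluster nodes\n");
    return (0);
}
```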
Sun Cluster 3.1 10/03 software can only provide service for those data services that are either supplied with the Sun Cluster product or set up with the Sun Cluster data services API.
Sun Cluster software currently does not have an HA data service for the sendmail(1M) subsystem. The sendmail subsystem can run on the individual cluster nodes, but the sendmail functionality, including mail delivery, routing, queuing, and retry, will not be highly available.
For information about restrictions for specific data services, see Sun Cluster 3.1 Data Services 10/03 Release Notes.
The Sun Cluster HA for Oracle 3.0 data service can run on Sun Cluster 3.1 10/03 software only when used with the following versions of the Solaris operating environment:
Solaris 8, 32-bit version
Solaris 8, 64-bit version
Solaris 9, 32-bit version
The Sun Cluster HA for Oracle 3.0 data service cannot run on Sun Cluster 3.1 10/03 software when used with the 64-bit version of Solaris 9.