This section provides information related to new features, functionality, and supported products in Sun Cluster 3.1.
Sun Cluster Security Hardening uses the Solaris Operating Environment hardening techniques recommended by the Sun BluePrints™ program to achieve basic security hardening for clusters. The Solaris Security Toolkit automates the implementation of Sun Cluster Security Hardening.
The Sun Cluster Security Hardening documentation is available at http://www.sun.com/blueprints/0203/817-1079.pdf. You can also access the article from http://wwws.sun.com/software/security/blueprints. From this URL, scroll down to the Architecture heading to locate the article “Securing the Sun Cluster 3.x Software.” The documentation describes how to secure Sun Cluster 3.1 deployments in a Solaris 8 and Solaris 9 environment. The description includes the use of the Solaris Security Toolkit and other best-practice security techniques recommended by Sun security experts.
Sun Cluster 3.1 software now supports open topologies. You are no longer limited to the storage topologies listed in the Sun Cluster 3.1 Concepts document.
Use the following guidelines to configure your cluster.
Sun Cluster supports a maximum of eight nodes in a cluster, regardless of the storage configurations that you implement.
A shared storage device can connect to as many nodes as the storage device supports.
Shared storage devices do not need to connect to all nodes of the cluster. However, these storage devices must connect to at least two nodes.
Sun Cluster 3.1 now supports cluster configurations of greater than three nodes without shared storage devices. Two-node clusters are still required to have a shared storage device to maintain quorum; this storage device does not need to perform any other function.
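For example, a shared disk can be designated as the quorum device of a two-node cluster with the scconf(1M) command. This is a minimal sketch; the global device name d12 is hypothetical and must correspond to a shared disk in your configuration:

    # scconf -a -q globaldev=d12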
Data services may now be configured to launch under a Solaris project name when brought online using the RGM. For detailed information about planning project configuration for your data service, see the “Data Service Project Configuration” section in “Key Concepts – Administration and Application Development” in Sun Cluster 3.1 Concepts Guide.
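As an illustrative sketch, a resource group can be associated with an existing Solaris project by setting its RG_project_name property with the scrgadm(1M) command. The resource group name my-rg and project name my-proj are hypothetical:

    # scrgadm -c -g my-rg -y RG_project_name=my-proj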
For more information on the support for the Solaris implementation of IP network multipathing on public networks, see “Planning the Sun Cluster Configuration” in Sun Cluster 3.1 Software Installation Guide and “Administering the Public Network” in Sun Cluster 3.1 System Administration Guide.
For more information on how to set a desired number of secondary nodes for a disk device group, see “Administering Disk Device Groups” in Sun Cluster 3.1 System Administration Guide (refer to the procedures for Setting the Desired Number of Secondaries and Changing Disk Device Group Properties). Additional information can also be found in “Cluster Administration and Application Development” in Sun Cluster 3.1 Concepts Guide (see the section on Multi-Ported Disk Failover).
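For example, assuming an existing disk device group with the hypothetical name my-dg, the desired number of secondaries can be set with the scconf(1M) command:

    # scconf -c -D name=my-dg,numsecondaries=2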
For information on data services enhancements, see “What's New in Sun Cluster 3.1 Data Services 5/03” in Sun Cluster 3.1 Data Service 5/03 Release Notes.
This section describes the supported software and memory requirements for Sun Cluster 3.1 software.
Operating environment and patches – Supported Solaris versions and patches are available at the following URL:
For more details, see Patches and Required Firmware Levels.
Volume managers –
On Solaris 8 – Solstice DiskSuite™ 4.2.1 and VERITAS Volume Manager 3.2 and 3.5.
On Solaris 9 – Solaris Volume Manager and VERITAS Volume Manager 3.5.
If you are upgrading from VERITAS Volume Manager (VxVM) 3.2 to 3.5, the Cluster Volume Manager (CVM) feature is not available until you install the CVM license key for version 3.5. In VxVM 3.5, the CVM license key for version 3.2 does not enable CVM and must be upgraded to the CVM license key for version 3.5.
File systems –
On Solaris 8 – Solaris UFS and VERITAS File System 3.4 and 3.5.
On Solaris 9 – Solaris UFS and VERITAS File System 3.5.
Data services (agents) – For information on supported data services, see Sun Cluster 3.1 Data Service 5/03 Release Notes.
Sun Cluster 3.0 data services can run on Sun Cluster 3.1, except as noted in Running Sun Cluster HA for Oracle 3.0 on Sun Cluster 3.1.
Memory Requirements – Sun Cluster 3.1 software requires extra memory beyond what is configured for a node under a normal workload. The extra memory equals 128 Mbytes plus 10 percent of the node's normal requirement. For example, if a standalone node normally requires 1 Gbyte of memory, plan for an extra 256 Mbytes (128 Mbytes plus roughly 102 Mbytes, rounded up) to meet the memory requirement.
RSMAPI – Sun Cluster 3.1 software supports the Remote Shared Memory Application Programming Interface (RSMAPI) on RSM-capable interconnects, such as PCI-SCI.
The following restrictions apply to the Sun Cluster 3.1 release:
svc_default_stksize and lwp_default_stksize parameters – To avoid stack overflow, set the rpcmod:svc_default_stksize parameter to 0x8000 and the lwp_default_stksize parameter to 0x6000 in the /etc/system file.
If any VxFS package or patch is added, make sure that the settings for these parameters in the /etc/system file match the values shown above.
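For example, the corresponding entries in the /etc/system file are the following. A reboot is required for changes to the /etc/system file to take effect.

    set rpcmod:svc_default_stksize=0x8000
    set lwp_default_stksize=0x6000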
local-mac-address? variable – The local-mac-address? variable must have a value of true for Ethernet adapters. This is a reversal of the Sun Cluster 3.0 software requirement, which was to set this variable to a value of false.
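For example, the variable can be set from a running system with the eeprom(1M) command (the quotation marks protect the question mark from the shell), or from the OpenBoot PROM ok prompt; the new value takes effect at the next boot:

    # eeprom "local-mac-address?=true"

    ok setenv local-mac-address? true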
Remote Shared Memory (RSM) transport types – These transport types are mentioned in the documentation, but are not supported. If you use the RSMAPI, specify dlpi as the transport type.
Scalable Coherent Interface (SCI) – The SBus SCI interface is not supported as a cluster interconnect. However, the PCI-SCI interface is supported.
Logical network interfaces – These interfaces are reserved for use by Sun Cluster 3.1 software.
Disk path monitoring – Sun Cluster software monitors for failures only the active disk paths on the current primary node. You must monitor other disk paths manually to avoid a double failure or the loss of the path to a quorum device.
Storage devices with more than two physical paths to the enclosure – More than two paths are not supported, except for the following: the Sun StorEdge™ A3500, for which two paths to each of two nodes are supported; any device that supports Sun StorEdge Traffic Manager; and EMC storage devices that use EMC PowerPath software.
SunVTS™ – Not supported.
Multihost tape, CD-ROM, and DVD-ROM – Not supported.
Loopback File System – Sun Cluster 3.1 software does not support the use of the loopback file system (LOFS) on cluster nodes.
Running client applications on the cluster nodes – Client applications that run on cluster nodes should not map to logical IP addresses of an HA data service. During failover, these logical IP addresses might go away, leaving the client without a connection.
Running high-priority process scheduling classes on cluster nodes – Not supported. Do not run processes in the time-sharing scheduling class at a high priority, or processes in the real-time scheduling class, on cluster nodes. Sun Cluster 3.1 software relies on kernel threads that do not run in the real-time scheduling class. Time-sharing processes that run at higher-than-normal priority, or real-time processes, can prevent the Sun Cluster kernel threads from acquiring needed CPU cycles.
Upgrade from Solaris 8 to Solaris 9 – Upgrading from Solaris 8 to Solaris 9 software on a Sun Cluster configuration is not supported. You can upgrade only to subsequent, compatible versions of the Solaris 8 operating environment. To run Sun Cluster 3.1 software on the Solaris 9 operating environment, you must perform a new installation of the Solaris 9 version of Sun Cluster 3.1 software after the nodes are installed with Solaris 9 software.
IPv6 – Not supported.
SNDR cannot be used with HAStoragePlus – Currently, SNDR can be used only with HAStorage. This restriction applies only to the lightweight resource group that contains the logical host that SNDR uses for replication. Application resource groups can still use HAStoragePlus with SNDR. You can use a failover file system with HAStoragePlus and SNDR by using HAStorage for the SNDR resource group and HAStoragePlus for the application resource group, with the HAStorage and HAStoragePlus resources pointing at the same underlying DCS device. A patch is being developed to enable SNDR to work with HAStoragePlus.
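The following minimal sketch shows this arrangement with the scrgadm(1M) command. The resource names (sndr-stor, app-stor), resource group names (sndr-rg, app-rg), and the global device path /dev/global/rdsk/d4s2 are hypothetical; the sketch assumes the SUNW.HAStorage and SUNW.HAStoragePlus resource types are already registered:

    # scrgadm -a -j sndr-stor -g sndr-rg -t SUNW.HAStorage \
          -x ServicePaths=/dev/global/rdsk/d4s2
    # scrgadm -a -j app-stor -g app-rg -t SUNW.HAStoragePlus \
          -x GlobalDevicePaths=/dev/global/rdsk/d4s2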
Mounting options – (1) You cannot remount a file system with the directio mount option added at remount time, and (2) you cannot set the directio mount option on a single file by using the directio ioctl.
License key – The license key can be installed only by using the interactive form of scvxinstall or by using the scvxinstall -e option.
Other restrictions – For other known problems or restrictions, see Known Issues and Bugs.
Sun Cluster 3.1 software can provide service only for data services that are either supplied with the Sun Cluster product or set up with the Sun Cluster data services API.
Sun Cluster software currently does not provide an HA data service for the sendmail(1M) subsystem. You can run sendmail on the individual cluster nodes, but the sendmail functionality, including mail delivery, routing, queuing, and retry, will not be highly available.
Do not configure cluster nodes as routers (gateways). If the system goes down, the clients cannot find an alternate router and cannot recover.
Do not configure cluster nodes as NIS or NIS+ servers. However, cluster nodes can be NIS or NIS+ clients.
Do not use a Sun Cluster configuration to provide a highly available boot or install service on client systems.
Do not use a Sun Cluster 3.1 configuration to provide an rarpd service.
Alternate Pathing (AP) is not supported.
If you are using a Sun Enterprise™ 420R server with a PCI card in slot J4701, the motherboard must be at dash-level 15 or higher (501-5168-15 or higher). To find the motherboard part number and revision level, look at the edge of the board closest to PCI slot 1.
System panics have been observed in clusters when UDWIS I/O cards are used in slot 0 of a board in a Sun Enterprise 10000 server; do not install UDWIS I/O cards in slot 0 of a board in this server.
In Solstice DiskSuite/Solaris Volume Manager configurations that use mediators, the number of mediator hosts configured for a diskset must be exactly two.
DiskSuite Tool (Solstice DiskSuite metatool) and the Enhanced Storage module of Solaris Management Console (Solaris Volume Manager) are not compatible with Sun Cluster 3.1 software.
Use of VxVM Dynamic Multipathing (DMP) with Sun Cluster 3.1 software to manage multiple paths from the same node is not supported. From VxVM 3.2 onward, the installation of DMP can no longer be disabled, but its presence in the I/O stack poses no problem on systems with only a single path per node. However, if you use VxVM in a configuration with multiple paths per node, you must use another multipathing solution, such as MPxIO or EMC PowerPath.
Simple root disk groups (rootdg created on a single slice of the root disk) are not supported as disk types with VxVM on Sun Cluster 3.1 software.
Software RAID 5 is not supported.
Quotas are not supported by Sun Cluster file systems.
The umount -f command behaves in the same manner as the umount command without the -f option; forced unmounts are not supported.
The unlink(1M) command is not supported on non-empty directories.
The lockfs -d command is not supported. Use lockfs -n as a workaround.
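For example, on a cluster file system mounted at the hypothetical mount point /global/data:

    # lockfs -n /global/data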
The cluster file system does not support any of the Solaris file-system features that place a communication endpoint in the file-system name space. Therefore, although you can create a UNIX domain socket whose name is a path name in the cluster file system, the socket does not survive a node failover. In addition, any FIFOs or named pipes that you create on a cluster file system are not globally accessible, and you should not attempt to use fattach from any node other than the local node.
Executing binaries from file systems that are mounted by using the forcedirectio mount option is not supported.
The following VxFS features are not supported in a Sun Cluster 3.1 configuration.
Quick I/O
Snapshots
Storage checkpoints
Cache advisories (these can be used, but the effect will be observed on the given node only)
VERITAS CFS (requires VERITAS cluster feature and VCS)
All other VxFS features and options that are supported in a cluster configuration are supported by Sun Cluster 3.1 software. See VxFS documentation and man pages for details about VxFS options that are or are not supported in a cluster configuration.
The following VxFS-specific mount options are not supported in a Sun Cluster 3.1 configuration.
convosync (Convert O_SYNC)
mincache
qlog, delaylog, tmplog
For information about administering VxFS cluster file systems in a Sun Cluster configuration, see “Administering Cluster File Systems Overview” in Sun Cluster 3.1 System Administration Guide.
This section identifies any restrictions on using IP Network Multipathing that apply only in a Sun Cluster 3.1 environment or that differ from the information provided in the Solaris documentation for IP Network Multipathing.
IPv6 is not supported.
All public network adapters must be in IP Network Multipathing groups.
In /etc/default/mpathd, do not change TRACK_INTERFACES_ONLY_WITH_GROUPS from yes to no.
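For example, a public network adapter is placed in a multipathing group with the ifconfig(1M) command; the adapter name hme0 and group name sc_ipmp0 are hypothetical:

    # ifconfig hme0 group sc_ipmp0

The corresponding line in /etc/default/mpathd must remain at its default:

    TRACK_INTERFACES_ONLY_WITH_GROUPS=yes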
For known bugs and issues, see Create IPMP Group Option Overwrites hostname.int (4731768).
Most procedures, guidelines, and restrictions identified in the Solaris documentation for IP Network Multipathing are the same in a cluster or non-cluster environment. Therefore, see the appropriate Solaris document for additional information about IP Network Multipathing restrictions.
For instructions, see the document that corresponds to your operating environment release:

Solaris 8 operating environment – IP Network Multipathing Administration Guide

Solaris 9 operating environment – “IP Network Multipathing Topics” in System Administration Guide: IP Series
There are no restrictions that apply to all data services. For information about restrictions for specific data services, see Sun Cluster 3.1 Data Service 5/03 Release Notes.
The Sun Cluster HA for Oracle 3.0 data service can run on Sun Cluster 3.1 only when used with the following versions of the Solaris operating environment:
Solaris 8, 32-bit version
Solaris 8, 64-bit version
Solaris 9, 32-bit version
The Sun Cluster HA for Oracle 3.0 data service cannot run on Sun Cluster 3.1 when used with the 64-bit version of Solaris 9.