This section discusses errors and omissions in the documentation, online help, and man pages for the Sun Cluster 3.2 release.
This section discusses errors and omissions in the Sun Cluster Concepts Guide for Solaris OS.
In the section Sun Cluster Topologies for x86 in Sun Cluster Concepts Guide for Solaris OS, the following statement is out of date for the Sun Cluster 3.2 release: "Sun Cluster that is composed of x86 based systems supports two nodes in a cluster."
The statement should instead read as follows: "A Sun Cluster configuration that is composed of x86 based systems supports up to eight nodes in a cluster that runs Oracle RAC, or supports up to four nodes in a cluster that does not run Oracle RAC."
This section discusses errors and omissions in the Sun Cluster Software Installation Guide for Solaris OS.
If you upgrade a cluster that also runs Sun Cluster Geographic Edition software, there are additional preparation steps that you must perform before you begin the Sun Cluster software upgrade. These steps include shutting down the Sun Cluster Geographic Edition infrastructure. Go instead to Chapter 4, Upgrading the Sun Cluster Geographic Edition Software, in Sun Cluster Geographic Edition Installation Guide. These procedures document when to return to the Sun Cluster Software Installation Guide to perform the Sun Cluster software upgrade.
This section discusses errors and omissions in the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
In Resource Type Properties in Sun Cluster Data Services Planning and Administration Guide for Solaris OS, the description of the Failover resource property is missing a statement concerning support of scalable services on non-global zones. This support applies to resources for which the Failover property of the resource type is set to FALSE and the Scalable property of the resource is set to TRUE. This combination of property settings indicates a scalable service that uses a SharedAddress resource to do network load balancing. In the Sun Cluster 3.2 release, you can configure a scalable service of this type in a resource group that runs in a non-global zone. But you cannot configure a scalable service to run in multiple non-global zones on the same node.
This section discusses errors and omissions in the Sun Cluster Data Service for MaxDB Guide for Solaris OS.
The Sun Cluster Data Service for MaxDB supports non-global zones on SPARC and x86 based systems. The following changes should be made to the Sun Cluster Data Service for MaxDB Guide to document this support. The following steps can be performed on a cluster that has been configured to run in global zones. If you are installing your cluster to run in non-global zones, some of these steps might not be necessary, as indicated below.
On each zone, ensure that all of the network resources are present in the /etc/hosts file to avoid any failures because of name service lookup.
On each zone, create an entry for the MaxDB group in the /etc/group file, and add potential users to the group.
On each zone, create an entry for the MaxDB user ID.
Use the following command to update the /etc/passwd and /etc/shadow files with an entry for the user ID.
# useradd -u uid -g group -d /sap-home maxdb-user
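For example, if the global zone defines the MaxDB operating-system user as sdbadm with UID 1001 in group sdba (these names and values are hypothetical; use the values that already exist in your global zone), the corresponding command would be the following:

# useradd -u 1001 -g sdba -d /export/home/sdbadm sdbadm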
Create mount point directories in the zones where MaxDB could potentially run.
Configure the /etc/nsswitch.conf file so that Sun Cluster HA for MaxDB starts and stops correctly in the event of a switchover or a failover.
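As a minimal illustration only, if NIS is the network name service and the data-service guide directs you to list files before the network name service for the password and group databases (verify in the guide exactly which databases to modify), the relevant /etc/nsswitch.conf lines might read:

passwd: files nis
group: files nis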
On each zone, update the /etc/services file with all necessary MaxDB ports obtained from the global zone's /etc/services file. This step might not be necessary for MaxDB that is being installed in non-global zones.
Copy /etc/opt/sdb from the global zone to all local zone nodes. This step might not be necessary for MaxDB that is being installed in non-global zones.
Copy /var/spool/sql from the global zone to all local zone nodes. This step might not be necessary for MaxDB that is being installed in non-global zones.
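As an illustration only, for a hypothetical zone whose zone path is /zones/zone1, both directories could be copied from the global zone as follows:

# cp -rp /etc/opt/sdb /zones/zone1/root/etc/opt/
# cp -rp /var/spool/sql /zones/zone1/root/var/spool/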
On x86 based systems only, execute crle -64 -u -l /sapmnt/MaxDBSystemName/exe on all local zones that will run MaxDB.
This section discusses errors and omissions in the Sun Cluster Data Service for SAP Guide for Solaris OS.
The Sun Cluster Data Service for SAP supports non-global zones on SPARC and x86 based systems. The following changes should be made to the Sun Cluster Data Service for SAP Guide to document this support. The following steps can be performed on a cluster that has been configured to run in global zones. If you are installing your cluster to run in non-global zones, some of these steps might not be necessary, as indicated below.
On each zone, ensure that all of the network resources are present in the /etc/hosts file to avoid any failures because of name service lookup.
On each zone, create an entry for the SAP group in the /etc/group file, and add potential users to the group.
On each zone, create an entry for the SAP user ID.
Use the following command to update the /etc/passwd and /etc/shadow files with an entry for the user ID.
# useradd -u uid -g group -d /sap-home sap-user
Create mount point directories in the zones where SAP could potentially run.
Configure the /etc/nsswitch.conf file so that Sun Cluster HA for SAP starts and stops correctly in the event of a switchover or a failover.
On each zone, update the /etc/services file with all necessary SAP ports obtained from the global zone's /etc/services file. This step might not be necessary for SAP that is being installed in non-global zones.
On x86 based systems only, execute crle -64 -u -l /sapmnt/SAPSystemName/exe on all local zones that will run SAP.
This section discusses errors and omissions in the Sun Cluster Data Service for SAP liveCache Guide for Solaris OS.
The Sun Cluster Data Service for SAP liveCache supports non-global zones on SPARC and x86 based systems. The following changes should be made to the Sun Cluster Data Service for SAP liveCache Guide to document this support. The following steps can be performed on a cluster that has been configured to run in global zones. If you are installing your cluster to run in non-global zones, some of these steps might not be necessary, as indicated below.
On each zone, ensure that all of the network resources are present in the /etc/hosts file to avoid any failures because of name service lookup.
On each zone, create an entry for the SAP liveCache group in the /etc/group file, and add potential users to the group.
On each zone, create an entry for the SAP liveCache user ID.
Use the following command to update the /etc/passwd and /etc/shadow files with an entry for the user ID.
# useradd -u uid -g group -d /sap-home sap-user
Create mount point directories in the zones where SAP liveCache could potentially run.
Configure the /etc/nsswitch.conf file so that Sun Cluster HA for SAP liveCache starts and stops correctly in the event of a switchover or a failover.
On each zone, update the /etc/services file with all necessary SAP liveCache ports obtained from the global zone's /etc/services file. This step might not be necessary for SAP liveCache that is being installed in non-global zones.
Copy /etc/opt/sdb from the global zone to all local zone nodes. This step might not be necessary for SAP liveCache that is being installed in non-global zones.
Copy /var/spool/sql from the global zone to all local zone nodes. This step might not be necessary for SAP liveCache that is being installed in non-global zones.
On x86 based systems only, execute crle -64 -u -l /sapmnt/SAPSystemName/exe on all local zones that will run SAP liveCache.
This section discusses errors and omissions in the Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS.
In SAP 7.0 and NW2004SR1, when a SAP instance is started, the sapstartsrv process is started by default. The sapstartsrv process is not under the control of Sun Cluster HA for SAP Web Application Server. So, when a SAP instance is stopped or failed over by Sun Cluster HA for SAP Web Application Server, the sapstartsrv process is not stopped.
To avoid starting the sapstartsrv process when a SAP instance is started by Sun Cluster HA for SAP Web Application Server, you must modify the startsap script. In addition, rename the /etc/rc3.d/S90sapinit file to /etc/rc3.d/xxS90sapinit on all the Sun Cluster nodes.
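For example, the rename can be performed as follows on each node:

# mv /etc/rc3.d/S90sapinit /etc/rc3.d/xxS90sapinit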
The Sun Cluster Data Service for SAP Web Application Server supports non-global zones on SPARC and x86 based systems. The following changes should be made to the Sun Cluster Data Service for SAP Web Application Server Guide to document this support. The following steps can be performed on a cluster that has been configured to run in global zones. If you are installing your cluster to run in non-global zones, some of these steps might not be necessary, as indicated below.
On each zone, ensure that all of the network resources are present in the /etc/hosts file to avoid any failures because of name service lookup.
On each zone, create an entry for the SAP Web Application Server group in the /etc/group file, and add potential users to the group.
On each zone, create an entry for the SAP Web Application Server user ID.
Use the following command to update the /etc/passwd and /etc/shadow files with an entry for the user ID.
# useradd -u uid -g group -d /sap-home sap-user
Create mount point directories in the zones where SAP Web Application Server could potentially run.
Configure the /etc/nsswitch.conf file so that Sun Cluster HA for SAP Web Application Server starts and stops correctly in the event of a switchover or a failover.
On each zone, update the /etc/services file with all necessary SAP ports obtained from the global zone's /etc/services file. This step might not be necessary for SAP Web Application Server that is being installed in non-global zones.
On x86 based systems only, execute crle -64 -u -l /sapmnt/SAPSystemName/exe on all local zones that will run SAP Web Application Server.
Use the following procedure to configure a HAStoragePlus resource for non-global zones.
The entries in the /etc/vfstab file for cluster file systems should contain the global keyword in the mount options.
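For example, a cluster file system entry in /etc/vfstab might look like the following line, where the device names and mount point are hypothetical and the file system type and mount options must be adjusted to your configuration:

/dev/md/sap-dg/dsk/d6 /dev/md/sap-dg/rdsk/d6 /sapmnt/JSC ufs 2 yes global,logging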
The SAP binaries that will be made highly available using the HAStoragePlus resource should be accessible from the non-global zones.
In non-global zones, file systems that are used by different resources in different resource groups must reside in a single HAStoragePlus resource that resides in a scalable resource group. The nodelist of the scalable HAStoragePlus resource group must be a superset of the nodelists of the application resource groups that have resources which depend on the file systems. These application resources that depend on the file systems must have a strong resource dependency set to the HAStoragePlus resource. In addition, the dependent application resource group must have a strong positive resource group affinity set to the scalable HAStoragePlus resource group.
On any node in the cluster, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
Create the scalable resource group with non-global zones that contain the HAStoragePlus resource.
# clresourcegroup create \
-p Maximum_primaries=m \
-p Desired_primaries=n \
[-n node-zone-list] hasp-resource-group
Specifies the maximum number of active primaries for the resource group.
Specifies the number of active primaries on which the resource group should attempt to start.
Specifies the list of nodename:zonename pairs to use as the node list of the HAStoragePlus resource group, where the SAP instances can come online.
Specifies the name of the scalable resource group to be added. This name must begin with an ASCII character.
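The following invocation is a hypothetical example of this step for a two-node cluster in which the non-global zone zone1 is configured on each node (the node, zone, and resource group names are examples only):

# clresourcegroup create \
-p Maximum_primaries=2 \
-p Desired_primaries=2 \
-n phys-schost-1:zone1,phys-schost-2:zone1 hasp-resource-group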
Register the resource type for the HAStoragePlus resource.
# clresourcetype register HAStoragePlus
Create the HAStoragePlus resource hasp-resource and define the SAP filesystem mount points and global device paths.
# clresource create -g hasp-resource-group -t SUNW.HAStoragePlus \
-p GlobalDevicePaths=/dev/global/dsk/d5s2,dsk/d6 \
-p affinityon=false \
-p FilesystemMountPoints=/sapmnt/JSC,/usr/sap/trans,/usr/sap/JSC hasp-resource
Specifies the resource group name.
The GlobalDevicePaths extension property contains the following values:
Global device group names, such as sap-dg, dsk/d5
Paths to global devices, such as /dev/global/dsk/d5s2, /dev/md/sap-dg/dsk/d6
The FilesystemMountPoints extension property contains the following values:
Mount points of local or cluster file systems, such as /local/mirrlogA,/local/mirrlogB,/sapmnt/JSC,/usr/sap/JSC
The HAStoragePlus resource is created in the enabled state.
Register the resource type for the SAP application.
# clresourcetype register resource-type
Specifies the name of the resource type to be added. For more information, see Supported Products.
Create a SAP resource group.
# clresourcegroup create [-n node-zone-list] -p RG_affinities=++hasp-resource-group resource-group-1
Specifies the SAP services resource group.
Add the SAP application resource to resource-group-1 and set the dependency to hasp-resource.
# clresource create -g resource-group-1 -t SUNW.application \
[-p "extension-property[{node-specifier}]"=value,…] \
-p Resource_dependencies=hasp-resource resource
Bring the failover resource group online.
# clresourcegroup online resource-group-1
This section discusses errors and omissions in the Sun Cluster System Administration Guide for Solaris OS.
Use this procedure to run an application outside the cluster for testing purposes.
Determine if the quorum device is used in the Solaris Volume Manager metaset, and determine if the quorum device uses scsi2 or scsi3 reservations.
# clquorum show
If the quorum device is in the Solaris Volume Manager metaset, add a new quorum device that is not part of the metaset that you will take later in non-cluster mode.
# clquorum add did
Remove the old quorum device.
# clquorum remove did
If the quorum device uses a scsi2 reservation, scrub the scsi2 reservation from the old quorum and verify that there are no scsi2 reservations remaining.
# /usr/cluster/lib/sc/pgre -c pgre_scrub -d /dev/did/rdsk/dids2
# /usr/cluster/lib/sc/pgre -c pgre_inkeys -d /dev/did/rdsk/dids2
Evacuate the node you want to boot in non-cluster mode.
# clresourcegroup evacuate -n targetnode
Take offline any resource group or resource groups that contain HAStorage or HAStoragePlus resources and that contain devices or file systems affected by the metaset that you want to take later in non-cluster mode.
# clresourcegroup offline resourcegroupname
Disable all the resources in the resource groups you took offline.
# clresource disable resourcename
Unmanage the resource groups.
# clresourcegroup unmanage resourcegroupname
Take offline the corresponding device group or device groups.
# cldevicegroup offline devicegroupname
Disable the device group or device groups.
# cldevicegroup disable devicegroupname
Boot the passive node into non-cluster mode.
# reboot -x
Verify that the boot process has completed on the passive node before proceeding.
Solaris 9
The login prompt will only appear after the boot process has completed, so no action is required.
Solaris 10
# svcs -x
Determine whether there are any scsi3 reservations on the disks in the metaset or metasets. Run the following command on each disk in the metasets.
# /usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/dids2
If there are any scsi3 reservations on the disks, scrub them.
# /usr/cluster/lib/sc/scsi -c scrub -d /dev/did/rdsk/dids2
Take the metaset on the evacuated node.
# metaset -s name -C take -f
Mount the file system or file systems that reside on the devices in the metaset.
# mount device mountpoint
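For example, if the metaset sap-dg contains the device d6 and the file system is normally mounted at /sapmnt/JSC (hypothetical values), the command would be the following:

# mount /dev/md/sap-dg/dsk/d6 /sapmnt/JSC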
Start the application and perform the desired test. After finishing the test, stop the application.
Reboot the node and wait until the boot process has finished.
# reboot
Bring online the device group or device groups.
# cldevicegroup online -e devicegroupname
Start the resource group or resource groups.
# clresourcegroup online -eM resourcegroupname
Sun Cluster supports Solaris IP Filtering with the following restrictions:
Only failover data services are supported.
Sun Cluster does not support IP Filtering with scalable data services.
Only stateless filtering is supported.
NAT routing is not supported.
Use of NAT for translation of local addresses is supported. NAT translation rewrites packets on-the-wire and is therefore transparent to the cluster software.
In the /etc/iu.ap file, modify the public NIC entries to list clhbsndr pfil as the module list.
The pfil module must be the last module in the list.
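For example, if the public network adapters use the e1000g driver (the driver name is hypothetical; substitute the driver that your public adapters use), the modified /etc/iu.ap entry might read as follows, with pfil last in the module list:

e1000g -1 0 clhbsndr pfil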
If you have the same type of adapter for private and public network, your edits to the /etc/iu.ap file will push pfil to the private network streams. However, the cluster transport module will automatically remove all unwanted modules at stream creation, so pfil will be removed from the private network streams.
To ensure that the IP filter works in non-cluster mode, update the /etc/ipf/pfil.ap file.
The updates to this file are slightly different from the updates to the /etc/iu.ap file. See the IP Filter documentation for more information.
Reboot all affected nodes.
You can boot the nodes in a rolling fashion.
Add filter rules to the /etc/ipf/ipf.conf file on all affected nodes. For information about IP filter rule syntax, see ipf(4). An illustrative rule set appears after the following guidelines.
Keep in mind the following guidelines and requirements when you add filter rules to Sun Cluster nodes.
Sun Cluster fails over network addresses from node to node. No special procedure or code is needed at the time of failover.
All filtering rules that reference IP addresses of logical hostname and shared address resources must be identical on all cluster nodes.
Rules on a standby node reference a nonexistent IP address. These rules are still part of the IP filter's active rule set and become effective when the node receives the address after a failover.
All filtering rules must be the same for all NICs in the same IPMP group. In other words, if a rule is interface-specific, the same rule must also exist for all other interfaces in the same IPMP group.
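The following rules are an illustrative sketch only. The address 192.168.10.50 stands for the IP address of a LogicalHostname or SharedAddress resource and the port is an example; because only stateless filtering is supported, the rules do not use keep state. The same rules would be added on every cluster node:

pass in quick proto tcp from any to 192.168.10.50 port = 80
block in quick from any to 192.168.10.50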
Enable the ipfilter SMF service.
# svcadm enable /network/ipfilter:default
This section discusses errors and omissions in the Sun Cluster Data Services Developer’s Guide for Solaris OS.
In Resource Type Properties in Sun Cluster Data Services Developer’s Guide for Solaris OS, the description of the Failover resource property is missing a statement concerning support of scalable services on non-global zones. This support applies to resources for which the Failover property of the resource type is set to FALSE and the Scalable property of the resource is set to TRUE. This combination of property settings indicates a scalable service that uses a SharedAddress resource to do network load balancing. In the Sun Cluster 3.2 release, you can configure a scalable service of this type in a resource group that runs in a non-global zone. But you cannot configure a scalable service to run in multiple non-global zones on the same node.
A description of the change in the behavior of method timeouts in the Sun Cluster 3.2 release is missing. If an RGM method callback times out, the process is now killed by using the SIGABRT signal instead of the SIGTERM signal. This causes all members of the process group to generate a core file.
Avoid writing a data-service method that creates a new process group. If your data service method does need to create a new process group, also write a signal handler for the SIGTERM and SIGABRT signals. Write the signal handlers to forward the SIGTERM or SIGABRT signal to the child process group before the signal handler terminates the parent process. This increases the likelihood that all processes that are spawned by the method are properly terminated.
Chapter 12, Cluster Reconfiguration Notification Protocol, in Sun Cluster Data Services Developer’s Guide for Solaris OS is missing the statement that, on the Solaris 10 OS, the Cluster Reconfiguration Notification Protocol (CRNP) runs only in the global zone.
In Setting Up the Development Environment for Writing a Data Service in Sun Cluster Data Services Developer’s Guide for Solaris OS, there is a Note that the Solaris software group Developer or Entire Distribution is required. This statement applies to the development machine. But because it is positioned after a statement about testing the data service on a cluster, it might be misread as a requirement for the cluster that the data service is being run on.
This section discusses errors and omissions in the Sun Cluster Quorum Server User’s Guide.
The following installation requirements and guidelines are missing or unclear:
Solaris software requirements for Sun Cluster software apply as well to Quorum Server software.
The supported hardware platforms for a quorum server are the same as for a cluster node.
A quorum server does not have to be configured on the same hardware and software platform as the cluster or clusters that it provides quorum to. For example, an x86 based machine that runs the Solaris 9 OS can be configured as a quorum server for a SPARC based cluster that runs the Solaris 10 OS.
A quorum server can be configured on a cluster node to provide quorum for clusters other than the cluster that the node belongs to. However, a quorum server that is configured on a cluster node is not highly available.
This section discusses errors, omissions, and additions in the Sun Cluster man pages.
The following revised Synopsis and added Options sections of the ccp(1M) man page document the addition of Secure Shell support to the Cluster Control Panel (CCP) utilities:
SYNOPSIS
$CLUSTER_HOME/bin/ccp [-s] [-l username] [-p ssh-port] {clustername | nodename}
OPTIONS
The following options are supported:
-l username
Specifies the user name for the ssh connection. This option is passed to the cconsole, crlogin, or cssh utility when the utility is launched from the CCP. The ctelnet utility ignores this option.
If the -l option is not specified, the user name that launched the CCP is effective.
-p ssh-port
Specifies the Secure Shell port number to use. This option is passed to the cssh utility when the utility is launched from the CCP. The cconsole, crlogin, and ctelnet utilities ignore this option.
If the -p option is not specified, the default port number 22 is used for secure connections.
-s
Specifies using Secure Shell connections to node consoles instead of telnet connections. This option is passed to the cconsole utility when the utility is launched from the CCP. The crlogin, cssh, and ctelnet utilities ignore this option.
If the -s option is not specified, the cconsole utility uses telnet connections to the consoles.
To override the -s option, deselect the Use SSH checkbox in the Options menu of the cconsole graphical user interface (GUI).
The following revised Synopsis and added Options sections of the combined cconsole, crlogin, cssh, and ctelnet man page document the addition of Secure Shell support to the Cluster Control Panel utilities:
SYNOPSIS
$CLUSTER_HOME/bin/cconsole [-s] [-l username] [clustername… | nodename…]
$CLUSTER_HOME/bin/crlogin [-l username] [clustername… | nodename…]
$CLUSTER_HOME/bin/cssh [-l username] [-p ssh-port] [clustername… | nodename…]
$CLUSTER_HOME/bin/ctelnet [clustername… | nodename…]
DESCRIPTION
The cssh utility establishes Secure Shell connections directly to the cluster nodes.
OPTIONS
-l username
Specifies the ssh user name for the remote connections. This option is valid with the cconsole, crlogin, and cssh commands.
The argument value is remembered so that clusters and nodes that are specified later use the same user name when making connections.
If the -l option is not specified, the user name that launched the command is effective.
-p ssh-port
Specifies the Secure Shell port number to use. This option is valid with the cssh command.
If the -p option is not specified, the default port number 22 is used for secure connections.
-s
Specifies using Secure Shell connections instead of telnet connections to node consoles. This option is valid with the cconsole command.
If the -s option is not specified, the utility uses telnet connections to the consoles.
To override the -s option from the cconsole graphical user interface (GUI), deselect the Use SSH checkbox in the Options menu.
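For example, the following hypothetical invocation opens Secure Shell console connections to every node of the cluster schost as user admin1 (the cluster and user names are examples):

$CLUSTER_HOME/bin/cconsole -s -l admin1 schost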
In the clnode(1CL) man page, the description of the remove subcommand implies that the subcommand will not work when certain conditions exist. Instead, the subcommand executes under these conditions, but the results might adversely affect the cluster. The following is a more accurate description of the remove subcommand's requirements and behavior:
To remove a node from a cluster, observe the following guidelines. If you do not observe these guidelines, the removal of a node might compromise quorum in the cluster.
Unconfigure the node to be removed from any quorum devices, unless you also specify the -f option.
Ensure that the node to be removed is not an active cluster member.
Do not remove a node from a three-node cluster unless at least one shared quorum device is configured.
The clnode remove command attempts to remove a subset of references to the node from the cluster configuration database. If the -f option is also specified, the subcommand attempts to remove all references to the node.
Before you can successfully use the clnode remove command to remove a node from the cluster, you must first use the claccess add command to add the node to the cluster authentication list, if it is not already in the list. Use the claccess list or claccess show command to view the current cluster authentication list. Afterwards, for security, use the claccess deny-all command to prevent further access to the cluster configuration by any node. For more information, see the claccess(1CL) man page.
The following option is missing from the clresource(1CL) man page:
-u
Specifies that the command operates on resources whose resource group is suspended, if you specify the + operand. If you do not also specify the -u option when you specify the + operand, the command ignores all resources whose resource group is suspended.
The -u option is valid when the + operand is specified to the clear, disable, enable, monitor, set, and unmonitor subcommands.
The description of the + operand should state that, when used with the clear, disable, enable, monitor, set, or unmonitor subcommand, the command ignores all resources whose resource group is suspended, unless you also specify the -u option.
The examples provided in the definitions of the + and - operators for the -p, -x, and -y options are incorrect. The definitions should be as follows:
Adds a value or values to a string array value. Only the set subcommand accepts this operator. You can specify this operator only for the properties that accept lists of string values, for example Resource_dependencies.
Deletes a value or values from a string array value. Only the set subcommand accepts this operator. You can specify this operator only for properties that accept lists of string values, for example Resource_dependencies.
The command syntax and description for the evacuate subcommand incorrectly state that you can evacuate more than one node or zone in the same command invocation. Instead, you can specify only one node or zone in the evacuate command.
The following option is missing from the clresourcegroup(1CL) man page:
-u
Specifies that the command operates on suspended resource groups, if you specify the + operand. If you do not also specify the -u option when you specify the + operand, the command ignores all suspended resource groups.
The -u option is valid when the + operand is specified to the add-node, manage, offline, online, quiesce, remaster, remove-node, restart, set, switch, and unmanage subcommands.
The description of the + operand should state that, when used with the add-node, manage, offline, online, quiesce, remaster, remove-node, restart, set, switch, or unmanage subcommand, the command ignores all suspended resource groups, unless you also specify the -u option.
The use of the Network_resources_used property has changed in the Sun Cluster 3.2 release. If you do not assign a value to this property, its value is updated automatically by the RGM, based on the setting of the resource-dependencies properties. You do not need to set this property directly. Instead, set the Resource_dependencies, Resource_dependencies_offline_restart, Resource_dependencies_restart, or Resource_dependencies_weak property.
To maintain compatibility with earlier releases of Sun Cluster software, you can still set the value of the Network_resources_used property directly. If you do, the value of the Network_resources_used property is no longer derived from the settings of the resource-dependencies properties.
If you add a resource name to the Network_resources_used property, the resource name is automatically added to the Resource_dependencies property as well. The only way to remove that dependency is to remove it from the Network_resources_used property. If you are not sure whether a network-resource dependency was originally added to the Resource_dependencies property or to the Network_resources_used property, remove the dependency from both properties. For example, the following command removes a dependency of resource r1 upon network resource r2, regardless of whether the dependency was added to the Network_resources_used property or to the Resource_dependencies property:
# clresource set -p Network_resources_used-=r2 -p Resource_dependencies-=r2 r1
The r_properties(5) man page contains incorrect descriptions of the Resource_dependencies, Resource_dependencies_offline_restart, Resource_dependencies_restart, and Resource_dependencies_weak properties. For correct descriptions of these properties, instead see Resource Properties in Sun Cluster Data Services Developer’s Guide for Solaris OS.
The description of the Scalable resource property is missing a statement concerning support of scalable services on non-global zones. This support applies to resources for which the Failover property of the resource type is set to FALSE and the Scalable property of the resource is set to TRUE. This combination of property settings indicates a scalable service that uses a SharedAddress resource to do network load balancing. In the Sun Cluster 3.2 release, you can configure a scalable service of this type in a resource group that runs in a non-global zone. But you cannot configure a scalable service to run in multiple non-global zones on the same node.
The description of the Failover resource-type property contains an incorrect statement concerning support of scalable services on non-global zones in the Sun Cluster 3.2 release. This applies to resources for which the Failover property of the resource type is set to FALSE and the Scalable property of the resource is set to TRUE.
Incorrect: You cannot use a scalable service of this type in zones.
Correct: You can configure a scalable service of this type in a resource group that runs in a non-global zone. But you cannot configure a scalable service to run in multiple non-global zones on the same node.
The following information is an addition to the Description section of the serialport(4) man page:
To support Secure Shell connections to node consoles, specify in the /etc/serialports file the name of the console-access device and the Secure Shell port number for each node. If you use the default Secure Shell configuration on the console-access device, specify port number 22.
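For example, if the cluster nodes phys-schost-1 and phys-schost-2 are reached through a console-access device named cad1 (hypothetical names) that uses the default Secure Shell port, the /etc/serialports entries would read:

phys-schost-1 cad1 22
phys-schost-2 cad1 22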
The SUNW.Event(5) man page is missing the statement that, on the Solaris 10 OS, the Cluster Reconfiguration Notification Protocol (CRNP) runs only in the global zone.