Sun Cluster 3.2 Release Notes for Solaris OS

Documentation Issues

This section discusses errors or omissions for documentation, online help, or man pages in the Sun Cluster 3.2 release.

Concepts Guide

This section discusses errors and omissions in the Sun Cluster Concepts Guide for Solaris OS.

x86: Sun Cluster Topologies for x86

In the section Sun Cluster Topologies for x86 in Sun Cluster Concepts Guide for Solaris OS, the following statement is out of date for the Sun Cluster 3.2 release: "Sun Cluster that is composed of x86 based systems supports two nodes in a cluster."

The statement should instead read as follows: "A Sun Cluster configuration that is composed of x86 based systems supports up to eight nodes in a cluster that runs Oracle RAC, or supports up to four nodes in a cluster that does not run Oracle RAC."

Software Installation Guide

This section discusses errors or omissions in the Sun Cluster Software Installation Guide for Solaris OS.

Missing Upgrade Preparation for Clusters that Run Sun Cluster Geographic Edition Software

If you upgrade a cluster that also runs Sun Cluster Geographic Edition software, you must perform additional preparation steps before you begin the Sun Cluster software upgrade. These steps include shutting down the Sun Cluster Geographic Edition infrastructure. Go instead to Chapter 4, Upgrading the Sun Cluster Geographic Edition Software, in Sun Cluster Geographic Edition Installation Guide. Those procedures indicate when to return to the Sun Cluster Software Installation Guide to perform the Sun Cluster software upgrade.

Sun Cluster Data Services Planning and Administration Guide

This section discusses errors and omissions in the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Support of Scalable Services on Non-Global Zones

In Resource Type Properties in Sun Cluster Data Services Planning and Administration Guide for Solaris OS, the description of the Failover resource property is missing a statement concerning support of scalable services on non-global zones. This support applies to resources for which the Failover property of the resource type is set to FALSE and the Scalable property of the resource is set to TRUE. This combination of property settings indicates a scalable service that uses a SharedAddress resource to do network load balancing. In the Sun Cluster 3.2 release, you can configure a scalable service of this type in a resource group that runs in a non-global zone. But you cannot configure a scalable service to run in multiple non-global zones on the same node.

Sun Cluster Data Service for MaxDB Guide

This section discusses errors and omissions in the Sun Cluster Data Service for MaxDB Guide for Solaris OS.

Changes to Sun Cluster Data Service for MaxDB Support on Non-Global Zones on SPARC and x86 Based Systems

The Sun Cluster Data Service for MaxDB supports non-global zones on SPARC and x86 based systems. The following changes should be made to the Sun Cluster Data Service for MaxDB Guide to reflect this support. The following steps can be performed on a cluster that has been configured to run in global zones. If you are installing your cluster to run in non-global zones, some of these steps might not be necessary, as indicated below.

Sun Cluster Data Service for SAP Guide

This section discusses errors and omissions in the Sun Cluster Data Service for SAP Guide for Solaris OS.

Changes to SAP Support on Non-Global Zones on SPARC and x86 Based Systems

The Sun Cluster Data Service for SAP supports non-global zones on SPARC and x86 based systems. The following changes should be made to the Sun Cluster Data Service for SAP Guide to reflect this support. The following steps can be performed on a cluster that has been configured to run in global zones. If you are installing your cluster to run in non-global zones, some of these steps might not be necessary, as indicated below.

Sun Cluster Data Service for SAP liveCache Guide

This section discusses errors and omissions in the Sun Cluster Data Service for SAP liveCache Guide for Solaris OS.

Changes to SAP liveCache Support on Non-Global Zones on SPARC and x86 Based Systems

The Sun Cluster Data Service for SAP liveCache supports non-global zones on SPARC and x86 based systems. The following changes should be made to the Sun Cluster Data Service for SAP liveCache Guide to reflect this support. The following steps can be performed on a cluster that has been configured to run in global zones. If you are installing your cluster to run in non-global zones, some of these steps might not be necessary, as indicated below.

Sun Cluster Data Service for SAP Web Application Server Guide

This section discusses errors and omissions in the Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS.

Support for SAP 7.0 for Sun Cluster HA for SAP Web Application Server (6461002)

In SAP 7.0 and NW2004SR1, when an SAP instance is started, the sapstartsrv process is started by default. The sapstartsrv process is not under the control of Sun Cluster HA for SAP Web Application Server. As a result, when an SAP instance is stopped or failed over by Sun Cluster HA for SAP Web Application Server, the sapstartsrv process is not stopped.

To avoid starting the sapstartsrv process when an SAP instance is started by Sun Cluster HA for SAP Web Application Server, you must modify the startsap script. In addition, rename the /etc/rc3.d/S90sapinit file to /etc/rc3.d/xxS90sapinit on all the Sun Cluster nodes.
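
For example, you could rename the file on each node as follows. The command is shown only for illustration; the file path is the one given above.

# mv /etc/rc3.d/S90sapinit /etc/rc3.d/xxS90sapinit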

Changes to SAP Web Application Server Support on Non-Global Zones on SPARC and x86 Based Systems

The Sun Cluster Data Service for SAP Web Application Server supports non-global zones on SPARC and x86 based systems. The following changes should be made to the Sun Cluster Data Service for SAP Web Application Server Guide to reflect this support. The following steps can be performed on a cluster that has been configured to run in global zones. If you are installing your cluster to run in non-global zones, some of these steps might not be necessary, as indicated below.

Setting Up the SAP Web Application Server on Non-Global Zones for HASP Configuration (6530281)

Use the following procedure to configure a HAStoragePlus resource for non-global zones.



How to Set Up the SAP Web Application Server on Non-Global Zones for HAStoragePlus Configuration

  1. On any node in the cluster, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Create the scalable resource group, with non-global zones in its node list, to contain the HAStoragePlus resource. An example command follows the option descriptions.


       # clresourcegroup create \
         -p Maximum_primaries=m \
         -p Desired_primaries=n \
         [-n node-zone-list] hasp-resource-group
    
    -p Maximum_primaries=m

    Specifies the maximum number of active primaries for the resource group.

    -p Desired_primaries=n

    Specifies the number of active primaries on which the resource group should attempt to start.

    -n node-zone-list

    Specifies the list of nodename:zonename pairs to use as the node list of the HAStoragePlus resource group, where the SAP instances can come online.

    hasp-resource-group

    Specifies the name of the scalable resource group to be added. This name must begin with an ASCII character.
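
    For example, on hypothetical nodes phys-schost-1 and phys-schost-2 that each host a non-global zone named zone-1, the command in this step might look like the following. All names and values are illustrative.

       # clresourcegroup create \
         -p Maximum_primaries=2 \
         -p Desired_primaries=2 \
         -n phys-schost-1:zone-1,phys-schost-2:zone-1 hasp-rg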

  3. Register the resource type for the HAStoragePlus resource.


    # clresourcetype register HAStoragePlus
  4. Create the HAStoragePlus resource hasp-resource and define the SAP filesystem mount points and global device paths.


     # clresource create -g hasp-resource-group -t SUNW.HAStoragePlus \
        -p GlobalDevicePaths=/dev/global/dsk/d5s2,dsk/d6 \
        -p affinityon=false \
        -p FilesystemMountPoints=/sapmnt/JSC,/usr/sap/trans,/usr/sap/JSC hasp-resource
    
    -g hasp-resource-group

    Specifies the resource group name.

    GlobalDevicePaths

    Contains the following values:

    • Global device group names, such as sap-dg, dsk/d5

    • Paths to global devices, such as /dev/global/dsk/d5s2, /dev/md/sap-dg/dsk/d6

    FilesystemMountPoints

    Contains the following values:

    • Mount points of local or cluster file systems, such as /local/mirrlogA,/local/mirrlogB,/sapmnt/JSC,/usr/sap/JSC

    The HAStoragePlus resource is created in the enabled state.

  5. Register the resource type for the SAP application.


    # clresourcetype register resource-type
    
    resource-type

    Specifies the name of the resource type to be added. For more information, see Supported Products.
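
    For example, for Sun Cluster HA for SAP Web Application Server the command might resemble the following. The resource type name SUNW.sapwebas is shown only as an illustration; use the resource type that is listed for your data service in Supported Products.

    # clresourcetype register SUNW.sapwebas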

  6. Create a SAP resource group.


      # clresourcegroup create [-n node-zone-list] \
        -p RG_affinities=++hastorageplus-rg resource-group-1
    
    resource-group-1

    Specifies the SAP services resource group.

  7. Add the SAP application resource to resource-group-1 and set the dependency to hastorageplus-1.


       # clresource create -g resource-group-1 -t SUNW.application \
         [-p "extension-property[{node-specifier}]"=value, ?] \
         -p Resource_dependencies=hastorageplus-1 resource
    
  8. Bring the failover resource group online.


    # clresourcegroup online resource-group-1
    

System Administration Guide

This section discusses errors and omissions in the Sun Cluster System Administration Guide for Solaris OS.

Taking a Solaris Volume Manager Metaset From Nodes Booted in Non-Cluster Mode

How to Take a Solaris Volume Manager Metaset From Nodes Booted in Non-Cluster Mode

Use this procedure to run an application outside the cluster for testing purposes.

  1. Determine whether the quorum device is used in the Solaris Volume Manager metaset and, if so, whether the quorum device uses scsi2 or scsi3 reservations.


    # clquorum show
    
    1. If the quorum device is in the Solaris Volume Manager metaset, add a new quorum device that is not part of the metaset that you will later take in non-cluster mode.


      # clquorum add did
      
    2. Remove the old quorum device.


      # clquorum remove did
      
    3. If the quorum device uses a scsi2 reservation, scrub the scsi2 reservation from the old quorum device and verify that no scsi2 reservations remain.


      # /usr/cluster/lib/sc/pgre -c pgre_scrub -d /dev/did/rdsk/dids2
      # /usr/cluster/lib/sc/pgre -c pgre_inkeys -d /dev/did/rdsk/dids2
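
    For example, assuming a hypothetical current quorum device d5 that is part of the metaset and uses scsi2 reservations, and a shared device d20 that is outside the metaset, the substeps above might be performed as follows. The DID names are illustrative only.

      # clquorum add d20
      # clquorum remove d5
      # /usr/cluster/lib/sc/pgre -c pgre_scrub -d /dev/did/rdsk/d5s2
      # /usr/cluster/lib/sc/pgre -c pgre_inkeys -d /dev/did/rdsk/d5s2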
      
  2. Evacuate the node you want to boot in non-cluster mode.


    # clresourcegroup evacuate -n targetnode
    
  3. Take offline any resource group or resource groups that contain HAStorage or HAStoragePlus resources and that contain devices or file systems that are affected by the metaset you will later take in non-cluster mode.


    # clresourcegroup offline resourcegroupname
    
  4. Disable all the resources in the resource groups you took offline.


    # clresource disable resourcename
    
  5. Unmanage the resource groups.


    # clresourcegroup unmanage resourcegroupname
    
  6. Take offline the corresponding device group or device groups.


    # cldevicegroup offline devicegroupname
    
  7. Disable the device group or device groups.


    # cldevicegroup disable devicegroupname
    
  8. Boot the passive node into non-cluster mode.


    # reboot -x
    
  9. Verify that the boot process has completed on the passive node before proceeding.

    • Solaris 9

      The login prompt will only appear after the boot process has completed, so no action is required.

    • Solaris 10


      # svcs -x
      
  10. Determine whether there are any scsi3 reservations on the disks in the metaset or metasets. Run the following command on each disk in the metasets.


    # /usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/dids2
    
  11. If there are any scsi3 reservations on the disks, scrub them.


    # /usr/cluster/lib/sc/scsi -c scrub -d /dev/did/rdsk/dids2
    
  12. Take the metaset on the evacuated node.


    # metaset -s name -C take -f
    
  13. Mount the file system or file systems that contain the devices defined in the metaset.


    # mount device mountpoint
    
  14. Start the application and perform the desired test. After finishing the test, stop the application.

  15. Reboot the node and wait until the boot process has finished.


    # reboot
    
  16. Bring online the device group or device groups.


    # cldevicegroup online -e devicegroupname
    
  17. Start the resource group or resource groups.


    # clresourcegroup online -eM resourcegroupname
    

Using Solaris IP Filtering with Sun Cluster

Sun Cluster supports Solaris IP Filtering with the following restrictions:

How to Set Up Solaris IP Filtering

  1. In the /etc/iu.ap file, modify the public NIC entries to list clhbsndr pfil as the module list.

    pfil must be the last module in the list. An example entry follows the note below.


    Note –

    If you have the same type of adapter for private and public network, your edits to the /etc/iu.ap file will push pfil to the private network streams. However, the cluster transport module will automatically remove all unwanted modules at stream creation, so pfil will be removed from the private network streams.
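
    For example, if the public network uses a hypothetical e1000g adapter, the modified entry might resemble the following. The numeric autopush fields (-1 0) are shown as they commonly appear; verify them against the existing entries in your /etc/iu.ap file.

    e1000g -1 0 clhbsndr pfil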


  2. To ensure that the IP filter works in non-cluster mode, update the /etc/ipf/pfil.ap file.

    Updates to the /etc/iu.ap file are slightly different. See the IP Filter documentation for more information.

  3. Reboot all affected nodes.

    You can boot the nodes in a rolling fashion.

  4. Add filter rules to the /etc/ipf/ipf.conf file on all affected nodes. For information about IP filter rule syntax, see the ipf(4) man page. A sample rule follows the guidelines below.

    Keep in mind the following guidelines and requirements when you add filter rules to Sun Cluster nodes.

    • Sun Cluster fails over network addresses from node to node. No special procedure or code is needed at the time of failover.

    • All filtering rules that reference IP addresses of logical hostname and shared address resources must be identical on all cluster nodes.

    • Rules on a standby node reference a non-existent IP address. These rules are still part of the IP filter's active rule set and become effective when the node receives the address after a failover.

    • All filtering rules must be the same for all NICs in the same IPMP group. In other words, if a rule is interface-specific, the same rule must also exist for all other interfaces in the same IPMP group.
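
    For example, a rule that passes HTTP traffic to the IP address of a logical hostname resource might look like the following. The address and port are hypothetical; place the identical rule on every cluster node.

    pass in quick proto tcp from any to 192.168.10.20/32 port = 80 keep state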

  5. Enable the ipfilter SMF service.


    # svcadm enable /network/ipfilter:default
    

Data Services Developer's Guide

This section discusses errors and omissions in the Sun Cluster Data Services Developer’s Guide for Solaris OS.

Support of Certain Scalable Services on Non-Global Zones

In Resource Type Properties in Sun Cluster Data Services Developer’s Guide for Solaris OS, the description of the Failover resource property is missing a statement concerning support of scalable services on non-global zones. This support applies to resources for which the Failover property of the resource type is set to FALSE and the Scalable property of the resource is set to TRUE. This combination of property settings indicates a scalable service that uses a SharedAddress resource to do network load balancing. In the Sun Cluster 3.2 release, you can configure a scalable service of this type in a resource group that runs in a non-global zone. But you cannot configure a scalable service to run in multiple non-global zones on the same node.

Method Timeout Behavior is Changed

A description of the change in the behavior of method timeouts in the Sun Cluster 3.2 release is missing. If an RGM method callback times out, the process is now killed by using the SIGABRT signal instead of the SIGTERM signal. This causes all members of the process group to generate a core file.


Note –

Avoid writing a data-service method that creates a new process group. If your data service method does need to create a new process group, also write a signal handler for the SIGTERM and SIGABRT signals. Write the signal handlers to forward the SIGTERM or SIGABRT signal to the child process group before the signal handler terminates the parent process. This increases the likelihood that all processes that are spawned by the method are properly terminated.
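
The following is a minimal sketch, in shell, of a START method that creates a new process group and forwards SIGTERM and SIGABRT to it, as recommended above. The application command, the choice of ksh, and the job-control approach to creating the new process group are assumptions; adapt them to your data service.

#!/bin/ksh
# Sketch of a START method whose application runs in its own process group.
# /opt/myapp/bin/start_myapp is a hypothetical command; adjust for your service.

set -m                          # enable job control so the background job gets its own process group

/opt/myapp/bin/start_myapp &    # start the application in the new process group
child=$!

# Forward SIGTERM or SIGABRT to the child's process group, then end this method.
trap 'kill -s TERM -- -$child 2>/dev/null; exit 1' TERM
trap 'kill -s ABRT -- -$child 2>/dev/null; exit 1' ABRT

wait $child

Forwarding the original signal preserves the RGM's intent: processes that are killed by SIGABRT still produce core files for debugging.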


CRNP Runs Only in the Global Zone

Chapter 12, Cluster Reconfiguration Notification Protocol, in Sun Cluster Data Services Developer’s Guide for Solaris OS is missing the statement that, on the Solaris 10 OS, the Cluster Reconfiguration Notification Protocol (CRNP) runs only in the global zone.

Required Solaris Software Group Statement is Unclear

In Setting Up the Development Environment for Writing a Data Service in Sun Cluster Data Services Developer’s Guide for Solaris OS, there is a Note that the Solaris software group Developer or Entire Distribution is required. This statement applies to the development machine. But because it is positioned after a statement about testing the data service on a cluster, it might be misread as a requirement for the cluster that the data service is being run on.

Quorum Server User's Guide

This section discusses errors and omissions in the Sun Cluster Quorum Server User’s Guide.

Supported Software and Hardware Platforms

The following installation requirements and guidelines are missing or unclear:

Man Pages

This section discusses errors, omissions, and additions in the Sun Cluster man pages.

ccp(1M)

The following revised Synopsis and added Options sections of the ccp(1M) man page document the addition of Secure Shell support to the Cluster Control Panel (CCP) utilities:

SYNOPSIS


$CLUSTER_HOME/bin/ccp [-s] [-l username] [-p ssh-port] {clustername | nodename}

OPTIONS

The following options are supported:

-l username

Specifies the user name for the ssh connection. This option is passed to the cconsole, crlogin, or cssh utility when the utility is launched from the CCP. The ctelnet utility ignores this option.

If the -l option is not specified, the name of the user who launched the CCP is used.

-p ssh-port

Specifies the Secure Shell port number to use. This option is passed to the cssh utility when the utility is launched from the CCP. The cconsole, crlogin, and ctelnet utilities ignore this option.

If the -p option is not specified, the default port number 22 is used for secure connections.

-s

Specifies using Secure Shell connections to node consoles instead of telnet connections. This option is passed to the cconsole utility when the utility is launched from the CCP. The crlogin, cssh, and ctelnet utilities ignore this option.

If the -s option is not specified, the cconsole utility uses telnet connections to the consoles.

To override the -s option, deselect the Use SSH checkbox in the Options menu of the cconsole graphical user interface (GUI).
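
For example, the following command starts the CCP and directs it to use Secure Shell connections as user admin on port 2222 for a cluster named schost. The user name, port number, and cluster name are illustrative only.

$CLUSTER_HOME/bin/ccp -s -l admin -p 2222 schost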

cconsole(1M), crlogin(1M), cssh(1M), and ctelnet(1M)

The following revised Synopsis and added Options sections of the combined cconsole, crlogin, cssh, and ctelnet man page document the addition of Secure Shell support to the Cluster Control Panel utilities:

SYNOPSIS


$CLUSTER_HOME/bin/cconsole [-s] [-l username] [clustername… | nodename…]
$CLUSTER_HOME/bin/crlogin [-l username] [clustername… | nodename…]
$CLUSTER_HOME/bin/cssh [-l username] [-p ssh-port] [clustername… | nodename…]
$CLUSTER_HOME/bin/ctelnet [clustername… | nodename…]

DESCRIPTION

cssh

This utility establishes Secure Shell connections directly to the cluster nodes.

OPTIONS

-l username

Specifies the ssh user name for the remote connections. This option is valid with the cconsole, crlogin, and cssh commands.

The argument value is remembered so that clusters and nodes that are specified later use the same user name when making connections.

If the -l option is not specified, the name of the user who launched the command is used.

-p ssh-port

Specifies the Secure Shell port number to use. This option is valid with the cssh command.

If the -p option is not specified, the default port number 22 is used for secure connections.

-s

Specifies using Secure Shell connections instead of telnet connections to node consoles. This option is valid with the cconsole command.

If the -s option is not specified, the utility uses telnet connections to the consoles.

To override the -s option from the cconsole graphical user interface (GUI), deselect the Use SSH checkbox in the Options menu.
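
For example, the following command opens Secure Shell connections to the nodes of a cluster named schost as user admin on port 2222. The values are illustrative only.

$CLUSTER_HOME/bin/cssh -l admin -p 2222 schost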

clnode(1CL)

clresource(1CL)

clresourcegroup(1CL)

r_properties(5)

rt_properties(5)

The description of the Failover resource-type property contains an incorrect statement concerning support of scalable services on non-global zones in the Sun Cluster 3.2 release. This applies to resources for which the Failover property of the resource type is set to FALSE and the Scalable property of the resource is set to TRUE.

serialports(4)

The following information is an addition to the Description section of the serialports(4) man page:

To support Secure Shell connections to node consoles, specify in the /etc/serialports file the name of the console-access device and the Secure Shell port number for each node. If you use the default Secure Shell configuration on the console-access device, specify port number 22.
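
For example, an /etc/serialports file for a two-node cluster that uses the default Secure Shell port might contain entries similar to the following. The node names and console-access device host names are hypothetical.

phys-schost-1 schost-cad1 22
phys-schost-2 schost-cad2 22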

SUNW.Event(5)

The SUNW.Event(5) man page is missing the statement that, on the Solaris 10 OS, the Cluster Reconfiguration Notification Protocol (CRNP) runs only in the global zone.