Sun Cluster 3.0 U1 Release Notes Supplement

Chapter 1 Sun Cluster 3.0 U1 Release Notes Supplement

This document supplements the standard user documentation, including the Sun Cluster 3.0 U1 Release Notes shipped with the Sun™ Cluster 3.0 product. These "online release notes" provide the most current information on the Sun Cluster 3.0 product. This document includes the following information.

Revision Record

The following table lists the information contained in this document and provides the revision date for this information.

Table 1-1 Sun Cluster 3.0 U1 Release Notes Supplement

Revision Date    Information Added

November 2001

Added SunPlex Agent Builder license terms. See "SunPlex Agent Builder License Terms".

Added new man-page path for VxVM 3.1.1 and later versions. See "Volume Manager Restrictions and Requirements".

Changed the number of Bug ID 4480277 to the correct number, 4406523. See "Bug ID 4406523".

Patch 110651-04, which added support for HA Oracle 9i, has been declared bad. See "Bug ID 4515780".

Added a documentation bug discovered in the "Required SAP Patches for Sun Cluster HA for SAP" section of the Release Notes. See "Release Notes".

Added Appendix to document procedures for clustering Sun StorEdge 9910 and Sun StorEdge 9960 arrays. See Appendix E, Installing and Maintaining a Sun StorEdge 9910 or StorEdge 9960 Array.

October 2001 

Added special requirement for Data Services. See "Data Service Special Requirements".

September 2001

Revised /etc/serialports setup instruction to support Sun Fire™ servers. See "Cluster Control Panel Support for Sun Fire Servers".

Support for HA Oracle 9i. See "Support for Oracle 9i".

Guideline for using VxVM 3.2 Enclosure-Based Naming in a Sun Cluster environment. See "Volume Manager Restrictions and Requirements".

Support for Sun StorEdge™ Network Data Replicator (Sun SNDR) 3.0 and Sun StorEdge Instant Image 3.0. See "Sun Cluster 3.0 Update 1 With Sun StorEdge 3.0 Services Software Restrictions and Requirements".

Updated Sun StorEdge T3 partner-group cluster procedures to fix problems and to support the new T3+ product. See Appendix B, Installing and Maintaining a Sun StorEdge T3 and T3+ Disk Tray Partner-Group Configuration.

Updated Sun StorEdge T3 single-controller cluster procedures to fix problems and to support the new T3+ product. See Appendix C, Installing and Maintaining a Sun StorEdge T3 or T3+ Disk Tray Single-Controller Configuration.

Updated Netra™ D130 cluster procedures to include support for the StorEdge S1 product. See Appendix D, Installing and Maintaining the Netra D130 and StorEdge S1 Enclosures.

July 2001 

Support for iPlanet™ Web Server 6.0. See "Support for iPlanet Web Server 6.0".

"Bug ID 4349995", restriction for using Solstice DiskSuite metatool.

"Bug ID 4388265", errors while replacing a SCSI cable in a Sun StorEdge A3500 disk array.

"Bug ID 4410535", workaround for Sun Management Center if you cannot add a previously deleted resource group.

"Bug ID 4406523", a panicked volume manager slave node that attempts to rejoin the cluster might cause the volume manager master node to crash. (This bug was formerly reported under Bug ID 4480277.)

Description of a known documentation error in the Sun Cluster 3.0 U1 Hardware Guide. See "Known Documentation Problems".

Appendix about installing and using the Sun Cluster module with the Sun Management Center 3.0 graphical user interface. See Appendix A, Sun Management Center 3.0.

Appendix to document Sun StorEdge T3 partner-group configurations for clustering. See Appendix B, Installing and Maintaining a Sun StorEdge T3 and T3+ Disk Tray Partner-Group Configuration.

Appendix to document Netra D130 storage unit procedures for clustering. See Appendix D, Installing and Maintaining the Netra D130 and StorEdge S1 Enclosures.

New Features

In addition to features documented in Sun Cluster 3.0 U1 Release Notes, this release now includes support for the following features.

Support for Sun SNDR 3.0 and Sun StorEdge Instant Image 3.0

Sun Cluster 3.0 Update 1 now supports Sun StorEdge Network Data Replicator (Sun SNDR) 3.0 and Sun StorEdge Instant Image 3.0 as cluster-aware products. These software products are part of the Sun StorEdge Version 3.0 software package. Sun StorEdge Fast Write Cache, which is also a part of the Sun StorEdge Version 3.0 software package, is not supported in any Sun Cluster environment.

The Sun SNDR software is a data replication application that provides access to data as part of business continuance and disaster recovery plans. The Sun StorEdge Instant Image software is a point-in-time copy application that enables you to create copies of application or test data. These software products are now cluster aware and will fail over and switch back in a Sun Cluster environment. For more information, see the Sun SNDR 3.0 and Sun StorEdge Instant Image 3.0 documentation, available online at http://docs.sun.com.

There are some restrictions that apply to Sun Cluster 3.0 Update 1 only when using Sun SNDR or Sun StorEdge Instant Image. See "Sun Cluster 3.0 Update 1 With Sun StorEdge 3.0 Services Software Restrictions and Requirements" for more information.

Sun SNDR 3.0 and Sun StorEdge Instant Image 3.0 Support Patches

The following patches are required to support Sun SNDR 3.0 and Sun StorEdge Instant Image 3.0 with Sun Cluster 3.0 Update 1. Patches are available from SunSolve at http://sunsolve.sun.com/.

111945-xx - Storage Cache Manager
111946-xx - Storage Volume Driver
111947-xx - Sun StorEdge Instant Image
111948-xx - Sun SNDR
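
As a hedged illustration (the -xx revision suffix and the download directory are placeholders), each patch downloaded from SunSolve is typically added on every cluster node with the patchadd(1M) command:


# Apply on each cluster node, for each required patch
# patchadd /var/tmp/111945-xx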

Support for iPlanet Web Server 6.0

Sun Cluster 3.0 Update 1 now supports iPlanet Web Server 6.0.

Two procedures have changed for iPlanet Web Server 6.0.

Installing Certificates on Secure Instances of iPlanet Web Server 6.0

The procedure for installing certificates on secure instances of iPlanet Web Server has changed for version 6.0. If you plan to run secure instances of iPlanet Web Server 6.0, complete the following steps when you install security certificates. This installation procedure requires that you create a certificate on one node, and then create symbolic links to that certificate on the other cluster nodes.

  1. Run the administrative server on node1.

  2. From your Web browser, connect to the administrative server as http://node1.domain:port.

    For example, http://phys-schost-1.eng.sun.com:8888. Use whatever port number you specified as the administrative server port during installation. The default port number is 8888.

  3. Install the certificate on node1.

    This installation creates three certificate files. One file, secmod.db, is common to all nodes, and the other two are specific to node1. These files are located in the alias subdirectory, under the directory in which the iPlanet Web Server files are installed.

  4. If you installed iPlanet Web Server on a global file system, complete the following tasks. If you installed iPlanet Web Server on a local file system, go to Step 5.

    1. Note the location and file names for the three files created when installing the certificate in Step 3.

      For example, if you installed iPlanet Web Server in /global/iws/servers, and you used the IP address "IPx" when installing the certificate, then the paths to the files on node1 would be

      /global/iws/servers/alias/secmod.db

      /global/iws/servers/alias/https-IPx-node1-cert7.db

      /global/iws/servers/alias/https-IPx-node1-key3.db

    2. Create symbolic links for all other cluster nodes to the node-specific files for node1.

      In the following example, substitute the appropriate file paths for your system.


      # ln -s /global/iws/servers/alias/https-IPx-node1-cert7.db \
              /global/iws/servers/alias/https-IPx-node2-cert7.db
      # ln -s /global/iws/servers/alias/https-IPx-node1-key3.db \
              /global/iws/servers/alias/https-IPx-node2-key3.db
      

  5. If you installed iPlanet Web Server on a local file system, complete the following tasks.

    1. Note the location and file names for the three files created on node1 when installing the certificate in Step 3.

      For example, if you installed iPlanet Web Server in /local/iws/servers, and you used the IP address "IPx" when installing the certificate, then the paths to the files on node1 would be

      /local/iws/servers/alias/secmod.db

      /local/iws/servers/alias/https-IPx-node1-cert7.db

      /local/iws/servers/alias/https-IPx-node1-key3.db

    2. Move the three certificate files to a location on the global file system.

      In the following example, substitute the appropriate file paths for your system.


      # mv /local/iws/servers/alias/secmod.db \
           /global/secure/secmod.db
      # mv /local/iws/servers/alias/https-IPx-node1-cert7.db \
           /global/secure/https-IPx-node1-cert7.db
      # mv /local/iws/servers/alias/https-IPx-node1-key3.db \
           /global/secure/https-IPx-node1-key3.db
      

    3. Create symbolic links between the local and global paths of the three certificate files.

      Create the symbolic links on each node in the cluster.

      In the following example, substitute the appropriate file paths for your system.


      # Symbolic links for node1
      # ln -s /global/secure/secmod.db \
              /local/iws/servers/alias/secmod.db
      # ln -s /global/secure/https-IPx-node1-cert7.db \
              /local/iws/servers/alias/https-IPx-node1-cert7.db
      # ln -s /global/secure/https-IPx-node1-key3.db \
              /local/iws/servers/alias/https-IPx-node1-key3.db

      # Symbolic links for node2
      # ln -s /global/secure/secmod.db \
              /local/iws/servers/alias/secmod.db
      # ln -s /global/secure/https-IPx-node1-cert7.db \
              /local/iws/servers/alias/https-IPx-node2-cert7.db
      # ln -s /global/secure/https-IPx-node1-key3.db \
              /local/iws/servers/alias/https-IPx-node2-key3.db
      

Specifying the Location of the Access Logs

The procedure for specifying the location of the access logs while configuring an iPlanet Web Server has changed for iPlanet Web Server 6.0. To specify the location of the access logs while configuring an iPlanet Web Server, complete the following steps.

This change replaces Step 6 through Step 8 in the procedure "How to Configure an iPlanet Web Server" in Chapter 3, "Installing and Configuring Sun Cluster HA for iPlanet Web Server," in the Sun Cluster 3.0 U1 Data Services Installation and Configuration Guide.

  1. Edit the ErrorLog, PidLog, and access log entries in the magnus.conf file to reflect the directory created in Step 5 of the "How to Configure an iPlanet Web Server" procedure in Chapter 3 of the Sun Cluster 3.0 U1 Data Services Installation and Configuration Guide, and synchronize the changes from the administrator's interface.

    The magnus.conf file specifies the locations for the error, access, and PID files. Edit this file to change the error, access, and PID file locations to the directory that you created in Step 5 of the "How to Configure an iPlanet Web Server" procedure in Chapter 3 of the Sun Cluster 3.0 U1 Data Services Installation and Configuration Guide. The magnus.conf file is located in the config directory of the iPlanet server instance. If the instance directory is located on the local file system, you must modify the magnus.conf file on each of the nodes.

    Change the following entries:


    ErrorLog /global/data/netscape/https-schost-1/logs/error
    PidLog /global/data/netscape/https-schost-1/logs/pid
    ...
    Init fn=flex-init access="$accesslog" ...
    

    to


    ErrorLog /var/pathname/http-instance/logs/error
    PidLog /var/pathname/http-instance/logs/pid
    ...
    Init fn=flex-init access="/var/pathname/http-instance/logs/access" ...
    

    As soon as the administrator's interface detects your changes, the interface displays a warning message, as follows.


    Warning: Manual edits not loaded
    Some configuration files have been edited by hand. Use the "Apply"
    button on the upper right side of the screen to load the latest
    configuration files.
  2. Click Apply as prompted.

    The administrator's interface displays a new web page.

  3. Click Load Configuration Files.

Cluster Control Panel Support for Sun Fire Servers

If you intend to use Cluster Control Panel software tools, such as cconsole, to connect to a Sun Fire system, use the following instruction to create the /etc/serialports file on your administrative console. This instruction replaces Step 8 of the procedure "How to Install Cluster Control Panel Software on the Administrative Console" in the Sun Cluster 3.0 U1 Installation Guide.

8. Create an /etc/serialports file.

Add an entry for each node in the cluster to the file. Specify the physical node name, the terminal concentrator (TC), System Service Processor (SSP), or Sun Fire system controller hostname, and the port number.


# vi /etc/serialports
node1 TC-hostname port
node2 TC-hostname port

node1, node2 - Physical names of the cluster nodes

TC-hostname - Hostname of the TC, SSP, or Sun Fire system controller

port - Serial port number
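
For example (a hedged sketch; the hostnames and port numbers are hypothetical and depend on how your Sun Fire system controller is configured), the file for a two-node cluster might contain entries such as the following.


# vi /etc/serialports
phys-schost-1 sf6800-sc 5001
phys-schost-2 sf6800-sc 5002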

Support for Oracle 9i

Patch T110651-04 "Sun Cluster 3.0: HA-Oracle Patch" has been declared bad and is being withdrawn from SunSolve. If you have installed this patch, you must back it out and replace it with T110651-02. See "Bug ID 4515780" for more information.

Check the SunSolve EarlyNotifier page for the Sun Cluster product, at http://sunsolve.sun.com, or contact your Sun representative to learn when a new patch becomes available.

The procedure for setting up Oracle database permissions has changed to accommodate both Oracle 8i and Oracle 9i. The changed procedure follows.

How to Set Up Oracle Database Permissions

When completing Step 2 or Step 3 of this procedure, select and configure either the Oracle authentication method or the Solaris authentication method for fault monitoring access.

  1. Determine the Oracle release you are using.

    If you are using Oracle 8i, go to Step 2.

    If you are using Oracle 9i, skip to Step 3.

  2. Enable access for the user and password to be used for fault monitoring for Oracle 8i.


    Note -

    If you are using Oracle 9i, go to Step 3.


    To complete this step, perform one of the following tasks, then skip to Step 4.

    • Oracle authentication method for Oracle 8i - Enter the following script into the screen that the svrmgrl command displays to enable access.


      # svrmgrl

      connect internal;
      grant connect, resource to user identified by passwd;
      alter user user default tablespace system quota 1m on system;
      grant select on v_$sysstat to user;
      grant create session to user;
      grant create table to user;
      disconnect;
      exit;
    • Solaris authentication method for Oracle 8i - Grant permission for the database to use Solaris authentication.


      Note -

      The user for whom you enable Solaris authentication is the user who owns the files under the $ORACLE_HOME directory. The following code sample shows that the user oracle owns these files.



      # svrmgrl

      connect internal;
      create user ops$oracle identified externally
        default tablespace system quota 1m on system;
      grant connect, resource to ops$oracle;
      grant select on v_$sysstat to ops$oracle;
      grant create session to ops$oracle;
      grant create table to ops$oracle;
      disconnect;
      exit;
  3. Enable access for the user and password to be used for fault monitoring for Oracle 9i.


    Note -

    If you are using Oracle 8i, go to Step 2.


    • To use the Oracle authentication method for Oracle 9i - Enter the following script into the screen that the sqlplus command displays to enable access.


      # sqlplus "/as sysdba"

      grant connect, resource to user identified by passwd;
      alter user user default tablespace system quota 1m on system;
      grant select on v_$sysstat to user;
      grant create session to user;
      grant create table to user;

      exit;
    • To use the Solaris authentication method for Oracle 9i - Grant permission for the database to use Solaris authentication.


      Note -

      The user for whom you enable Solaris authentication is the user who owns the files under the $ORACLE_HOME directory. The following code sample shows that the user oracle owns these files.



      # sqlplus "/as sysdba"

      create user ops$oracle identified externally
        default tablespace system quota 1m on system;
      grant connect, resource to ops$oracle;
      grant select on v_$sysstat to ops$oracle;
      grant create session to ops$oracle;
      grant create table to ops$oracle;

      exit;
  4. Configure NET8 for the Sun Cluster software.

    The listener.ora and tnsnames.ora files must be accessible from all the nodes in the cluster. Place these files either under the cluster file system or in the local file system of each node that can potentially run the Oracle resources.


    Note -

    If you place the listener.ora and tnsnames.ora files in a location other than the /var/opt/oracle directory or the $ORACLE_HOME/network/admin directory, then you must specify TNS_ADMIN or an equivalent Oracle variable (see the Oracle documentation for details) in a user-environment file. You must also run the scrgadm(1M) command to set the resource extension parameter User_env, which will source the user-environment file. A sketch of this setup follows this procedure.


    Sun Cluster HA for Oracle imposes no restrictions on the listener name; it can be any valid Oracle listener name.

    The following code sample identifies the lines in listener.ora that are updated.


    LISTENER =
        (ADDRESS_LIST =
            (ADDRESS =
                (PROTOCOL = TCP)
                (HOST = logical-hostname)   <- use logical hostname
                (PORT = 1527)
            )
        )
    .
    .
    SID_LIST_LISTENER =
        .
        .
        (SID_NAME = SID)                    <- Database name, default is ORCL

    The following code sample identifies the lines in tnsnames.ora that are updated on client machines.


    service_name =
        .
        .
            (ADDRESS =
                (PROTOCOL = TCP)
                (HOST = logicalhostname)    <- logical hostname
                (PORT = 1527)               <- must match port in LISTENER.ORA
            )
        )
        (CONNECT_DATA =
            (SID = <SID>))                  <- database name, default is ORCL

    The following example shows how to update the listener.ora and tnsnames.ora files given the following Oracle instances.

    Instance    Logical Host    Listener
    ora8        hadbms3         LISTENER-ora8
    ora7        hadbms4         LISTENER-ora7

    The corresponding listener.ora entries are the following entries.


    LISTENER-ora7 =
        (ADDRESS_LIST =
            (ADDRESS =
                (PROTOCOL = TCP)
                (HOST = hadbms4)
                (PORT = 1530)
            )
        )
    SID_LIST_LISTENER-ora7 =
        (SID_LIST =
            (SID_DESC =
                (SID_NAME = ora7)
            )
        )
    LISTENER-ora8 =
        (ADDRESS_LIST =
            (ADDRESS = (PROTOCOL = TCP) (HOST = hadbms3) (PORT = 1806))
        )
    SID_LIST_LISTENER-ora8 =
        (SID_LIST =
            (SID_DESC =
                (SID_NAME = ora8)
            )
        )

    The corresponding tnsnames.ora entries are the following entries.


    ora8 =
        (DESCRIPTION =
            (ADDRESS_LIST =
                (ADDRESS = (PROTOCOL = TCP) (HOST = hadbms3) (PORT = 1806))
            )
            (CONNECT_DATA = (SID = ora8))
        )
    ora7 =
        (DESCRIPTION =
            (ADDRESS_LIST =
                (ADDRESS = (PROTOCOL = TCP) (HOST = hadbms4) (PORT = 1530))
            )
            (CONNECT_DATA = (SID = ora7))
        )
  5. Verify that the Sun Cluster software is installed and running on all nodes.


    # scstat
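
The following sketch expands on the Note in Step 4. The file path /global/oracle/userenv and the resource name oracle-server-rs are hypothetical; substitute the names used in your configuration.


# The user-environment file sets TNS_ADMIN to the directory that
# holds listener.ora and tnsnames.ora.
# cat /global/oracle/userenv
TNS_ADMIN=/global/oracle/network/admin; export TNS_ADMIN

# Point the User_env extension property of the Oracle server resource
# at that file so that the data service sources it.
# scrgadm -c -j oracle-server-rs -x User_env=/global/oracle/userenv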
    

SunPlex Agent Builder License Terms

SunPlex Agent Builder includes the following license terms.

Redistributables: The files in the directory /usr/cluster/lib/scdsbuilder/src are redistributable and subject to the terms and conditions of the Binary Code License Agreement and Supplemental Terms.

For more information on license terms, see the Binary Code License Agreement and Supplemental Terms that accompanies the Sun Cluster 3.0 media kit.

Restrictions and Requirements

The following restrictions and requirements have been added or updated since the Sun Cluster 3.0 U1 release.

Hardware Restrictions and Requirements

Volume Manager Restrictions and Requirements

Sun Cluster 3.0 Update 1 With Sun StorEdge 3.0 Services Software Restrictions and Requirements


Note -

Sun SNDR 3.0 and Sun StorEdge Instant Image 3.0 are not supported with the Sun Cluster 3.0 general release. They are supported only with Sun Cluster 3.0 Update 1 and compatible versions.


The Sun StorEdge 3.0 services software includes Sun StorEdge Network Data Replicator (Sun SNDR) 3.0, Sun StorEdge Instant Image 3.0, and Sun StorEdge Fast Write Cache 3.0. Configuration restrictions and requirements that are unique to Sun Cluster 3.0 Update 1 apply when you run the Sun StorEdge 3.0 services software.

More information about these configuration restrictions and requirements can be found in the Sun Cluster 3.0 U1 and Sun StorEdge Software 3.0 Integration Guide. Be sure to read the integration guide before you install and administer Sun SNDR 3.0 or Sun StorEdge Instant Image 3.0 with Sun Cluster software.

Data Service Special Requirements

Identify requirements for all data services before you begin Solaris and Sun Cluster installation. Failure to do so might result in installation errors that require that you completely reinstall the Solaris and Sun Cluster software.

For example, the Oracle Parallel Fail Safe/Real Application Clusters Guard option of Oracle Parallel Server/Real Application Clusters has special requirements for the hostnames/node names that you use in the cluster. You must accommodate these requirements before you install Sun Cluster software because you cannot change hostnames after you install Sun Cluster software. For more information on the special requirements for the hostnames/node names, see the Oracle Parallel Fail Safe/Real Application Clusters Guard documentation.

Known Problems

In addition to known problems documented in Sun Cluster 3.0 U1 Release Notes, the following known problems affect the operation of the Sun Cluster 3.0 U1 release.

Bug ID 4349995

Problem Summary: The DiskSuite Tool (metatool) graphical user interface is incompatible with Sun Cluster 3.0.

Workaround: Use command line interfaces to configure and manage shared disksets.
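
For example (a hedged sketch; the diskset name, node names, and DID device name are placeholders), a shared diskset can be created and populated from the command line with metaset(1M):


# Create the diskset and register the nodes that can master it
# metaset -s oradg -a -h phys-schost-1 phys-schost-2

# Add a shared DID device to the diskset
# metaset -s oradg -a /dev/did/rdsk/d4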

Bug ID 4388265

Problem Summary: You might encounter I/O errors while replacing a SCSI cable from the Sun StorEdge A3500 controller board to the disk tray. These errors are temporary and should disappear when the cable is securely in place.

Workaround: After replacing a SCSI cable in a Sun StorEdge A3500 disk array, use your volume management recovery procedure to recover from I/O errors.

Bug ID 4406523


Note -

This bug was previously reported incorrectly as Bug ID 4480277.


Problem Summary: This bug reported two problems for VERITAS Volume Manager; the first has been corrected by a code fix. In the second problem, during volume recovery, a volume manager (VM) slave node that attempts to rejoin the cluster might cause the VM master node to crash. In this context, "VM slave node" means any node that left the cluster as the result of a panic.

Workaround: To prevent the VM slave node from causing the VM master node to crash while the master node is synchronizing the shared volumes, perform the following two steps.

  1. Update the /etc/system file on each cluster node to include the following line, then reboot each node to make the change take effect.


    set halt_on_panic=1
    

  2. On all cluster nodes, change the eeprom auto-boot? flag to false.


    eeprom auto-boot?=false
    

    Some shells require slightly different quoting for the ? character. See the eeprom(1M) man page for the exact syntax to use with your shell.
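
    For example (a sketch; quoting requirements vary by shell), under the C shell the ? character must be quoted so that it is not treated as a filename-matching character.


    # eeprom "auto-boot?=false"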

Bug ID 4410535

Problem Summary: When using the Sun Cluster module for Sun Management Center 3.0, you cannot add a previously deleted resource group.

Workaround:

  1. Click on Resource Group->Status->Failover Resource Groups.

  2. Right click on the resource group name to be deleted and select Delete Selected Resource Group.

  3. Click on the Refresh icon and make sure the row corresponding to the deleted resource group is gone.

  4. Right click on Resource Groups in the left pane and select Create New Resource Group.

  5. Enter the same resource group name that was deleted before, and click Next. A dialog box appears, stating that the resource group name is already in use.

Bug ID 4515780

Problem Summary: NLS files for Oracle 9.0.1 are not backward compatible with Oracle 8.1.6 and 8.1.7. Patch 110651-04 has been declared bad.

Workaround: Back out Patch 110651-04 and replace it with 110651-02.
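
A hedged sketch of this back-out (the directory that holds the downloaded replacement patch is a placeholder):


# Remove the bad patch revision, then apply the earlier revision
# patchrm 110651-04
# patchadd /var/tmp/110651-02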

Known Documentation Problems

This section discusses documentation errors you might encounter and steps to correct these problems. This information is in addition to known documentation problems documented in the Sun Cluster 3.0 U1 Release Notes.

Hardware Guide

Step 3 of the procedure, "How to Upgrade Controller Module Firmware in a Running Cluster," in the Sun Cluster 3.0 U1 Hardware Guide (page 164) describes erroneous syntax for the scshutdown command. The scshutdown command should not use the -i option. The correct syntax for the command should be as shown below.


# scshutdown -y -g0

Release Notes

The "Required SAP Patches for Sun Cluster HA for SAP" section of the Release Notes includes uncommon instance numbers (D03) for production. These uncommon instance numbers might cause confusion. The documentation should use common instance numbers (SC3) for production.