Sun Cluster 3.0 U1 Release Notes Supplement

New Features

In addition to the features documented in the Sun Cluster 3.0 U1 Release Notes, this release now supports the following features.

Support for Sun SNDR 3.0 and Sun StorEdge Instant Image 3.0

Sun Cluster 3.0 Update 1 now supports Sun StorEdge Network Data Replicator (Sun SNDR) 3.0 and Sun StorEdge Instant Image 3.0 as cluster-aware products. These software products are part of the Sun StorEdge Version 3.0 software package. Sun StorEdge Fast Write Cache, which is also a part of the Sun StorEdge Version 3.0 software package, is not supported in any Sun Cluster environment.

The Sun SNDR software is a data replication application that provides access to data as part of business continuance and disaster recovery plans. The Sun StorEdge Instant Image software is a point-in-time copy application that enables you to create copies of application or test data. These software products are now cluster aware and fail over and switch back in a Sun Cluster environment. For more information, see the Sun StorEdge Version 3.0 documentation, available online at http://docs.sun.com.

Some restrictions apply when you use Sun SNDR or Sun StorEdge Instant Image with Sun Cluster 3.0 Update 1. See "Sun Cluster 3.0 Update 1 With Sun StorEdge 3.0 Services Software Restrictions and Requirements" for more information.

Sun SNDR 3.0 and Sun StorEdge Instant Image 3.0 Support Patches

The following patches are required to support Sun SNDR 3.0 and Sun StorEdge Instant Image 3.0 with Sun Cluster 3.0 Update 1. Patches are available from SunSolve at http://sunsolve.sun.com/.

111945-xx    Storage Cache Manager
111946-xx    Storage Volume Driver
111947-xx    Sun StorEdge Instant Image
111948-xx    Sun SNDR
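
To check whether these patches are already installed, you can list the installed revisions on each cluster node. The following sketch assumes the standard Solaris patch tools showrev(1M) and patchadd(1M); the download directory shown is an example only.


# Check installed patch revisions on each cluster node
# showrev -p | egrep '111945|111946|111947|111948'

# Apply a missing patch from the directory where you unpacked it
# (the path shown is an example only)
# patchadd /var/tmp/111945-xx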

Support for iPlanet Web Server 6.0

Sun Cluster 3.0 Update 1 now supports iPlanet Web Server 6.0.

Two procedures have changed for iPlanet Web Server 6.0.

Installing Certificates on Secure Instances of iPlanet Web Server 6.0

The procedure for installing certificates on secure instances of iPlanet Web Server has changed for version 6.0. If you plan to run secure instances of iPlanet Web Server 6.0, complete the following steps when you install security certificates. This installation procedure requires that you create a certificate on one node, and then create symbolic links to that certificate on the other cluster nodes.

  1. Run the administrative server on node1.

  2. From your Web browser, connect to the administrative server as http://node1.domain:port.

    For example, http://phys-schost-1.eng.sun.com:8888. Use whatever port number you specified as the administrative server port during installation. The default port number is 8888.

  3. Install the certificate on node1.

    This installation creates three certificate files. One file, secmod.db, is common to all nodes, and the other two are specific to node1. These files are located in the alias subdirectory, under the directory in which the iPlanet Web Server files are installed.

  4. If you installed iPlanet Web Server on a global file system, complete the following tasks. If you installed iPlanet Web Server on a local file system, go to Step 5.

    1. Note the location and file names for the three files created when installing the certificate in Step 3.

      For example, if you installed iPlanet Web Server in /global/iws/servers, and you used the IP address "IPx" when installing the certificate, then the paths to the files on node1 would be

      /global/iws/servers/alias/secmod.db

      /global/iws/servers/alias/https-IPx-node1-cert7.db

      /global/iws/servers/alias/https-IPx-node1-key3.db

    2. Create symbolic links for all other cluster nodes to the node-specific files for node1.

      In the following example, substitute the appropriate file paths for your system.


      # ln -s /global/iws/servers/alias/https-IPx-node1-cert7.db \
              /global/iws/servers/alias/https-IPx-node2-cert7.db
      # ln -s /global/iws/servers/alias/https-IPx-node1-key3.db \
              /global/iws/servers/alias/https-IPx-node2-key3.db
      

  5. If you installed iPlanet Web Server on a local file system, complete the following tasks.

    1. Note the location and file names for the three files created on node1 when installing the certificate in Step 3.

      For example, if you installed iPlanet Web Server in /local/iws/servers, and you used the IP address "IPx" when installing the certificate, then the paths to the files on node1 would be

      /local/iws/servers/alias/secmod.db

      /local/iws/servers/alias/https-IPx-node1-cert7.db

      /local/iws/servers/alias/https-IPx-node1-key3.db

    2. Move the three certificate files to a location on the global file system.

      In the following example, substitute the appropriate file paths for your system.


      # mv /local/iws/servers/alias/secmod.db \
           /global/secure/secmod.db
      # mv /local/iws/servers/alias/https-IPx-node1-cert7.db \
           /global/secure/https-IPx-node1-cert7.db
      # mv /local/iws/servers/alias/https-IPx-node1-key3.db \
           /global/secure/https-IPx-node1-key3.db
      

    3. Create symbolic links between the local and global paths of the three certificate files.

      Create the symbolic links on each node in the cluster.

      In the following example, substitute the appropriate file paths for your system.


      # Symbolic links for node1
      # ln -s /global/secure/secmod.db \
              /local/iws/servers/alias/secmod.db
      # ln -s /global/secure/https-IPx-node1-cert7.db \
              /local/iws/servers/alias/https-IPx-node1-cert7.db
      # ln -s /global/secure/https-IPx-node1-key3.db \
              /local/iws/servers/alias/https-IPx-node1-key3.db

      # Symbolic links for node2
      # ln -s /global/secure/secmod.db \
              /local/iws/servers/alias/secmod.db
      # ln -s /global/secure/https-IPx-node1-cert7.db \
              /local/iws/servers/alias/https-IPx-node2-cert7.db
      # ln -s /global/secure/https-IPx-node1-key3.db \
              /local/iws/servers/alias/https-IPx-node2-key3.db
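
      To confirm that each node resolves the same certificate databases, you can list the files in the alias directory on every node. The following sketch reuses the example paths from this procedure; substitute the paths for your installation.


      # On each cluster node, verify that the certificate files resolve to the
      # shared copies (example paths from the steps above)
      # ls -l /local/iws/servers/alias/secmod.db \
              /local/iws/servers/alias/https-IPx-node*-cert7.db \
              /local/iws/servers/alias/https-IPx-node*-key3.db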
      

Specifying the Location of the Access Logs

The procedure for specifying the location of the access logs while configuring an iPlanet Web Server has changed for iPlanet Web Server 6.0. To specify the location of the access logs, complete the following steps.

This change replaces Step 6 through Step 8 in the procedure "How to Configure an iPlanet Web Server" in Chapter 3, "Installing and Configuring Sun Cluster HA for iPlanet Web Server," in the Sun Cluster 3.0 U1 Data Services Installation and Configuration Guide.

  1. Edit the ErrorLog, PidLog, and access log entries in the magnus.conf file to reflect the directory created in Step 5 of the "How to Configure an iPlanet Web Server" procedure in Chapter 3 of the Sun Cluster 3.0 U1 Data Services Installation and Configuration Guide, and then synchronize the changes from the administrator's interface.

    The magnus.conf file specifies the locations of the error, access, and PID files. Edit this file to change those locations to the directory that you created in Step 5 of that procedure. The magnus.conf file is located in the config directory of the iPlanet server instance. If the instance directory is located on the local file system, you must modify the magnus.conf file on each of the nodes.

    Change the following entries:


    ErrorLog /global/data/netscape/https-schost-1/logs/error
    PidLog /global/data/netscape/https-schost-1/logs/pid
    ...
    Init fn=flex-init access="$accesslog" ...
    

    to


    ErrorLog /var/pathname/http-instance/logs/error
    PidLog /var/pathname/http-instance/logs/pid
    ...
    Init fn=flex-init access="/var/pathname/http-instance/logs/access" ...
    

    As soon as the administrator's interface detects your changes, the interface displays a warning message, as follows.


    Warning: Manual edits not loaded
    Some configuration files have been edited by hand. Use the "Apply"
    button on the upper right side of the screen to load the latest
    configuration files.

  2. Click Apply as prompted.

    The administrator's interface displays a new web page.

  3. Click Load Configuration Files.

Cluster Control Panel Support for Sun Fire Servers

If you intend to use Cluster Control Panel software tools, such as cconsole, to connect to a Sun Fire system, use the following instruction to create the /etc/serialports file on your administrative console. This instruction replaces Step 8 of the procedure "How to Install Cluster Control Panel Software on the Administrative Console" in the Sun Cluster 3.0 U1 Installation Guide.

8. Create an /etc/serialports file.

Add an entry for each node in the cluster to the file. Specify the physical node name; the hostname of the terminal concentrator (TC), System Service Processor (SSP), or Sun Fire system controller; and the serial port number.


# vi /etc/serialports
node1 TC-hostname port
node2 TC-hostname port

node1, node2
    Physical names of the cluster nodes

TC-hostname
    Hostname of the TC, SSP, or Sun Fire system controller

port
    Serial port number
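
For example, if the cluster nodes are domains of a Sun Fire server whose system controller has the hostname sf-sc0, the file might contain entries like the following. The hostnames and port numbers shown are placeholders only; use the values for your site.


# Example /etc/serialports entries (placeholder hostnames and ports)
phys-schost-1 sf-sc0 5001
phys-schost-2 sf-sc0 5002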

Support for Oracle 9i

Patch T110651-04 "Sun Cluster 3.0: HA-Oracle Patch" has been declared bad and is being withdrawn from SunSolve. If you have installed this patch, you must back it out and replace it with T110651-02. See "Bug ID 4515780" for more information.
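
The following sketch shows one way to back out the defective patch and apply the earlier revision with the standard Solaris patchrm(1M) and patchadd(1M) commands. Confirm the installed patch ID with showrev -p first; the download directory shown is an example only.


# Verify which revision of the patch is installed
# showrev -p | grep 110651

# Back out the defective revision, then apply the earlier revision
# patchrm T110651-04
# patchadd /var/tmp/T110651-02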

Check the SunSolve EarlyNotifier page for the Sun Cluster product, at http://sunsolve.sun.com, or contact your Sun representative to learn when a new patch becomes available.

The procedure for setting up Oracle database permissions has changed to accommodate both Oracle 8i and Oracle 9i. The changed procedure follows.

How to Set Up Oracle Database Permissions

When completing Step 2 or Step 3 of this procedure, select and configure either the Oracle authentication method or the Solaris authentication method for fault monitoring access.

  1. Determine the Oracle release you are using.

    If you are using Oracle 8i, go to Step 2.

    If you are using Oracle 9i, skip to Step 3.

  2. Enable access for the user and password to be used for fault monitoring for Oracle 8i.


    Note -

    If you are using Oracle 9i, go to Step 3.


    To complete this step, perform one of the following tasks, then skip to Step 4.

    • Oracle authentication method for Oracle 8i - Enter the following script into the screen that the svrmgrl command displays to enable access.


      # svrmgrl

      connect internal;
      grant connect, resource to user identified by passwd;
      alter user user default tablespace system quota 1m on system;
      grant select on v_$sysstat to user;
      grant create session to user;
      grant create table to user;
      disconnect;
      exit;
    • Solaris authentication method for Oracle 8i - Grant permission for the database to use Solaris authentication.


      Note -

      The user for whom you enable Solaris authentication is the user who owns the files under the $ORACLE_HOME directory. The following code sample shows that the user oracle owns these files.



      # svrmgrl

      connect internal;
      create user ops$oracle identified externally
          default tablespace system quota 1m on system;
      grant connect, resource to ops$oracle;
      grant select on v_$sysstat to ops$oracle;
      grant create session to ops$oracle;
      grant create table to ops$oracle;
      disconnect;
      exit;
  3. Enable access for the user and password to be used for fault monitoring for Oracle 9i.


    Note -

    If you are using Oracle 8i, go to Step 2.


    • To use the Oracle authentication method for Oracle 9i - Enter the following script into the screen that the sqlplus command displays to enable access.


      # sqlplus "/as sysdba"

      grant connect, resource to user identified by passwd;
      alter user user default tablespace system quota 1m on system;
      grant select on v_$sysstat to user;
      grant create session to user;
      grant create table to user;
      exit;
    • To use the Solaris authentication method for Oracle 9i - Grant permission for the database to use Solaris authentication.


      Note -

      The user for whom you enable Solaris authentication is the user who owns the files under the $ORACLE_HOME directory. The following code sample shows that the user oracle owns these files.



      # sqlplus "/as sysdba"

      create user ops$oracle identified externally
          default tablespace system quota 1m on system;
      grant connect, resource to ops$oracle;
      grant select on v_$sysstat to ops$oracle;
      grant create session to ops$oracle;
      grant create table to ops$oracle;
      exit;
  4. Configure NET8 for the Sun Cluster software.

    The listener.ora and tnsnames.ora files must be accessible from all the nodes in the cluster. Place these files either under the cluster file system or in the local file system of each node that can potentially run the Oracle resources.


    Note -

    If you place the listener.ora and tnsnames.ora files in a location other than the /var/opt/oracle directory or the $ORACLE_HOME/network/admin directory, then you must specify TNS_ADMIN or an equivalent Oracle variable (see the Oracle documentation for details) in a user-environment file. You must also run the scrgadm(1M) command to set the resource extension parameter User_env, which will source the user-environment file.
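
    As an illustration only, the following sketch shows one way to supply TNS_ADMIN through a user-environment file and to point an Oracle server resource at that file with the scrgadm command. The file path and resource name are hypothetical; verify the exact syntax against the scrgadm(1M) man page and your data service documentation.


    # Contents of a hypothetical user-environment file, /global/oracle/hatns.env
    TNS_ADMIN=/global/oracle/network/admin; export TNS_ADMIN

    # Set the User_env extension property on the Oracle server resource
    # (oracle-server-rs is a hypothetical resource name)
    # scrgadm -c -j oracle-server-rs -x User_env=/global/oracle/hatns.env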


    Sun Cluster HA for Oracle imposes no restrictions on the listener name; it can be any valid Oracle listener name.

    The following code sample identifies the lines in listener.ora that are updated.


    LISTENER =
      (ADDRESS_LIST =
        (ADDRESS =
          (PROTOCOL = TCP)
          (HOST = logical-hostname)   <- use logical hostname
          (PORT = 1527)
        )
      )
    .
    .
    SID_LIST_LISTENER =
      .
      .
      (SID_NAME = SID)   <- Database name, default is ORCL

    The following code sample identifies the lines in tnsnames.ora that are updated on client machines.


    service_name =
      .
      .
        (ADDRESS =
          (PROTOCOL = TCP)
          (HOST = logicalhostname)   <- logical hostname
          (PORT = 1527)              <- must match port in LISTENER.ORA
        )
      )
      (CONNECT_DATA =
        (SID = <SID>))               <- database name, default is ORCL

    The following example shows how to update the listener.ora and tnsnames.ora files given the following Oracle instances.

    Instance    Logical Host    Listener
    ora8        hadbms3         LISTENER-ora8
    ora7        hadbms4         LISTENER-ora7

    The corresponding listener.ora entries are as follows.


    LISTENER-ora7 =
      (ADDRESS_LIST =
        (ADDRESS =
          (PROTOCOL = TCP)
          (HOST = hadbms4)
          (PORT = 1530)
        )
      )
    SID_LIST_LISTENER-ora7 =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = ora7)
        )
      )
    LISTENER-ora8 =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP) (HOST = hadbms3) (PORT = 1806))
      )
    SID_LIST_LISTENER-ora8 =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = ora8)
        )
      )

    The corresponding tnsnames.ora entries are as follows.


    ora8 =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP) (HOST = hadbms3) (PORT = 1806))
        )
        (CONNECT_DATA = (SID = ora8))
      )
    ora7 =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP) (HOST = hadbms4) (PORT = 1530))
        )
        (CONNECT_DATA = (SID = ora7))
      )
  5. Verify that the Sun Cluster software is installed and running on all nodes.


    # scstat