Sun Cluster 3.1 Data Services 10/03 Release Notes

Documentation Issues

This section discusses known errors or omissions for documentation, online help, or man pages and steps to correct these problems.

Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters Guide

This section discusses errors and omissions from Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters Guide.

Requirements for Using the Cluster File System

The section “Requirements for Using the Cluster File System” erroneously states that you can store data files on the cluster file system. You must not store data files on the cluster file system. Therefore, ignore all references to data files in this section.

Creating Node-Specific Files and Directories for Use With Oracle Parallel Server/Real Application Clusters Software on the Cluster File System

When Oracle software is installed on the cluster file system, all the files in the directory that the ORACLE_HOME environment variable specifies are accessible by all cluster nodes.

An installation might require that some Oracle files or directories maintain node-specific information. You can satisfy this requirement by using a symbolic link whose target is a file or a directory on a file system that is local to a node. Such a file system is not part of the cluster file system.

To use a symbolic link for this purpose, you must allocate an area on a local file system. To enable Oracle applications to create symbolic links to files in this area, the applications must be able to access files in this area. Because the symbolic links reside on the cluster file system, all references to the links from all nodes are the same. Therefore, all nodes must have the same namespace for the area on the local file system.
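The pattern that is described above can be sketched with ordinary shell commands. In the following sketch, throwaway directories under /tmp stand in for the two file systems, and all path names are hypothetical examples; on a real cluster, the link resides under the directory that the ORACLE_HOME environment variable specifies on the cluster file system, and the link's target resides on a node-local file system.

```shell
# Sketch of the node-specific symbolic-link pattern, using throwaway
# paths under /tmp. All path names here are hypothetical examples.
set -e
rm -rf /tmp/symlink-demo

# Stand-ins: "global" plays the cluster file system, "local" plays
# the file system that is local to one node.
mkdir -p /tmp/symlink-demo/global/network/log
mkdir -p /tmp/symlink-demo/local/network

# Copy the global directory to the local area, preserving attributes.
cp -pr /tmp/symlink-demo/global/network/log /tmp/symlink-demo/local/network/.

# Replace the global directory with a symbolic link to the local copy.
rm -r /tmp/symlink-demo/global/network/log
ln -s /tmp/symlink-demo/local/network/log /tmp/symlink-demo/global/network/log

# Every node resolves the same link path, but each node's link target
# is on that node's own local file system.
ls -ld /tmp/symlink-demo/global/network/log
```

Because every node resolves the identical link path, the local area must be mounted at the same path name on every node.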

How to Create a Node-Specific Directory for Use With Oracle Parallel Server/Real Application Clusters Software on the Cluster File System

Perform this procedure for each directory that is to maintain node-specific information. The following directories are typically required to maintain node-specific information:

  - $ORACLE_HOME/network/agent
  - $ORACLE_HOME/network/log
  - $ORACLE_HOME/network/trace
  - $ORACLE_HOME/srvm/log
  - $ORACLE_HOME/apache

For information about other directories that might be required to maintain node-specific information, see your Oracle documentation.

  1. On each cluster node, create the local directory that is to maintain node-specific information.


    # mkdir -p local-dir
    
    -p

    Specifies that all nonexistent parent directories are created first

    local-dir

    Specifies the full path name of the directory that you are creating

  2. On each cluster node, make a local copy of the global directory that is to maintain node-specific information.


    # cp -pr global-dir local-dir-parent
    
    -p

Specifies that the owner, group, permission modes, modification time, access time, and access control lists are preserved.

    -r

    Specifies that the directory and all its files, including any subdirectories and their files, are copied.

    global-dir

    Specifies the full path of the global directory that you are copying. This directory resides on the cluster file system under the directory that the ORACLE_HOME environment variable specifies.

    local-dir-parent

    Specifies the directory on the local node that is to contain the local copy. This directory is the parent directory of the directory that you created in Step 1.

  3. Replace the global directory that you copied in Step 2 with a symbolic link to the local copy of the global directory.

    1. From any cluster node, remove the global directory that you copied in Step 2.


      # rm -r global-dir
      
      -r

      Specifies that the directory and all its files, including any subdirectories and their files, are removed.

      global-dir

      Specifies the file name and full path of the global directory that you are removing. This directory is the global directory that you copied in Step 2.

    2. From any cluster node, create a symbolic link from the local copy of the directory to the global directory that you removed in Step a.


      # ln -s local-dir global-dir
      
      -s

      Specifies that the link is a symbolic link

      local-dir

      Specifies that the local directory that you created in Step 1 is the source of the link

      global-dir

      Specifies that the global directory that you removed in Step a is the target of the link


Example 1–1 Creating Node-Specific Directories

This example shows the sequence of operations that is required to create node-specific directories on a two-node cluster. This cluster is configured as follows: the Oracle software is installed on the cluster file system under the directory that the ORACLE_HOME environment variable specifies, and a file system that is local to each node is mounted under the /local directory.

The following operations are performed on each node:

  1. To create the required directories on the local file system, the following commands are run:


    # mkdir -p /local/oracle/network/agent

    # mkdir -p /local/oracle/network/log

    # mkdir -p /local/oracle/network/trace

    # mkdir -p /local/oracle/srvm/log

    # mkdir -p /local/oracle/apache
  2. To make local copies of the global directories that are to maintain node-specific information, the following commands are run:


    # cp -pr $ORACLE_HOME/network/agent /local/oracle/network/.

    # cp -pr $ORACLE_HOME/network/log /local/oracle/network/.

    # cp -pr $ORACLE_HOME/network/trace /local/oracle/network/.

    # cp -pr $ORACLE_HOME/srvm/log /local/oracle/srvm/.

    # cp -pr $ORACLE_HOME/apache /local/oracle/.

The following operations are performed on only one node:

  1. To remove the global directories, the following commands are run:


    # rm -r $ORACLE_HOME/network/agent

    # rm -r $ORACLE_HOME/network/log

    # rm -r $ORACLE_HOME/network/trace

    # rm -r $ORACLE_HOME/srvm/log

    # rm -r $ORACLE_HOME/apache
  2. To create symbolic links from the local directories to their corresponding global directories, the following commands are run:


    # ln -s /local/oracle/network/agent $ORACLE_HOME/network/agent 

    # ln -s /local/oracle/network/log $ORACLE_HOME/network/log

    # ln -s /local/oracle/network/trace $ORACLE_HOME/network/trace

    # ln -s /local/oracle/srvm/log $ORACLE_HOME/srvm/log

    # ln -s /local/oracle/apache $ORACLE_HOME/apache

How to Create a Node-Specific File for Use With Oracle Parallel Server/Real Application Clusters Software on the Cluster File System

Perform this procedure for each file that is to maintain node-specific information. The following files are typically required to maintain node-specific information:

  - $ORACLE_HOME/network/admin/snmp_ro.ora
  - $ORACLE_HOME/network/admin/snmp_rw.ora

For information about other files that might be required to maintain node-specific information, see your Oracle documentation.

  1. On each cluster node, create the local directory that will contain the file that is to maintain node-specific information.


    # mkdir -p local-dir
    
    -p

    Specifies that all nonexistent parent directories are created first

    local-dir

    Specifies the full path name of the directory that you are creating

  2. On each cluster node, make a local copy of the global file that is to maintain node-specific information.


    # cp -p global-file local-dir
    
    -p

Specifies that the owner, group, permission modes, modification time, access time, and access control lists are preserved.

    global-file

    Specifies the file name and full path of the global file that you are copying. This file was installed on the cluster file system under the directory that the ORACLE_HOME environment variable specifies.

    local-dir

    Specifies the directory that is to contain the local copy of the file. This directory is the directory that you created in Step 1.

  3. Replace the global file that you copied in Step 2 with a symbolic link to the local copy of the file.

    1. From any cluster node, remove the global file that you copied in Step 2.


      # rm global-file
      
      global-file

      Specifies the file name and full path of the global file that you are removing. This file is the global file that you copied in Step 2.

    2. From any cluster node, create a symbolic link from the local copy of the file to the directory from which you removed the global file in Step a.


      # ln -s local-file global-dir
      
      -s

      Specifies that the link is a symbolic link

      local-file

      Specifies that the file that you copied in Step 2 is the source of the link

      global-dir

      Specifies that the directory from which you removed the global version of the file in Step a is the target of the link


Example 1–2 Creating Node-Specific Files

This example shows the sequence of operations that is required to create node-specific files on a two-node cluster. This cluster is configured as follows: the Oracle software is installed on the cluster file system under the directory that the ORACLE_HOME environment variable specifies, and a file system that is local to each node is mounted under the /local directory.

The following operations are performed on each node:

  1. To create the local directory that will contain the files that are to maintain node-specific information, the following command is run:


    # mkdir -p /local/oracle/network/admin
  2. To make local copies of the global files that are to maintain node-specific information, the following commands are run:


    # cp -p $ORACLE_HOME/network/admin/snmp_ro.ora \
      /local/oracle/network/admin/.

    # cp -p $ORACLE_HOME/network/admin/snmp_rw.ora \
      /local/oracle/network/admin/.

The following operations are performed on only one node:

  1. To remove the global files, the following commands are run:


    # rm $ORACLE_HOME/network/admin/snmp_ro.ora

    # rm $ORACLE_HOME/network/admin/snmp_rw.ora
  2. To create symbolic links from the local copies of the files to their corresponding global files, the following commands are run:


    # ln -s /local/oracle/network/admin/snmp_ro.ora \
      $ORACLE_HOME/network/admin/snmp_ro.ora

    # ln -s /local/oracle/network/admin/snmp_rw.ora \
      $ORACLE_HOME/network/admin/snmp_rw.ora

Sun Cluster 3.1 Data Service for Oracle E-Business Suite Guide

This section discusses errors and omissions from Sun Cluster 3.1 Data Service for Oracle E-Business Suite Guide.

How to Register and Configure Sun Cluster HA for Oracle E-Business Suite as a Failover Service

Step 13 of the procedure “How to Register and Configure Sun Cluster HA for Oracle E-Business Suite as a Failover Service” is incorrect. The correct text is as follows:

The example that follows this step is also incorrect. The correct example is as follows:

RS=ebs-cmg-res
RG=ebs-rg
HAS_RS=ebs-has-res
LSR_RS=ebs-cmglsr-res
CON_HOST=lhost1
CON_COMNTOP=/global/mnt10/d01/oracle/prodcomn
CON_APPSUSER=ebs
APP_SID=PROD
APPS_PASSWD=apps
ORACLE_HOME=/global/mnt10/d01/oracle/prodora/8.0.6
CON_LIMIT=70
MODE=32/Y

Sun Cluster 3.1 Data Service 10/03 for Sun ONE Directory Server and Sun ONE Web Server

This section discusses errors and omissions from Sun Cluster 3.1 Data Service for Sun ONE Directory Server Guide and Sun Cluster 3.1 Data Service for Sun ONE Web Server Guide.

Name Change for iPlanet Web Server and for iPlanet Directory Server

The names for iPlanet Web Server and iPlanet Directory Server have been changed. The new names are Sun ONE Web Server and Sun ONE Directory Server. The data service names are now Sun Cluster HA for Sun ONE Web Server and Sun Cluster HA for Sun ONE Directory Server.

The application name on the Sun Cluster Agents CD-ROM might still be iPlanet Web Server and iPlanet Directory Server.

Sun Cluster 3.1 Data Service 10/03 for SAP liveCache

This section discusses errors and omissions from the Sun Cluster 3.1 Data Service for SAP liveCache Guide.

The “Registering and Configuring Sun Cluster HA for SAP liveCache” section should state that the SAP xserver can be configured only as a scalable resource. Configuring the SAP xserver as a failover resource prevents the SAP liveCache resource from failing over. Ignore all references to configuring the SAP xserver resource as a failover resource in Sun Cluster 3.1 Data Service for SAP liveCache.

The “Registering and Configuring Sun Cluster HA for SAP liveCache” section should also contain an additional step. After Step 10, “Enable the scalable resource group that now includes the SAP xserver resource,” you must register the liveCache resource by entering the following command.


# scrgadm -a -j livecache-resource -g livecache-resource-group \
-t SUNW.sap_livecache -x livecache_name=LC-NAME \
-y resource_dependencies=livecache-storage-resource

After you register the liveCache resource, proceed to the next step, “Set up a resource group dependency between SAP xserver and liveCache.”

Sun Cluster 3.1 Data Service 10/03 for WebLogic Server

This section discusses errors and omissions from the Sun Cluster 3.1 Data Service for WebLogic Server Guide.

The “Protection of BEA WebLogic Server Component” table should state that the BEA WebLogic Server database is protected by all databases that are supported by BEA WebLogic Server and supported on Sun Cluster. The table should also state that the HTTP servers are protected by all HTTP servers that are supported by BEA WebLogic Server and supported on Sun Cluster.

Sun Cluster 3.1 Data Service 10/03 for Apache

This section discusses errors and omissions from the Sun Cluster 3.1 Data Service for Apache Guide.

The “Planning the Installation and Configuration” section erroneously contains a note about using a scalable proxy server to serve a scalable web resource. The use of a scalable proxy server is not supported. Ignore this note.

If you use the Monitor_Uri_List extension property for the Sun Cluster HA for Apache data service, the required value of the Type_version property is 4. You can perform a Resource Type upgrade to Type_version 4.
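As a sketch of such an upgrade, the Type_version property of an existing resource can be changed with the scrgadm command. The resource name apache-rs in the following example is hypothetical, and the example assumes that version 4 of the resource type is already registered on the cluster:

```shell
# scrgadm -c -j apache-rs -y Type_version=4
```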

Sun Cluster 3.1 Data Service 10/03 for Sun ONE Web Server

If you use the Monitor_Uri_List extension property for the Sun Cluster HA for Sun ONE Web Server data service, the required value of the Type_version property is 4. You can perform a Resource Type upgrade to Type_version 4.

Man Pages

SUNW.wls(5)

There is an error in the See Also section of this man page. Instead of referencing the Sun Cluster 3.1 Data Services Installation and Configuration Guide, you should reference the Sun Cluster 3.1 Data Service for WebLogic Server Guide.