This section discusses known errors or omissions for documentation, online help, or man pages and steps to correct these problems.
This section discusses errors and omissions from Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters Guide.
The section “Requirements for Using the Cluster File System” erroneously states that you can store data files on the cluster file system. You must not store data files on the cluster file system. Therefore, ignore all references to data files in this section.
When Oracle software is installed on the cluster file system, all the files in the directory that the ORACLE_HOME environment variable specifies are accessible by all cluster nodes.
An installation might require that some Oracle files or directories maintain node-specific information. You can satisfy this requirement by using a symbolic link whose target is a file or a directory on a file system that is local to a node. Such a file system is not part of the cluster file system.
To use a symbolic link for this purpose, you must allocate an area on a local file system. To enable Oracle applications to create symbolic links to files in this area, the applications must be able to access files in this area. Because the symbolic links reside on the cluster file system, all references to the links from all nodes are the same. Therefore, all nodes must have the same namespace for the area on the local file system.
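As a minimal sketch of this arrangement, the following commands demonstrate the replace-with-symlink technique. The mktemp stand-ins for the cluster file system and the node-local file system are assumptions so that the sketch is self-contained; in a real installation you would use the actual global and local paths.

```shell
# Throwaway stand-ins for the real file systems (assumptions for
# illustration; a real cluster would use paths such as /global/oracle
# for GLOBAL and /local/oracle for LOCAL).
GLOBAL=$(mktemp -d)    # stands in for the cluster file system
LOCAL=$(mktemp -d)     # stands in for the node-local file system

mkdir -p "$GLOBAL/network/log"                     # directory as installed globally
mkdir -p "$LOCAL/network"                          # local parent directory on this node
cp -pr "$GLOBAL/network/log" "$LOCAL/network/."    # node-local copy
rm -r "$GLOBAL/network/log"                        # remove the global directory
ln -s "$LOCAL/network/log" "$GLOBAL/network/log"   # link resides on the cluster file system

# Every node now resolves the same global path to its own local storage,
# provided the local path is identical on all nodes.
ls -ld "$GLOBAL/network/log"
```

Because the link itself lives on the cluster file system, all nodes see the same path; each node's kernel resolves the link target against its own local file system, which is why the local namespace must match on every node.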
Perform this procedure for each directory that is to maintain node-specific information. The following directories are typically required to maintain node-specific information:
$ORACLE_HOME/network/agent
$ORACLE_HOME/network/log
$ORACLE_HOME/network/trace
$ORACLE_HOME/srvm/log
$ORACLE_HOME/apache
For information about other directories that might be required to maintain node-specific information, see your Oracle documentation.
On each cluster node, create the local directory that is to maintain node-specific information.
# mkdir -p local-dir
-p
Specifies that all nonexistent parent directories are created first
local-dir
Specifies the full path name of the directory that you are creating
On each cluster node, make a local copy of the global directory that is to maintain node-specific information.
# cp -pr global-dir local-dir-parent
-p
Specifies that the owner, group, permission modes, modification time, access time, and access control lists are preserved.
-r
Specifies that the directory and all its files, including any subdirectories and their files, are copied.
global-dir
Specifies the full path of the global directory that you are copying. This directory resides on the cluster file system under the directory that the ORACLE_HOME environment variable specifies.
local-dir-parent
Specifies the directory on the local node that is to contain the local copy. This directory is the parent directory of the directory that you created in Step 1.
Replace the global directory that you copied in Step 2 with a symbolic link to the local copy of the global directory.
From any cluster node, remove the global directory that you copied in Step 2.
# rm -r global-dir
-r
Specifies that the directory and all its files, including any subdirectories and their files, are removed.
global-dir
Specifies the full path of the global directory that you are removing. This directory is the global directory that you copied in Step 2.
From any cluster node, create a symbolic link from the local copy of the directory to the global directory that you removed in Step a.
# ln -s local-dir global-dir
This example shows the sequence of operations that is required to create node-specific directories on a two-node cluster. This cluster is configured as follows:
The ORACLE_HOME environment variable specifies the /global/oracle directory.
The local file system on each node is located under the /local directory.
The following operations are performed on each node:
To create the required directories on the local file system, the following commands are run:
# mkdir -p /local/oracle/network/agent
# mkdir -p /local/oracle/network/log
# mkdir -p /local/oracle/network/trace
# mkdir -p /local/oracle/srvm/log
# mkdir -p /local/oracle/apache
To make local copies of the global directories that are to maintain node-specific information, the following commands are run:
# cp -pr $ORACLE_HOME/network/agent /local/oracle/network/.
# cp -pr $ORACLE_HOME/network/log /local/oracle/network/.
# cp -pr $ORACLE_HOME/network/trace /local/oracle/network/.
# cp -pr $ORACLE_HOME/srvm/log /local/oracle/srvm/.
# cp -pr $ORACLE_HOME/apache /local/oracle/.
The following operations are performed on only one node:
To remove the global directories, the following commands are run:
# rm -r $ORACLE_HOME/network/agent
# rm -r $ORACLE_HOME/network/log
# rm -r $ORACLE_HOME/network/trace
# rm -r $ORACLE_HOME/srvm/log
# rm -r $ORACLE_HOME/apache
To create symbolic links from the local directories to their corresponding global directories, the following commands are run:
# ln -s /local/oracle/network/agent $ORACLE_HOME/network/agent
# ln -s /local/oracle/network/log $ORACLE_HOME/network/log
# ln -s /local/oracle/network/trace $ORACLE_HOME/network/trace
# ln -s /local/oracle/srvm/log $ORACLE_HOME/srvm/log
# ln -s /local/oracle/apache $ORACLE_HOME/apache
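The per-node and single-node operations above can be sketched as two loops over the five directories. This is a hypothetical consolidation, demonstrated in throwaway mktemp directories that stand in for the example's /global/oracle and /local/oracle paths so that the sketch is self-contained; substitute your actual paths in a real cluster.

```shell
# Stand-ins for the example paths (assumptions for illustration).
ORACLE_HOME=$(mktemp -d)    # would be /global/oracle in the example
LOCAL_ROOT=$(mktemp -d)     # would be /local/oracle in the example
NODE_DIRS="network/agent network/log network/trace srvm/log apache"

# Simulate the directories that the Oracle installation would create.
for d in $NODE_DIRS; do mkdir -p "$ORACLE_HOME/$d"; done

# On each node: create the local parent directories and copy the
# global directories (cp -pr preserves ownership and permissions).
for d in $NODE_DIRS; do
    mkdir -p "$LOCAL_ROOT/$(dirname "$d")"
    cp -pr "$ORACLE_HOME/$d" "$LOCAL_ROOT/$(dirname "$d")/."
done

# On one node only: replace each global directory with a symbolic
# link to the node-local copy.
for d in $NODE_DIRS; do
    rm -r "$ORACLE_HOME/$d"
    ln -s "$LOCAL_ROOT/$d" "$ORACLE_HOME/$d"
done
```

Because the links reside on the cluster file system, running the rm and ln loop on one node replaces the directories for all nodes at once, which is why that half is performed only once.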
Perform this procedure for each file that is to maintain node-specific information. The following files are typically required to maintain node-specific information:
$ORACLE_HOME/network/admin/snmp_ro.ora
$ORACLE_HOME/network/admin/snmp_rw.ora
For information about other files that might be required to maintain node-specific information, see your Oracle documentation.
On each cluster node, create the local directory that will contain the file that is to maintain node-specific information.
# mkdir -p local-dir
-p
Specifies that all nonexistent parent directories are created first
local-dir
Specifies the full path name of the directory that you are creating
On each cluster node, make a local copy of the global file that is to maintain node-specific information.
# cp -p global-file local-dir
-p
Specifies that the owner, group, permission modes, modification time, access time, and access control lists are preserved.
global-file
Specifies the file name and full path of the global file that you are copying. This file was installed on the cluster file system under the directory that the ORACLE_HOME environment variable specifies.
local-dir
Specifies the directory that is to contain the local copy of the file. This directory is the directory that you created in Step 1.
Replace the global file that you copied in Step 2 with a symbolic link to the local copy of the file.
From any cluster node, remove the global file that you copied in Step 2.
# rm global-file
global-file
Specifies the file name and full path of the global file that you are removing. This file is the global file that you copied in Step 2.
From any cluster node, create a symbolic link from the local copy of the file to the directory from which you removed the global file in Step a.
# ln -s local-file global-dir
This example shows the sequence of operations that is required to create node-specific files on a two-node cluster. This cluster is configured as follows:
The ORACLE_HOME environment variable specifies the /global/oracle directory.
The local file system on each node is located under the /local directory.
The following operations are performed on each node:
To create the local directory that will contain the files that are to maintain node-specific information, the following command is run:
# mkdir -p /local/oracle/network/admin
To make a local copy of the global files that are to maintain node-specific information, the following commands are run:
# cp -p $ORACLE_HOME/network/admin/snmp_ro.ora \
/local/oracle/network/admin/.
# cp -p $ORACLE_HOME/network/admin/snmp_rw.ora \
/local/oracle/network/admin/.
The following operations are performed on only one node:
To remove the global files, the following commands are run:
# rm $ORACLE_HOME/network/admin/snmp_ro.ora
# rm $ORACLE_HOME/network/admin/snmp_rw.ora
To create symbolic links from the local copies of the files to their corresponding global files, the following commands are run:
# ln -s /local/oracle/network/admin/snmp_ro.ora \
$ORACLE_HOME/network/admin/snmp_ro.ora
# ln -s /local/oracle/network/admin/snmp_rw.ora \
$ORACLE_HOME/network/admin/snmp_rw.ora
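The same technique applies to individual files, and the steps above can likewise be sketched as a loop. As before, the mktemp stand-ins for /global/oracle and /local/oracle are assumptions so that the sketch is self-contained.

```shell
# Stand-ins for the example paths (assumptions for illustration).
ORACLE_HOME=$(mktemp -d)    # would be /global/oracle in the example
LOCAL_ROOT=$(mktemp -d)     # would be /local/oracle in the example
ADMIN=network/admin
NODE_FILES="snmp_ro.ora snmp_rw.ora"

# Simulate the files that the Oracle installation would create.
mkdir -p "$ORACLE_HOME/$ADMIN"
for f in $NODE_FILES; do touch "$ORACLE_HOME/$ADMIN/$f"; done

# On each node: create the local directory and copy the global files
# (cp -p preserves ownership and permissions).
mkdir -p "$LOCAL_ROOT/$ADMIN"
for f in $NODE_FILES; do
    cp -p "$ORACLE_HOME/$ADMIN/$f" "$LOCAL_ROOT/$ADMIN/."
done

# On one node only: replace each global file with a symbolic link
# to the node-local copy.
for f in $NODE_FILES; do
    rm "$ORACLE_HOME/$ADMIN/$f"
    ln -s "$LOCAL_ROOT/$ADMIN/$f" "$ORACLE_HOME/$ADMIN/$f"
done
```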
This section discusses errors and omissions from Sun Cluster 3.1 Data Service for Oracle E-Business Suite Guide.
Step 13 of the procedure “How to Register and Configure Sun Cluster HA for Oracle E-Business Suite as a Failover Service” is incorrect. The correct text is as follows:
13. Create a resource for the Oracle E-Business Suite Concurrent Manager Server.
# grep PROD.CON_COMNTOP /var/tmp/config.txt
PROD.CON_COMNTOP=/global/mnt10/d01/oracle/prodcomn <- CON_COMNTOP
#
# grep PROD.DBS_ORA806= /var/tmp/config.txt
PROD.DBS_ORA806=/global/mnt10/d01/oracle/prodora/8.0.6 <- ORACLE_HOME
The example that follows this step is also incorrect. The correct example is as follows:
RS=ebs-cmg-res
RG=ebs-rg
HAS_RS=ebs-has-res
LSR_RS=ebs-cmglsr-res
CON_HOST=lhost1
CON_COMNTOP=/global/mnt10/d01/oracle/prodcomn
CON_APPSUSER=ebs
APP_SID=PROD
APPS_PASSWD=apps
ORACLE_HOME=/global/mnt10/d01/oracle/prodora/8.0.6
CON_LIMIT=70
MODE=32/Y
This section discusses errors and omissions from Sun Cluster 3.1 Data Service for Sun ONE Directory Server Guide and Sun Cluster 3.1 Data Service for Sun ONE Web Server Guide.
The names for iPlanet Web Server and iPlanet Directory Server have been changed. The new names are Sun ONE Web Server and Sun ONE Directory Server. The data service names are now Sun Cluster HA for Sun ONE Web Server and Sun Cluster HA for Sun ONE Directory Server.
The application name on the Sun Cluster Agents CD-ROM might still be iPlanet Web Server and iPlanet Directory Server.
This section discusses errors and omissions from the Sun Cluster 3.1 Data Service for SAP liveCache.
The “Registering and Configuring Sun Cluster HA for SAP liveCache” section should state that the SAP xserver can be configured only as a scalable resource. Configuring the SAP xserver as a failover resource prevents the SAP liveCache resource from failing over. Ignore all references to configuring the SAP xserver resource as a failover resource in Sun Cluster 3.1 Data Service for SAP liveCache.
The “Registering and Configuring Sun Cluster HA for SAP liveCache” section should also contain an extra step. After step 10, “Enable the scalable resource group that now includes the SAP xserver resource,” you must register the liveCache resource by entering the following text.
# scrgadm -a -j livecache-resource -g livecache-resource-group \
-t SUNW.sap_livecache -x livecache_name=LC-NAME \
-y resource_dependencies=livecache-storage-resource
After you register the liveCache resource, proceed to the next step, “Set up a resource group dependency between SAP xserver and liveCache.”
This section discusses errors and omissions from the Sun Cluster 3.1 Data Service for WebLogic Server.
The “Protection of BEA WebLogic Server Component” table should state that the BEA WebLogic Server database is protected by all databases supported by BEA WebLogic Server and supported on Sun Cluster. The table should also state that the HTTP servers are protected by all HTTP servers supported by BEA WebLogic Server and supported on Sun Cluster.
This section discusses errors and omissions from the Sun Cluster 3.1 Data Service for Apache Guide.
The “Planning the Installation and Configuration” section should not include the note about using a scalable proxy server to serve a scalable web resource. Use of a scalable proxy server is not supported.
If you use the Monitor_Uri_List extension property for the Sun Cluster HA for Apache data service, the required value of the Type_version property is 4. You can perform a Resource Type upgrade to Type_version 4.
If you use the Monitor_Uri_List extension property for the Sun Cluster HA for Sun ONE Web Server data service, the required value of the Type_version property is 4. You can perform a Resource Type upgrade to Type_version 4.
There is an error in the See Also section of this man page. Instead of referencing the Sun Cluster 3.1 Data Services Installation and Configuration Guide, you should reference the Sun Cluster 3.1 Data Service for WebLogic Server Guide.