Oracle® Fusion Middleware Enterprise Deployment Guide for Oracle WebCenter
11g Release 1 (11.1.1)

Part Number E12037-05

11 Managing the Topology

This chapter describes some operations that you can perform after you have set up the topology. These operations include monitoring, scaling, and backing up your topology.

This chapter contains the following sections:

  • Section 11.1, "Monitoring the Topology"

  • Section 11.2, "Configuring UMS Drivers"

  • Section 11.3, "Managing Space in the SOA Infrastructure Database"

  • Section 11.4, "Scaling the Topology"

  • Section 11.5, "Performing Backups and Recoveries"

  • Section 11.6, "Troubleshooting"

  • Section 11.7, "Best Practices"

11.1 Monitoring the Topology

For information about monitoring the topology, see Chapter 15 of the Oracle Fusion Middleware Administrator's Guide for Oracle WebCenter.

11.2 Configuring UMS Drivers

UMS driver configuration is not automatically propagated in a SOA cluster. This means that users need to:

  1. Apply the UMS driver configuration on each server in the Enterprise Deployment topology that uses the driver.

  2. Pre-create the UMS driver configuration on the failover node when server migration is used, because servers are moved to a different node's domain directory. The UMS driver configuration file location is:

    ORACLE_BASE/admin/<domain_name>/mserver/<domain_name>/servers/<server_name>/tmp/_WL_user/<ums_driver_name>/*/configuration/driverconfig.xml
    

    (where '*' represents a directory whose name is randomly generated by WLS during deployment, for example, "3682yq").

To create the file in preparation for possible failovers, users can force a server migration and copy the file from the source node.

You must restart the driver for these changes to take effect (that is, for the driver to consume the modified configuration). To restart the driver (a WLST sketch of the restart appears at the end of this section):

  1. Log on to the Oracle WebLogic Administration console.

  2. Expand the environment node on the navigation tree.

  3. Click on Deployments.

  4. Select the driver.

  5. Click Stop->When work completes and confirm the operation.

  6. Wait for the driver to transition to the "Prepared" state (refresh the administration console page, if required).

  7. Select the driver again, and click Start->Servicing all requests and confirm the operation.

In Oracle Enterprise Manager Fusion Middleware Control, verify that the properties for the driver have been preserved.
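The stop and start of the driver can also be scripted. The following is a minimal WLST sketch, not part of the original procedure; the Administration Server URL, the credentials, and the deployment name usermessagingdriver-email are assumptions to substitute with your own values:

    connect('weblogic', '<password>', 't3://ADMINVHN:7001')

    # Stop the driver deployment gracefully (the console's Stop -> When work completes)
    stopApplication('usermessagingdriver-email')

    # Start it again, servicing all requests
    startApplication('usermessagingdriver-email')

    disconnect()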

11.3 Managing Space in the SOA Infrastructure Database

Although not all composites may use the database frequently, the service engines generate a considerable amount of data in the CUBE_INSTANCE and MEDIATOR_INSTANCE schemas. Lack of space in the database may prevent SOA composites from functioning. Watch for generic errors, such as "oracle.fabric.common.FabricInvocationException", in the Oracle Enterprise Manager Fusion Middleware Control console (dashboard for instances). Also search the SOA server's logs for errors such as:

Error Code: 1691
...
ORA-01691: unable to extend lob segment
SOAINFRA.SYS_LOB0000108469C00017$$ by 128 in tablespace SOAINFRA

These messages are typically indicators of space issues in the database that likely require adding more data files or more space to the existing files. The SOA Database Administrator should determine the extension policy and parameters to be used when adding space. Additionally, old composite instances can be purged to reduce the size of the SOA Infrastructure database. Oracle does not recommend using Oracle Enterprise Manager Fusion Middleware Control for this type of operation because in most cases the operations cause a transaction timeout. Instead, use the specific packages provided with the Repository Creation Utility to purge instances. For example:

DECLARE
  FILTER INSTANCE_FILTER := INSTANCE_FILTER();
  MAX_INSTANCES NUMBER;
  DELETED_INSTANCES NUMBER;
  PURGE_PARTITIONED_DATA BOOLEAN := TRUE;
BEGIN
  FILTER.COMPOSITE_PARTITION_NAME := 'default';
  FILTER.COMPOSITE_NAME := 'FlatStructure';
  FILTER.COMPOSITE_REVISION := '10.0';
  FILTER.STATE := FABRIC.STATE_UNKNOWN;
  FILTER.MIN_CREATED_DATE := to_timestamp('2010-09-07','YYYY-MM-DD');
  FILTER.MAX_CREATED_DATE := to_timestamp('2010-09-08','YYYY-MM-DD');
  MAX_INSTANCES := 1000;

  DELETED_INSTANCES := FABRIC.DELETE_COMPOSITE_INSTANCES(
    FILTER => FILTER,
    MAX_INSTANCES => MAX_INSTANCES,
    PURGE_PARTITIONED_DATA => PURGE_PARTITIONED_DATA
  );
END;
/

This example deletes the first 1000 instances of the FlatStructure composite (revision 10.0) created between 2010-09-07 and 2010-09-08 that are in UNKNOWN state. Refer to Chapter 8, "Managing SOA Composite Applications," in the Oracle Fusion Middleware Administrator's Guide for Oracle SOA Suite for more details on the operations included in the SQL packages provided. Always use the scripts provided for a correct purge; deleting rows in just the composite_dn table may leave dangling references in other tables used by the Oracle Fusion Middleware SOA Infrastructure.
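When the diagnosis is genuinely lack of space rather than purgeable data, the tablespace can be extended by adding a data file. The following SQL is a sketch only; the file path, size, and autoextend values are illustrative, and the SOA Database Administrator determines the actual extension policy, as noted above:

    -- Illustrative values; adapt file location, size, and limits to your policy
    ALTER TABLESPACE SOAINFRA
      ADD DATAFILE '/u01/oradata/soadb/soainfra02.dbf'
      SIZE 1G AUTOEXTEND ON NEXT 256M MAXSIZE 8G;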

11.4 Scaling the Topology

You can scale up or scale out the enterprise topology. When you scale up the topology, you add new managed servers to nodes that are already running one or more managed servers. When you scale out the topology, you add new managed servers to new nodes.

This section includes the following topics:

  • Section 11.4.1, "Scaling Up the Topology (Adding Managed Servers to Existing Nodes)"

  • Section 11.4.2, "Scaling Out the Topology (Adding Managed Servers to New Nodes)"

11.4.1 Scaling Up the Topology (Adding Managed Servers to Existing Nodes)

This section describes how to scale up the topology by adding managed servers to existing nodes.

11.4.1.1 Scaling up SOA and WSM

When you scale up the topology, you already have a node that runs a managed server configured with SOA components or a managed server with WSM-PM. The node contains a WebLogic Server home and an Oracle Fusion Middleware SOA home in shared storage. Use these existing installations (such as the WebLogic Server home, Oracle Fusion Middleware home, and domain directories) when you create the new WLS_SOA or WLS_WSM managed servers. You do not need to install WLS or SOA binaries at a new location or run pack and unpack.

  1. Using the Oracle WebLogic Server Administration Console, clone WLS_SOA1 or WLS_WSM1 into a new managed server. The source managed server to clone should be one that already exists on the node where you want to run the new managed server.

    To clone a managed server, complete these steps:

    1. From the Domain Structure window of the Oracle WebLogic Server Administration Console, expand the Environment node and then Servers. The Summary of Servers page appears.

    2. Click Lock and Edit and select the managed server that you want to clone (for example, WLS_SOA1).

    3. Click Clone.

    Name the new managed server WLS_SOAn, where n is a number that identifies the new managed server. In this case, assume that you are adding a new server to Node 1, where WLS_SOA1 was running.

    The remainder of the steps assume that you are adding a new server to SOAHOST1, which is already running WLS_SOA1.

  2. For the listen address, assign the host name or IP to use for this new managed server. If you are planning to use server migration as recommended for this server, enter the VIP (also called a floating IP) to enable it to move to another node. The VIP should be different from the one used by the managed server that is already running.

  3. For WLS_WSM servers, run the Java Object Cache configuration utility again to include the new server in the JOC distributed cache, as described in Section 4.17, "Configuring the Java Object Cache for Oracle WSM." You can use the same discover port for multiple WSM-PM servers in the same node. Repeat the steps in Section 4.17 for each WSM-PM server so that the server list is updated.

  4. Create JMS servers for SOA and UMS on the new managed server. (A WLST sketch of this step appears at the end of this section.)

    1. Use the Oracle WebLogic Server Administration Console to create a new persistent store for the new SOAJMSServer (which will be created in a later step) and name it, for example, SOAJMSFileStore_N. Specify the path for the store as recommended in Section 2.3, "Shared Storage and Recommended Directory Structure" as the directory for the JMS persistent stores:

      ORACLE_BASE/admin/domain_name/cluster_name/jms/SOAJMSFileStore_N
      

      Note:

      This directory must exist before the managed server is started or the start operation will fail.
    2. Create a new JMS server for SOA: for example, SOAJMSServer_N. Use the SOAJMSFileStore_N for this JMS server. Target the SOAJMSServer_N server to the recently created managed server (WLS_SOAn).

    3. Create a new persistence store for the new UMS JMS server (which will be created in a later step) and name it, for example, UMSJMSFileStore_N. Specify the path for the store as recommended in Section 2.3, "Shared Storage and Recommended Directory Structure" as the directory for the JMS persistent stores:

      ORACLE_BASE/admin/domain_name/cluster_name/jms/UMSJMSFileStore_N
      

      Note:

      This directory must exist before the managed server is started or the start operation will fail.

      Note:

      It is also possible to assign SOAJMSFileStore_N as the store for the new UMS JMS servers. For the purpose of clarity and isolation, individual persistent stores are used in the following steps.
    4. Create a new JMS Server for UMS: for example, UMSJMSServer_N. Use the UMSJMSFileStore_N for this JMS server. Target the UMSJMSServer_N server to the recently created managed server (WLS_SOAn).

    5. For BPM Systems only: Create a new persistence store for the new BPMJMSServer, for example, BPMJMSFileStore_N. Specify the path for the store. This should be a directory on shared storage as recommended in Section 2.3, "Shared Storage and Recommended Directory Structure."

      ORACLE_BASE/admin/domain_name/cluster_name/jms/BPMJMSFileStore_N

      Note:

      This directory must exist before the managed server is started or the start operation fails.

      You can also assign SOAJMSFileStore_N as store for the new BPM JMS Servers. For the purpose of clarity and isolation, individual persistent stores are used in the following steps.

    6. For BPM systems only: Create a new JMS Server for BPM, for example, BPMJMSServer_N. Use the BPMJMSFileStore_N for this JMSServer. Target the BPMJMSServer_N Server to the recently created Managed Server (WLS_SOAn).

    7. Target the UMSJMSSystemResource to the SOA_Cluster as it may have changed during extend operations. To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click UMSJMSSystemResource and open the Targets tab. Make sure all of the servers in the SOA_Cluster appear selected (including the recently cloned WLS_SOAn).

    8. Update the SubDeployment Targets for SOA, UMS and BPM JMS Modules (if applicable) to include the recently created JMS servers.

      To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click the JMS module (for SOA: SOAJMSModule, for BPM: BPMJMSModule, and for UMS: UMSJMSSystemResource) represented as a hyperlink in the Names column of the table. The Settings page for the module appears. Open the SubDeployments tab. The subdeployment for the module appears.

      Note:

      This subdeployment module name is a random name in the form of SOAJMSServerXXXXXX, UMSJMSServerXXXXXX, or BPMJMSServerXXXXXX, resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).

      Click on it. Add the new JMS Server (for UMS add UMSJMSServer_N, for SOA add SOAJMSServer_N). Click Save and Activate.

  5. Configure Oracle Coherence for deploying composites for the new server, as described in Section 5.5, "Configuring Oracle Coherence for Deploying Composites."

    Note:

    Only the localhost field must be changed for the server. Replace localhost with the listen address of the new server added:

    -Dtangosol.coherence.localhost=SOAHOST1VHNn

  6. Configure the persistent store for the new server. This should be a location visible from other nodes as recommended in Section 2.3, "Shared Storage and Recommended Directory Structure." (A WLST sketch covering steps 6 through 9 appears at the end of this section.)

    From the Administration Console, select the Server_name, and then the Services tab. Under Default Store, in Directory, enter the path to the folder where you want the default persistent store to store its data files.

  7. Disable host name verification for the new managed server. Before starting and verifying the WLS_SOAn managed server, you must disable host name verification. You can re-enable it after you have configured server certificates for the communication between the Oracle WebLogic Administration Server and the Node Manager in SOAHOSTn.

    If the source server from which the new one has been cloned already had host name verification disabled, these steps are not required (the host name verification setting is propagated to the cloned server). To disable host name verification:

    1. In the Oracle Fusion Middleware Enterprise Manager Console, select Oracle WebLogic Server Administration Console.

    2. Expand the Environment node in the Domain Structure window.

    3. Click Servers.

      The Summary of Servers page appears.

    4. Select WLS_SOAn in the Names column of the table.

      The Settings page for server appears.

    5. Click the SSL tab.

    6. Click Advanced.

    7. Set Hostname Verification to None.

    8. Click Save.

  8. Configure server migration for the new managed server.

    Note:

    Because this is a scale-up operation, the node should already contain a Node Manager and environment configured for server migration that includes netmask, interface, wlsifconfig script superuser privileges, and so on. The floating IP for the new SOA managed server should also be already present.

    To configure server migration using the Oracle WebLogic Server Administration Console, complete these steps:

    1. In the Domain Structure window, expand the Environment node and then click Servers. The Summary of Servers page appears.

    2. Click the name of the server (represented as a hyperlink) in Name column of the table for which you want to configure migration. The settings page for the selected server appears.

    3. Click the Migration subtab.

    4. In the Migration Configuration section, select the servers that participate in migration in the Available window by clicking the right arrow. Select the same migration targets as for the servers that already exist on the node.

      For example, for new managed servers on SOAHOST1, which is already running WLS_SOA1, select SOAHOST2. For new managed servers on SOAHOST2, which is already running WLS_SOA2, select SOAHOST1.

      Note:

      The appropriate resources must be available to run the managed servers concurrently during migration.
    5. Choose the Automatic Server Migration Enabled option. This enables the Node Manager to start a failed server on the target node automatically.

    6. Click Save.

    7. Restart the Administration Server, managed servers, and Node Manager.

      To restart the Administration Server, use the procedure in Section 4.7, "Starting the Administration Server on SOAHOST1."

  9. Update the cluster address to include the new server:

    1. In the Administration Console, select Environment, and then Cluster.

    2. Click SOA_Cluster.

      The Settings screen for the SOA_Cluster appears.

    3. Click Lock and Edit.

    4. Add the new server's address and port to the Cluster address field. For example: ADMINVHN:8011,SOAHOST2VHN1:8011,SOAHOST1VHN1:8001

    5. Save and activate the changes.

  10. Test server migration for this new server. To test migration, perform the following from the node where you added the new server:

    1. Stop the WLS_SOAn managed server.

      To do this, run kill -9 <pid> on the PID of the managed server. You can identify the PID using ps -ef | grep WLS_SOAn.

    2. Monitor the Node Manager console for a message indicating that WLS_SOAn's floating IP has been disabled.

    3. Wait for the Node Manager to attempt a second restart of WLS_SOAn. Node Manager waits for a fence period of 30 seconds before trying this restart.

    4. Once Node Manager restarts the server, stop it again. The Node Manager should log a message indicating that the server will not be restarted again locally.
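The persistent store and JMS server creation in step 4 can also be scripted. The following WLST sketch assumes a new server named WLS_SOA3 and the store directory shown; both are illustrative values, and the console steps above remain the reference procedure:

    connect('weblogic', '<password>', 't3://ADMINVHN:7001')
    edit()
    startEdit()

    # Persistent store for the new SOA JMS server (the directory must already exist)
    cd('/')
    store = cmo.createFileStore('SOAJMSFileStore_3')
    store.setDirectory('ORACLE_BASE/admin/domain_name/cluster_name/jms/SOAJMSFileStore_3')
    store.addTarget(getMBean('/Servers/WLS_SOA3'))

    # JMS server backed by that store, targeted to the new managed server
    jmsServer = cmo.createJMSServer('SOAJMSServer_3')
    jmsServer.setPersistentStore(store)
    jmsServer.addTarget(getMBean('/Servers/WLS_SOA3'))

    save()
    activate()
    disconnect()

The same pattern applies to the UMS (and, for BPM systems, BPM) persistent stores and JMS servers.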
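Steps 6 through 9 can likewise be scripted. In this WLST sketch, WLS_SOA3, SOAHOST2, the store directory, and the addresses are illustrative values only:

    import jarray
    from weblogic.management.configuration import MachineMBean

    connect('weblogic', '<password>', 't3://ADMINVHN:7001')
    edit()
    startEdit()

    # Step 6: default persistent store directory on shared storage
    cd('/Servers/WLS_SOA3/DefaultFileStore/WLS_SOA3')
    cmo.setDirectory('ORACLE_BASE/admin/domain_name/cluster_name/defaultstore')

    # Step 7: disable host name verification until server certificates are configured
    cd('/Servers/WLS_SOA3/SSL/WLS_SOA3')
    cmo.setHostnameVerificationIgnored(true)

    # Step 8: enable automatic server migration, with SOAHOST2 as the candidate machine
    cd('/Servers/WLS_SOA3')
    cmo.setAutoMigrationEnabled(true)
    cmo.setCandidateMachines(jarray.array([getMBean('/Machines/SOAHOST2')], MachineMBean))

    # Step 9: extend the cluster address with the new server's address
    cd('/Clusters/SOA_Cluster')
    cmo.setClusterAddress('ADMINVHN:8011,SOAHOST2VHN1:8011,SOAHOST1VHN1:8001,SOAHOST1VHNn:8001')

    save()
    activate()
    disconnect()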

11.4.1.2 Scaling Up WebCenter

In this case, you already have a node that runs a managed server configured with Oracle WebCenter components. The node contains a Middleware home and a WebCenter directory in shared storage.

You can use the existing installations (Middleware home, and domain directories) for creating new Oracle WebCenter managed servers. You do not need to install WebCenter binaries in a new location, or run pack and unpack commands.

Note:

Running multiple managed servers on one node is supported only for the WC_Spaces servers and the WC_Portlet servers.

To scale up the topology:

  1. Using the Administration Console, clone WC_Spaces1 or WC_Portlet1 into a new managed server. The source managed server to clone should be one that already exists on the node where you want to run the new managed server.

    To clone a managed server:

    1. In the Administration Console, select Environment, and then Servers.

    2. Select the managed server that you want to clone, for example, WC_Spaces1 or WC_Portlet1.

    3. Select Clone.

    4. Name the new managed server SERVER_NAMEn, where n is a number to identify the new managed server.

  2. For the listen address, assign the host name or IP to use for this new managed server, which should be the same as an existing server.

    Ensure that the port number for this managed server is available on this node.

  3. Add the new managed server to the Java Object Cache Cluster. For details, see Section 6.13, "Setting Up the Java Object Cache."

  4. Reconfigure the Oracle HTTP Server module with the new member in the cluster. For more information, see Section 6.19, "Configuring Oracle HTTP Server for the WC_Spacesn, WC_Portletn, and WC_Collaborationn Managed Servers on WCHOST2." Add the host and port of the new server to the end of the WebLogicCluster parameter (an example excerpt appears at the end of this section).

    • For Spaces, add the member to the Location blocks for /webcenter, /webcenterhelp, /rss, /rest, /wcsdocs.

    • For Portlet, add the member to the Location blocks for /portalTools, /wsrp-tools, /richtextportlet, /pageletadmin, /wcps.
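For reference, the resulting mod_wl_ohs.conf entry can look like the following excerpt; the ports are illustrative values only, and the same change is repeated for each Location block listed above:

    <Location /webcenter>
      SetHandler weblogic-handler
      WebLogicCluster WCHOST1:8888,WCHOST2:8888,WCHOST1:8890
    </Location>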

11.4.2 Scaling Out the Topology (Adding Managed Servers to New Nodes)

When you scale out the topology, you add new managed servers, configured with SOA or WSM-PM, to new nodes.

11.4.2.1 Scaling out SOA and WSM

Before performing the steps in this section, check that you meet these requirements:

Prerequisites

  • There must be existing nodes running managed servers configured with SOA and WSM-PM within the topology

  • The new node can access the existing home directories for WebLogic Server and SOA. (Use the existing installations in shared storage for creating a new WLS_SOA or WLS_WSM managed server. You do not need to install WebLogic Server or SOA binaries in a new location but you do need to run pack and unpack to bootstrap the domain configuration in the new node.)

  • When an ORACLE_HOME or WL_HOME is shared by multiple servers on different nodes, Oracle recommends keeping the Oracle Inventory and Middleware home list in those nodes updated, for consistency in the installations and application of patches. To update the oraInventory in a node and attach an installation in shared storage to it, use ORACLE_HOME/oui/bin/attachHome.sh. To update the Middleware home list to add or remove a WL_HOME, edit the <user_home>/bea/beahomelist file. See the steps below.

To scale out the topology, complete these steps:

  1. On the new node, mount the existing MW_Home, which should include the SOA installation and the domain directory, and ensure that the new node has access to this directory, just like the rest of the nodes in the domain.

  2. To attach the ORACLE_HOME in shared storage to the local Oracle Inventory, execute the following commands:

    SOAHOSTn> cd ORACLE_COMMON_HOME/oui/bin
    SOAHOSTn> ./attachHome.sh -jreLoc ORACLE_BASE/fmw/jrockit_160_<version>
    

    To update the Middleware home list, create (or edit, if another WebLogic installation exists in the node) the $HOME/bea/beahomelist file and add MW_HOME to it.

  3. Log in to the Oracle WebLogic Administration Console.

  4. Create a new machine for the new node that will be used, and add the machine to the domain. (A WLST sketch of steps 4 and 5 appears at the end of this section.)

  5. Update the machine's Node Manager address to map to the IP of the node that is being used for scale out.

  6. Use the Oracle WebLogic Server Administration Console to clone WLS_SOA1/WLS_WSM1 into a new managed server. Name it WLS_SOAn/WLS_WSM-PMn, where n is a number.

    Note:

    These steps assume that you are adding a new server to node n, where no managed server was running previously.
  7. Assign the host name or IP to use for the new managed server for the listen address of the managed server.

    If you are planning to use server migration for this server (which Oracle recommends) this should be the VIP (also called a floating IP) for the server. This VIP should be different from the one used for the existing managed server.

  8. For WLS_WSM servers, run the Java Object Cache configuration utility again to include the new server in the JOC distributed cache as described in Section 4.17, "Configuring the Java Object Cache for Oracle WSM."

  9. Create JMS Servers for SOA, BPM, (if applicable) and UMS on the new managed server.

    Note:

    These steps are required only for WLS_SOA managed servers; they are not required for scaling out WSM-PM managed servers, or for scaling up the BAM Web Applications system.

    Create the JMS servers for SOA and UMS as follows:

    1. Use the Oracle WebLogic Server Administration Console to create a new persistent store for the new SOAJMSServer (which will be created in a later step) and name it, for example, SOAJMSFileStore_N. Specify the path for the store as recommended in Section 2.3, "Shared Storage and Recommended Directory Structure," as the directory for the JMS persistent stores:

      ORACLE_BASE/admin/domain_name/cluster_name/jms/SOAJMSFileStore_N
      

      Note:

      This directory must exist before the managed server is started or the start operation will fail.
    2. Create a new JMS server for SOA, for example, SOAJMSServer_N. Use the SOAJMSFileStore_N for this JMS server. Target the SOAJMSServer_N Server to the recently created managed server (WLS_SOAn).

    3. Create a new persistence store for the new UMSJMSServer, and name it, for example, UMSJMSFileStore_N. As the directory for the persistent store, specify the path recommended in Section 2.3, "Shared Storage and Recommended Directory Structure," as the directory for the JMS persistent stores:

      ORACLE_BASE/admin/domain_name/cluster_name/jms/UMSJMSFileStore_N
      

      Note:

      This directory must exist before the managed server is started or the start operation will fail.

      Note:

      It is also possible to assign SOAJMSFileStore_N as the store for the new UMS JMS servers. For the purpose of clarity and isolation, individual persistent stores are used in the following steps.
    4. Create a new JMS server for UMS: for example, UMSJMSServer_N. Use the UMSJMSFileStore_N for this JMS server. Target the UMSJMSServer_N Server to the recently created managed server (WLS_SOAn).

    5. For BPM Systems only: Create a new persistence store for the new BPMJMSServer, for example, BPMJMSFileStore_N. Specify the path for the store. This should be a directory on shared storage as recommended in Section 2.3, "Shared Storage and Recommended Directory Structure."

      ORACLE_BASE/admin/domain_name/cluster_name/jms/BPMJMSFileStore_N

      Note:

      This directory must exist before the managed server is started or the start operation fails.

      You can also assign SOAJMSFileStore_N as store for the new BPM JMS Servers. For the purpose of clarity and isolation, individual persistent stores are used in the following steps.

    6. For BPM systems only: Create a new JMS Server for BPM, for example, BPMJMSServer_N. Use the BPMJMSFileStore_N for this JMSServer. Target the BPMJMSServer_N Server to the recently created Managed Server (WLS_SOAn).

    7. Update the SubDeployment targets for the SOA JMS Module to include the recently created SOA JMS server. To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click SOAJMSModuleUDDs (represented as a hyperlink in the Names column of the table). The Settings page for SOAJMSModuleUDDs appears. Open the SubDeployments tab. The SOAJMSSubDM subdeployment appears.

      Note:

      This subdeployment module results from updating the JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2) with the Uniform Distributed Destination Script (soa-createUDD.py), which is required for the initial Enterprise Deployment topology setup.

      Click on it. Add the new JMS server for SOA called SOAJMSServer_N to this subdeployment. Click Save.

    8. Target the UMSJMSSystemResource to the SOA_Cluster as it may have changed during extend operations. To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click UMSJMSSystemResource and open the Targets tab. Make sure all of the servers in the SOA_Cluster appear selected (including the recently cloned WLS_SOAn).

    9. Update the SubDeployment Targets for SOA, UMS and BPM JMS Modules (if applicable) to include the recently created JMS servers.

      To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click the JMS module (for SOA: SOAJMSModule, for BPM: BPMJMSModule, and for UMS: UMSJMSSystemResource) represented as a hyperlink in the Names column of the table. The Settings page for the module appears. Open the SubDeployments tab. The subdeployment for the module appears.

      Note:

      This subdeployment module name is a random name in the form of SOAJMSServerXXXXXX, UMSJMSServerXXXXXX, or BPMJMSServerXXXXXX, resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).

      Click on it. Add the new JMS Server (for UMS add UMSJMSServer_N, for SOA add SOAJMSServer_N). Click Save and Activate.

  10. Run the pack command on SOAHOST1 to create a template pack as follows:

    SOAHOST1> cd ORACLE_COMMON_HOME/common/bin
    
    SOAHOST1> ./pack.sh -managed=true -domain=ORACLE_BASE/admin/domain_name/aserver/domain_name -template=soadomaintemplateScale.jar -template_name=soa_domain_templateScale
    

    Run the following command on SOAHOST1 to copy the template file to SOAHOSTN:

    SOAHOST1> scp soadomaintemplateScale.jar oracle@SOAHOSTN:ORACLE_COMMON_HOME/common/bin
    

    Run the unpack command on SOAHOSTN to unpack the template in the managed server domain directory as follows:

    SOAHOSTN> cd ORACLE_COMMON_HOME/common/bin
    
    SOAHOSTN> ./unpack.sh -domain=ORACLE_BASE/admin/domain_name/mserver/domain_name/ -template=soadomaintemplateScale.jar -app_dir=ORACLE_BASE/admin/domain_name/mserver/apps
    
  11. Configure Oracle Coherence for deploying composites for the new server, as described in Section 5.5, "Configuring Oracle Coherence for Deploying Composites."

    Note:

    Only the localhost field needs to be changed for the server. Replace localhost with the listen address of the new server added:

    -Dtangosol.coherence.localhost=SOAHOST1VHNn

  12. Configure the persistent store for the new server. This should be a location visible from other nodes as recommended in Section 2.3, "Shared Storage and Recommended Directory Structure."

    From the Administration Console, select the Server_name, and then the Services tab. Under Default Store, in Directory, enter the path to the folder where you want the default persistent store to store its data files.

  13. Disable host name verification for the new managed server. Before starting and verifying the WLS_SOAn managed server, you must disable host name verification. You can re-enable it after you have configured server certificates for the communication between the Oracle WebLogic Administration Server and the Node Manager in SOAHOSTn.

    If the source server from which the new one has been cloned already had host name verification disabled, these steps are not required (the host name verification setting is propagated to the cloned server). To disable host name verification:

    1. In the Oracle Fusion Middleware Enterprise Manager Console, select Oracle WebLogic Server Administration Console.

    2. Expand the Environment node in the Domain Structure window.

    3. Click Servers.

      The Summary of Servers page appears.

    4. Select WLS_SOAn in the Names column of the table.

      The Settings page for server appears.

    5. Click the SSL tab.

    6. Click Advanced.

    7. Set Hostname Verification to None.

    8. Click Save.

  14. Start Node Manager on the new node. To start Node Manager, use the installation in shared storage from the existing nodes, and start Node Manager by passing the host name of the new node as a parameter as follows:

    SOAHOSTN> WL_HOME/server/bin/startNodeManager new_node_ip
    
  15. Start and test the new managed server from the Oracle WebLogic Server Administration Console.

    1. Ensure that the newly created managed server, WLS_SOAn, is running.

    2. Access the application on the load balancer (https://soa.mycompany.com/soa-infra). The application should be functional.

      Note:

      The HTTP Servers in the topology should round-robin requests to the newly added server (a few requests, depending on the number of servers in the cluster, may be required to hit the new server). It is not required to add all servers in a cluster to the WebLogicCluster directive in Oracle HTTP Server's mod_wl_ohs.conf file. However, routing to new servers in the cluster takes place only if at least one of the servers listed in the WebLogicCluster directive is running.
  16. Configure server migration for the new managed server.

    Note:

    Because this new node uses an existing shared storage installation, the node already has a Node Manager and an environment configured for server migration that includes netmask, interface, and wlsifconfig script superuser privileges. The floating IP for the new SOA managed server is already present in the new node.

    Log into the Oracle WebLogic Server Administration Console and configure server migration following these steps:

    1. Expand the Environment node in the Domain Structure window and then choose Servers. The Summary of Servers page appears.

    2. Select the server (represented as a hyperlink) for which you want to configure migration from the Names column of the table. The Settings page for that server appears.

    3. Click the Migration tab.

    4. In the Available field of the Migration Configuration section, click the right arrow to select the machines to which to allow migration.

      Note:

      Specify the least-loaded machine as the migration target for the new server. The required capacity planning must be completed so that this node has enough available resources to sustain an additional managed server.
    5. Select Automatic Server Migration Enabled. This enables the Node Manager to start a failed server on the target node automatically.

    6. Click Save.

    7. Restart the Administration Server, managed servers, and the Node Manager.

      To restart the Administration Server, use the procedure in Section 4.7, "Starting the Administration Server on SOAHOST1."

  17. Update the cluster address to include the new server:

    1. In the Administration Console, select Environment, and then Cluster.

    2. Click SOA_Cluster.

      The Settings screen for the SOA_Cluster appears.

    3. Click Lock and Edit.

    4. Add the new server's address and port to the Cluster address field. For example: ADMINVHN:8011,SOAHOST2VHN1:8011,SOAHOSTNVHN1:8001

    5. Save and activate the changes.

  18. Test server migration for this new server. Follow these steps from the node where you added the new server:

    1. Abruptly stop the WLS_SOAn managed server by running kill -9 <pid> on the PID of the managed server. You can identify the PID using ps -ef | grep WLS_SOAn.

    2. In the Node Manager console you should see a message indicating that WLS_SOAn's floating IP has been disabled.

    3. Wait for the Node Manager to try a second restart of WLS_SOAn. Node Manager waits for a fence period of 30 seconds before trying this restart.

    4. Once Node Manager restarts the server, stop it again. Now Node Manager should log a message indicating that the server will not be restarted again locally.
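Steps 4 and 5 can also be performed with WLST. In this sketch the machine name SOAHOST3 and the Node Manager port 5556 are illustrative assumptions:

    connect('weblogic', '<password>', 't3://ADMINVHN:7001')
    edit()
    startEdit()

    # Step 4: create a machine for the new node
    cd('/')
    cmo.createUnixMachine('SOAHOST3')

    # Step 5: point the machine's Node Manager at the new node's address
    cd('/Machines/SOAHOST3/NodeManager/SOAHOST3')
    cmo.setListenAddress('SOAHOST3')
    cmo.setListenPort(5556)

    save()
    activate()
    disconnect()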

11.4.2.2 Scaling Out WebCenter

In scaling out your topology, you add new managed servers, configured with Oracle WebCenter applications, to new nodes.

Before performing the steps in this section, check that you meet these requirements:

  • In your topology, there are existing nodes running managed servers configured with WebCenter applications.

  • The new node can access the existing home directories for WebLogic Server and Oracle WebCenter. You use the existing installations in shared storage for creating a new managed server. There is no need to install WebLogic Server or WebCenter binaries in a new location, although you need to run pack and unpack to create a managed server domain.

  • WC_Spaces and WC_Utilities servers must be either both scaled out on the new node, or both not scaled out. This is because of the local affinity between WebCenter Spaces and the Analytics application.

To scale out the topology:

  1. On the new node, mount the existing Middleware home, which should include the WebCenter installation and the domain directory, and ensure that the new node has access to this directory, just as the rest of the nodes in the domain do.

  2. To attach ORACLE_HOME in shared storage to the local Oracle Inventory, execute the following commands:

    WCHOSTn> cd ORACLE_BASE/product/fmw/wc/oui/bin
    WCHOSTn> ./attachHome.sh -jreLoc ORACLE_BASE/fmw/jrockit_160_<version>
    

    To update the Middleware home list, create (or edit, if another WebLogic installation exists on the node) the <user_home>/bea/beahomelist file and add ORACLE_BASE/product/fmw to it.

  3. Log into the Oracle WebLogic Administration Console.

  4. Create a new machine for the new node that will be used, and add the machine to the domain.

  5. Update the machine's Node Manager's address to map the IP address of the node that is being used for scale out.

  6. Use the Oracle WebLogic Server Administration Console to clone either WC_Spaces1 or WC_Portlet1 or WC_Collaboration1 or WC_Utilities1 into a new managed server. Name it WLS_XXXn, where n is a number and assign it to the new machine.

  7. For the listen address, assign the host name or IP to use for the new managed server. Perform these steps to set the managed server listen address (a WLST alternative appears at the end of this section):

    1. Log into the Oracle WebLogic Server Administration Console.

    2. In the Change Center, click Lock & Edit.

    3. Expand the Environment node in the Domain Structure window.

    4. Click Servers. The Summary of Servers page appears.

    5. Select the managed server with the listen address you want to update in the Names column of the table. The Setting page for that managed server appears.

    6. Set the Listen Address to WCHOSTn where WCHOSTn is the DNS name of your new machine.

    7. Click Save.

    8. Save and activate the changes.

      The changes do not take effect until the managed server is restarted.

  8. Run the pack command on SOAHOST1 to create a template pack and unpack onto WCHOSTn.

    These steps are documented in Section 6.7, "Propagating the Domain Configuration to SOAHOST2, WCHOST1, and WCHOST2 Using the unpack Utility."

  9. Start the Node Manager on the new node. To start the Node Manager, use the installation in shared storage from the existing nodes, and start Node Manager by passing the host name of the new node as a parameter as follows:

    WCHOSTn> WL_HOME/server/bin/startNodeManager new_node_ip
    
  10. If this is a new Collaboration managed server:

    1. Ensure that you have followed the steps in Section 6.15, "Configuring Clustering for Discussions Server," to configure clustering for the new Discussions Server.

    2. Ensure also that the steps in Section 6.14, "Converting Discussions Forum from Multicast to Unicast" are performed, using the hostname of the new host for the coherence.localhost parameter.

  11. If this is a new Utilities managed server, ensure that Activity Graph is disabled by following the steps in Section 6.17, "Configuring Activity Graph." Ensure also that the steps for configuring a new Analytics Collector in Section 6.16, "Configuring the Analytics Collectors" have been followed for the Utilities and the local Spaces Server.

  12. Start and test the new managed server from the Oracle WebLogic Server Administration Console:

    1. Ensure that the newly created managed server is running.

    2. Access the application on the load balancer (for example, https://wc.mycompany.com/webcenter for a Spaces server). The application should be functional.

      Note:

      The HTTP Servers in the topology should round-robin requests to the newly added server (a few requests, depending on the number of servers in the cluster, may be required to hit the new server). It is not required to add all servers in a cluster to the WebLogicCluster directive in Oracle HTTP Server's mod_wl_ohs.conf file. However, routing to new servers in the cluster takes place only if at least one of the servers listed in the WebLogicCluster directive is running.
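As an alternative to the console steps in step 7, the listen address can be set with WLST. Here WC_Spaces2 and WCHOST3 are illustrative names:

    connect('weblogic', '<password>', 't3://ADMINVHN:7001')
    edit()
    startEdit()

    cd('/Servers/WC_Spaces2')
    cmo.setListenAddress('WCHOST3')

    save()
    activate()
    # The change takes effect when the managed server is restarted.
    disconnect()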

11.5 Performing Backups and Recoveries

Table 11-1 lists the static artifacts to back up in the 11g Oracle WebCenter enterprise deployment.

Table 11-1 Static Artifacts to Back Up in the 11g Oracle WebCenter Enterprise Deployment

Type: ORACLE HOME (DB)
Host: RAC database hosts - CUSTDBHOST1 and CUSTDBHOST2
Location: The location is user-defined
Tier: Directory Tier

Type: MW HOME (SOA + WC)
Host: SOAHOST1 and SOAHOST2 (SOA); WCHOST1 and WCHOST2 (WC)
Location: MW_HOME on all hosts
Tier: Application Tier

Type: ORACLE HOME (OHS)
Host: WEBHOST1 and WEBHOST2
Location: ORACLE_BASE/admin/<instance_name>
Tier: Web Tier

Type: ORACLE HOME (OCS)
Host: WCHOST1 and WCHOST2
Location: On shared disk: /share/oracle/ucm; on each host, local files at ORACLE_HOME/ucm
Tier: Application Tier

Type: Installation-related files
Location: OraInventory, <user_home>/bea/beahomelist, oraInst.loc, oratab

Table 11-2 lists the run-time artifacts to back up in the 11g Oracle WebCenter enterprise deployment.

Table 11-2 Run-Time Artifacts to Back Up in the 11g Oracle WebCenter Enterprise Deployment

Type: DOMAIN HOME
Host: SOAHOST1, SOAHOST2, WCHOST1, WCHOST2
Location: ORACLE_BASE/admin/<domain_name>/mserver/<domain_name>
Tier: Application Tier

Type: Application artifacts (ear and war files)
Host: SOAHOST1, SOAHOST2, WCHOST1, WCHOST2
Location: Review all the deployments through the Administration Console to collect all the application artifacts
Tier: Application Tier

Type: OHS INSTANCE HOME
Host: WEBHOST1 and WEBHOST2
Location: On WEBHOST1, ORACLE_HOME/ohs_1/instances/instance1; on WEBHOST2, ORACLE_HOME/ohs_2/instances/instance2
Tier: Web Tier

Type: OHS OCS configuration files
Host: WEBHOST1 and WEBHOST2
Location: On each host, at /share/oracle/ucm, which is a local file system
Tier: Web Tier

Type: RAC databases
Host: CUSTDBHOST1 and CUSTDBHOST2
Location: The location is user-defined
Tier: Directory Tier

Type: OCS content repository
Location: Database-based
Tier: Directory Tier
For more information on backup and recovery of Oracle Fusion Middleware components, see the Oracle Fusion Middleware Administrator's Guide.

Note:

Back up ORACLE_HOME if you make any changes to the XEngine configuration that is part of your B2B setup. These files are located under ORACLE_HOME/soa/thirdparty/edifecs/XEngine. To back up ORACLE_HOME, execute the following command:
SOAHOST1> tar -cvpf fmwhomeback.tar MW_HOME

11.6 Troubleshooting

This section covers the following topics:

  • Section 11.6.1, "Administration Server Fails to Start After a Manual Failover"

  • Section 11.6.2, "Error While Activating Changes in Administration Console"

  • Section 11.6.3, "OAM Configuration Tool Does Not Remove URLs"

  • Section 11.6.4, "Portlet Unavailable After Database Failover"

  • Section 11.6.5, "Redirecting of Users to Login Screen After Activating Changes in Administration Console"

  • Section 11.6.6, "Redirecting of Users to Administration Console's Home Page After Activating Changes to OAM"

  • Section 11.6.7, "Configured JOC Port Already in Use"

  • Section 11.6.8, "Restoring a JMS Configuration"

  • Section 11.6.9, "Spaces Server Does Not Start after Propagation of Domain"

11.6.1 Administration Server Fails to Start After a Manual Failover

Problem: The Administration Server fails to start after the Administration Server node failed and a manual failover to another node is performed. The Administration Server output log reports the following:

<Feb 19, 2009 3:43:05 AM PST> <Warning> <EmbeddedLDAP> <BEA-171520> <Could not obtain an exclusive lock for directory: ORACLE_BASE/admin/soadomain/aserver/soadomain/servers/AdminServer/data/ldap/ldapfiles. Waiting for 10 seconds and then retrying in case existing WebLogic Server is still shutting down.>

Solution: When restoring a node after a node crash and using shared storage for the domain directory, you may see this error in the Administration Server log due to unsuccessful lock cleanup. To resolve this error, remove the file ORACLE_BASE/admin/<domain_name>/aserver/<domain_name>/servers/AdminServer/data/ldap/ldapfiles/EmbeddedLDAP.lok.
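For example, from a shell on the node now hosting the Administration Server:

    rm ORACLE_BASE/admin/<domain_name>/aserver/<domain_name>/servers/AdminServer/data/ldap/ldapfiles/EmbeddedLDAP.lok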

11.6.2 Error While Activating Changes in Administration Console

Problem: Activation of changes in the Administration Console fails after changes to a server's start configuration have been made. The Administration Console reports the following when you click Activate Changes:

An error occurred during activation of changes, please see the log for details.
 [Management:141190]The commit phase of the configuration update failed with an exception:
In production mode, it's not allowed to set a clear text value to the property: PasswordEncrypted of ServerStartMBean

Solution: This may happen when start parameters are changed for a server in the Administration Console. In this case, either provide user name and password information in the server start configuration in the Administration Console for the specific server whose configuration was changed, or remove the <password-encrypted></password-encrypted> entry in the config.xml file (this requires a restart of the Administration Server).

11.6.3 OAM Configuration Tool Does Not Remove URLs

Problem: The OAM Configuration Tool has been used, and a set of URLs was added to the policies in Oracle Access Manager. One of the URLs had a typo. Executing the OAM Configuration Tool again with the correct URLs completes successfully; however, when accessing Policy Manager, the incorrect URL is still there.

Solution: The OAM Configuration Tool only adds new URLs to existing policies when executed with the same app_domain name. To remove a URL, use the Policy Manager console in OAM. Log on to the Access Administration site for OAM, click My Policy Domains, click the created policy domain (SOA_EDG), then the Resources tab, and remove the incorrect URLs.

11.6.4 Portlet Unavailable After Database Failover

Problem: While creating a page inside WebCenter Spaces, if you add a portlet to the page and a database failover occurs, an error component may be added to the page with the following message on it:

"Error"
"Portlet unavailable"

This message remains even if you refresh the page or log out and back in again.

Solution: To resolve this issue, delete the component and add it again.

11.6.5 Redirecting of Users to Login Screen After Activating Changes in Administration Console

Problem: After configuring OHS and the load balancer to access the Oracle WebLogic Administration Console, some activation changes cause a redirect to the Administration Console login screen.

Solution: This is the result of the console attempting to follow changes to port, channel, and security settings as a user makes them. For certain changes, the console may redirect to the Administration Server's listen address. Activation is completed regardless of the redirection. It is not necessary to log in again; users can simply update the URL to wc.mycompany.com/console/console.portal and directly access the home page for the Administration Console.

Note:

This problem will not occur if you have disabled tracking of the changes described in this section.

11.6.6 Redirecting of Users to Administration Console's Home Page After Activating Changes to OAM

Problem: After configuring OAM, some activation changes cause the redirection to the Administration Console's home page (instead of the context menu where the activation was performed).

Solution: This is expected when OAM SSO is configured and is the result of the redirections performed by the Administration Server. Activation is completed regardless of the redirection. If required, users may "manually" navigate again to the desired context menu.

11.6.7 Configured JOC Port Already in Use

Problem: Attempts to start a managed server that uses the Java Object Cache, such as the OWSM or WebCenter Spaces managed servers, fail. The following errors appear in the logs:

J2EE JOC-058 distributed cache initialization failure
J2EE JOC-043 base exception:
J2EE JOC-803 unexpected EOF during read.

Solution: Another process is using the same port that JOC is attempting to obtain. Either stop that process, or reconfigure JOC for this cluster to use another port in the recommended port range.
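To identify the conflicting process before reassigning the port, check what is listening on the configured JOC port; 9991 below is only an example value, substitute the port configured for your cluster:

    netstat -an | grep 9991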

11.6.8 Restoring a JMS Configuration

Problem: A mistake in the parameters passed to the soa-createUDD.py script, or some other error, causes the JMS configuration for the SOA or BAM clusters to fail.

Solution: Use the soa-createUDD.py script to restore the configuration.

A mistake may be made while running the soa-createUDD.py script after the SOA cluster is created from the Oracle Fusion Middleware Configuration Wizard (for example, an incorrect option is used, a target is modified, or a module is deleted accidentally). In these situations, you can use the soa-createUDD.py script to restore the appropriate JMS configuration using the following steps:

  1. Delete the existing SOA JMS resources (JMS Modules owned by the soa-infrastructure system).

  2. Run the soa-createUDD.py script again. The script assumes the JMS servers created for SOA are preserved, and creates the destinations and subdeployment modules required to use Uniform Distributed Destinations for SOA. In this case, execute the script with the --soacluster option. After running the script again, verify from the WebLogic Server Administration Console that the following artifacts exist (Domain Structure, Services, Messaging, JMS Modules):

    SOAJMSModuleUDDs        ---->SOAJMSSubDM targeted to SOAJMSServer_auto_1 and SOAJMSServer_auto_2
    UMSJMSSystemResource    ---->UMSJMSSubDMSOA targeted to UMSJMSServer_auto_1 and UMSJMSServer_auto_2
    

11.6.9 Spaces Server Does Not Start after Propagation of Domain

Problem: The Spaces server fails to start after propagation of the domain configuration to SOAHOST2, WCHOST1, and WCHOST2 using the unpack utility:

[Deployer:149158]No application files exist at '/u01/app/oracle/admin/wcedg_domain/apps/wcedg_domain/custom.webcenter.spaces.fwk'... 

Solution: Copy all the files from the managed server applications location to the one expected by the managed server deployer. For example:

cp /u01/app/oracle/admin/wcedg_domain/mserver/apps/* /u01/app/oracle/admin/wcedg_domain/apps/wcedg_domain/

Note:

Make sure /u01/app/oracle/admin/wcedg_domain/apps/wcedg_domain/ exists before copying the contents.

11.7 Best Practices

This section covers the following topics:

  • Section 11.7.1, "Preventing Timeouts for SQLNet Connections"

  • Section 11.7.2, "Auditing"

11.7.1 Preventing Timeouts for SQLNet Connections

Much of the Enterprise Deployment production deployment involves firewalls. Because database connections are made across firewalls, Oracle recommends that the firewall be configured so that the database connection is not timed out. For Oracle Real Application Clusters (RAC), the database connections are made on Oracle RAC VIPs and the database listener port. You must configure the firewall to not time out such connections. If such a configuration is not possible, set the SQLNET.EXPIRE_TIME=n parameter in the ORACLE_HOME/network/admin/sqlnet.ora file on the database server, where n is the time in minutes. Set this value to less than the known value of the timeout for the network device (that is, the firewall). For RAC, set this parameter in all of the Oracle home directories.
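For example, a sqlnet.ora entry of the following form; 10 minutes is an illustrative value, so choose one below your firewall's idle timeout:

    # ORACLE_HOME/network/admin/sqlnet.ora on each database server
    SQLNET.EXPIRE_TIME=10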

11.7.2 Auditing

Oracle Fusion Middleware Audit Framework is a new service in Oracle Fusion Middleware 11g, designed to provide a centralized audit framework for the middleware family of products. The framework provides audit service for platform components such as Oracle Platform Security Services (OPSS) and Oracle Web Services. It also provides a framework for JavaEE applications, starting with Oracle's own JavaEE components. JavaEE applications will be able to create application-specific audit events. For non-JavaEE Oracle components in the middleware such as C or JavaSE components, the audit framework also provides an end-to-end structure similar to that for JavaEE applications.

Figure 11-1 is a high-level architectural diagram of the Oracle Fusion Middleware Audit Framework.

Figure 11-1 Audit Event Flow


The Oracle Fusion Middleware Audit Framework consists of the following key components:

  • Audit APIs

    These are APIs provided by the audit framework for any audit-aware components integrating with the Oracle Fusion Middleware Audit Framework. During runtime, applications may call these APIs where appropriate to audit the necessary information about a particular event happening in the application code. The interface allows applications to specify event details such as username and other attributes needed to provide the context of the event being audited.

  • Audit Events and Configuration

    The Oracle Fusion Middleware Audit Framework provides a set of generic events for convenient mapping to application audit events. Some of these include common events such as authentication. The framework also allows applications to define application-specific events.

    These event definitions and configurations are implemented as part of the audit service in Oracle Platform Security Services. Configurations can be updated through Enterprise Manager (UI) and WLST (command-line tool).

  • Audit Bus-stop

    Bus-stops are local files containing audit data before they are pushed to the audit repository. In the event where no database repository is configured, these bus-stop files can be used as a file-based audit repository. The bus-stop files are simple text files that can be queried easily to look up specific audit events. When a DB-based repository is in place, the bus-stop acts as an intermediary between the component and the audit repository. The local files are periodically uploaded to the audit repository based on a configurable time interval.

  • Audit Loader

    As the name implies, audit loader loads the files from the audit bus-stop into the audit repository. In the case of platform and JavaEE application audit, the audit loader is started as part of the JavaEE container start-up. In the case of system components, the audit loader is a periodically spawned process.

  • Audit Repository

    Audit Repository contains a pre-defined Oracle Fusion Middleware Audit Framework schema, created by the Repository Creation Utility (RCU). Once configured, all the audit loaders are aware of the repository and upload data to it periodically. The audit data in the audit repository is expected to be cumulative and will grow over time. Ideally, this should not be an operational database used by any other applications; rather, it should be a standalone RDBMS used for audit purposes only. In a highly available configuration, Oracle recommends that you use an Oracle Real Application Clusters (RAC) database as the audit data store.

  • Oracle Business Intelligence Publisher

    The data in the audit repository is exposed through pre-defined reports in Oracle Business Intelligence Publisher. The reports allow users to drill down into the audit data based on various criteria, for example:

    • Username

    • Time Range

    • Application Type

    • Execution Context Identifier (ECID)

For more introductory information for the Oracle Fusion Middleware Audit Framework, see the "Introduction to Oracle Fusion Middleware Audit Framework" chapter in the Oracle Fusion Middleware Security Guide.

For information on how to configure the repository for Oracle Fusion Middleware Audit Framework, see the "Configuring and Managing Auditing" chapter in the Oracle Fusion Middleware Security Guide.

The Enterprise Deployment topology does not include Oracle Fusion Middleware Audit Framework configuration. The ability to generate audit data to the bus-stop files and the configuration of the audit loader will be available once the products are installed. The main consideration is the audit database repository where the audit data is stored. Because of the volume and the historical nature of the audit data, it is strongly recommended that customers use a separate database from the operational store or stores being used for other middleware components.