16 Managing the Topology for an Enterprise Deployment

This chapter describes some operations that you can perform after you have set up the topology. These operations include monitoring, scaling, and backing up your topology.


16.1 Overview of Managing the Topology

After configuring the WebCenter Portal enterprise deployment, use the information in this chapter to manage the topology.

For information on monitoring the topology and WebCenter Portal applications, see "Monitoring Oracle WebCenter Portal Performance" in Oracle Fusion Middleware Administrator's Guide for Oracle WebCenter Portal.

At some point you may need to expand the topology by scaling it up or out. See Section 16.4, "Scaling Up the Topology (Adding Managed Servers to Existing Nodes)" and Section 16.5, "Scaling Out the Topology (Adding Managed Servers to New Nodes)" for information about the difference between scaling up and scaling out, and for instructions for performing these tasks.

Back up the topology before and after any configuration changes. Section 16.6, "Performing Backups and Recoveries in WebCenter Portal Deployments" provides information about the directories and files that should be backed up to protect against failure as a result of configuration changes.

This chapter also documents solutions for possible known issues that may occur after you have configured the topology.

16.2 Managing Space in the SOA Infrastructure Database

Although not all composites may use the database frequently, the service engines generate a considerable amount of data in the CUBE_INSTANCE and MEDIATOR_INSTANCE schemas. Lack of space in the database may prevent SOA composites from functioning.

To manage space in the SOA infrastructure database:

  • Watch for generic errors, such as "oracle.fabric.common.FabricInvocationException" in the Oracle Enterprise Manager Fusion Middleware Control console (dashboard for instances).

  • Search in the SOA server's logs for errors, such as:

    Error Code: 1691
    ...
    ORA-01691: unable to extend lob segment
    SOAINFRA.SYS_LOB0000108469C00017$$ by 128 in tablespace SOAINFRA
    

    These messages are typically indicators of space issues in the database that may require adding more data files or more space to the existing files (see the example following this list). The SOA database administrator should determine the extension policy and parameters to be used when adding space.

  • Purge old composite instances to reduce the size of the SOA Infrastructure database. Oracle does not recommend using Oracle Enterprise Manager Fusion Middleware Control for this type of operation because, in most cases, it causes a transaction timeout. Specific packages provided with the Repository Creation Utility are available to purge instances. For example:

    DECLARE
      FILTER INSTANCE_FILTER := INSTANCE_FILTER();
      MAX_INSTANCES NUMBER;
      DELETED_INSTANCES NUMBER;
      PURGE_PARTITIONED_DATA BOOLEAN := TRUE;
    BEGIN
      -- Select the composite instances to purge
      FILTER.COMPOSITE_PARTITION_NAME := 'default';
      FILTER.COMPOSITE_NAME := 'FlatStructure';
      FILTER.COMPOSITE_REVISION := '10.0';
      FILTER.STATE := FABRIC.STATE_UNKNOWN;
      FILTER.MIN_CREATED_DATE := to_timestamp('2010-09-07','YYYY-MM-DD');
      FILTER.MAX_CREATED_DATE := to_timestamp('2010-09-08','YYYY-MM-DD');
      MAX_INSTANCES := 1000;
      -- Purge the matching instances
      DELETED_INSTANCES := FABRIC.DELETE_COMPOSITE_INSTANCES(
        FILTER => FILTER,
        MAX_INSTANCES => MAX_INSTANCES,
        PURGE_PARTITIONED_DATA => PURGE_PARTITIONED_DATA
      );
    END;
    

    This deletes the first 1000 instances of the FlatStructure composite (revision 10.0) created between 2010-09-07 and 2010-09-08 that are in UNKNOWN state. For more information on the possible operations included in the SQL packages provided, see "Managing SOA Composite Applications" in the Oracle Fusion Middleware Administrator's Guide for Oracle SOA Suite. Always use the scripts provided for a correct purge. Deleting rows in just the composite_dn table may leave dangling references in other tables used by the Oracle Fusion Middleware SOA Infrastructure.
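The following is a minimal sketch, referenced in the list above, of the kind of extension a database administrator might perform when the SOAINFRA tablespace runs out of space. The data file path, sizes, and autoextend settings are assumptions that must be adapted to your storage layout and extension policy:

    -- Hypothetical example: add a data file to the SOAINFRA tablespace.
    -- The file path, size, and autoextend settings are placeholders.
    ALTER TABLESPACE SOAINFRA
      ADD DATAFILE '/u02/oradata/soaedg/soainfra02.dbf'
      SIZE 1G AUTOEXTEND ON NEXT 100M MAXSIZE 10G;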

16.3 Configuring UMS Drivers

UMS driver configuration is not automatically propagated in a SOA cluster. To propagate UMS driver configuration in a cluster:

  • Apply the UMS driver configuration in each server in the Enterprise Deployment topology that is using the driver.

  • If you are using server migration, servers are moved to a different node's domain directory. Pre-create the UMS driver configuration in the failover node. The UMS driver configuration file is located in the following directory:

    ORACLE_BASE/admin/domain_name/mserver/domain_name/servers/server_name/tmp/_WL_user/ums_driver_name/*/configuration/driverconfig.xml
    

    Where '*' represents a directory name that is randomly generated by Oracle WebLogic Server during deployment. For example, 3682yq.

To create the UMS driver configuration file in preparation for possible failovers, you can force a server migration and copy the file from the source node.
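For example, assuming the deployment directory has already been generated on the failover node, the file might be copied as follows. The host name FAILOVERHOST and the directory name 3682yq are placeholders for your environment:

    # Hypothetical example: copy the UMS driver configuration to the failover node.
    # FAILOVERHOST, ums_driver_name, and the generated directory 3682yq are placeholders.
    scp ORACLE_BASE/admin/domain_name/mserver/domain_name/servers/server_name/tmp/_WL_user/ums_driver_name/3682yq/configuration/driverconfig.xml \
      oracle@FAILOVERHOST:ORACLE_BASE/admin/domain_name/mserver/domain_name/servers/server_name/tmp/_WL_user/ums_driver_name/3682yq/configuration/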

You must restart the driver for these changes to take effect (that is, for the driver to consume the modified configuration). To restart the driver:

  1. Log on to the Oracle WebLogic Administration Console.

  2. Expand the environment node on the navigation tree.

  3. Click on Deployments.

  4. Select the driver.

  5. Click Stop->When work completes and confirm the operation.

  6. Wait for the driver to transition to the "Prepared" state (refresh the administration console page, if required).

  7. Select the driver again, and click Start->Servicing all requests and confirm the operation.

Verify in Oracle Enterprise Manager Fusion Middleware Control that the properties for the driver have been preserved.

16.4 Scaling Up the Topology (Adding Managed Servers to Existing Nodes)

When you scale up the topology, you add new managed servers to nodes that are already running one or more managed servers. You can use the existing node installations (such as the WebLogic Server home, Oracle Fusion Middleware home, and domain directories) when you create the new managed servers. You do not need to install the WebLogic Server, SOA, or WebCenter Portal binaries at a new location or run pack and unpack.

When you scale up a server that uses server migration, plan for your capacity and resource allocation needs. Consider the following scenario:

  • Server1 exists in node1 and uses server migration in its cluster with server2 on node2.

  • Server3 is added to the cluster in node1 in a scale up operation. It also uses server migration.

In this scenario, a situation may occur where all servers (server1, server2, server3, and the Administration Server) end up running on node1 or node2. This means each node must be designed with enough resources to sustain the worst-case scenario, where all servers using server migration end up on a single node (as defined in the server migration candidate machine configuration).

Note:

A shared domain directory for a managed server with WebCenter Content Server does not work because certain files within the domain, such as intradoc.cfg, are specific to each node. To prevent issues with node-specific files, use a local (per node) domain directory for each WebCenter Content and Inbound Refinery managed server.


16.4.1 Scaling Up Oracle SOA (includes WSM)

To scale up the SOA topology (includes WSM):

  1. Using the Oracle WebLogic Server Administration Console, clone WLS_SOA1 or WLS_WSM1 into a new managed server. The source managed server to clone should be one that already exists on the node where you want to run the new managed server.

    To clone a managed server:

    1. From the Domain Structure window of the Oracle WebLogic Server Administration Console, expand the Environment node and then Servers. The Summary of Servers page appears.

    2. Click Lock & Edit and select the managed server that you want to clone (for example, WLS_SOA1).

    3. Click Clone.

    4. Name the new managed server WLS_SOAn, where n is a number that identifies the new managed server. In this case, you are adding a new server to Node 1, where WLS_SOA1 was running.

    For the remainder of the steps, you are adding a new server to SOAHOST1, which is already running WLS_SOA1.

  2. For the listen address, assign the host name or IP to use for this new managed server. If you are planning to use server migration as recommended for this server, enter the VIP (also called a floating IP) to enable it to move to another node. The VIP should be different from the one used by the managed server that is already running.

  3. For WLS_WSM servers, run the Java Object Cache configuration utility again to include the new server in the JOC distributed cache as described in Section 8.5.5, "Configuring the Java Object Cache for Oracle WSM." You can use the same discover port for multiple WLS_WSM servers in the same node. Repeat the steps in Section 8.5.5, "Configuring the Java Object Cache for Oracle WSM" for each WLS_WSM server so that the server list is updated.

  4. Create JMS servers for SOA and UMS on the new managed server.

    1. Use the Oracle WebLogic Server Administration Console to create a new persistent store for the new SOAJMSServer (which will be created in a later step) and name it, for example, SOAJMSFileStore_N. Specify the path for the store as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment" as the directory for the JMS persistent stores:

      ORACLE_BASE/admin/domain_name/cluster_name/jms/SOAJMSFileStore_N
      

      Note:

      This directory must exist before the managed server is started or the start operation will fail.

    2. Create a new JMS server for SOA: for example, SOAJMSServer_N. Use the SOAJMSFileStore_N for this JMS server. Target the SOAJMSServer_N server to the recently created managed server (WLS_SOAn).

    3. Create a new persistent store for the new UMS JMS server (which will be created in a later step) and name it, for example, UMSJMSFileStore_N. Specify the path for the store as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment" as the directory for the JMS persistent stores:

      ORACLE_BASE/admin/domain_name/cluster_name/jms/UMSJMSFileStore_N
      

      Note:

      This directory must exist before the managed server is started or the start operation will fail.

      Note:

      It is also possible to assign SOAJMSFileStore_N as the store for the new UMS JMS servers. For the purpose of clarity and isolation, individual persistent stores are used in the following steps.

    4. Create a new JMS Server for UMS: for example, UMSJMSServer_N. Use the UMSJMSFileStore_N for this JMS server. Target the UMSJMSServer_N server to the recently created managed server (WLS_SOAn).

    5. For BPM systems only: Create a new persistent store for the new BPMJMSServer, for example, BPMJMSFileStore_N. Specify the path for the store. This should be a directory on shared storage as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment":

      ORACLE_BASE/admin/domain_name/cluster_name/jms/BPMJMSFileStore_N

      Note:

      This directory must exist before the managed server is started or the start operation fails.

      You can also assign SOAJMSFileStore_N as store for the new BPM JMS Servers. For the purpose of clarity and isolation, individual persistent stores are used in the following steps.

    6. For BPM systems only: Create a new JMS Server for BPM, for example, BPMJMSServer_N. Use the BPMJMSFileStore_N for this JMSServer. Target the BPMJMSServer_N Server to the recently created Managed Server (WLS_SOAn).

    7. Target the UMSJMSSystemResource to the SOA_Cluster as it may have changed during extend operations. To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click UMSJMSSystemResource and open the Targets tab. Make sure all of the servers in the SOA_Cluster appear selected (including the recently cloned WLS_SOAn).

    8. Update the SubDeployment Targets for SOA, UMS and BPM JMS Modules (if applicable) to include the recently created JMS servers.

      To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click the JMS module (for SOA: SOAJMSModule, for BPM: BPMJMSModule, and for UMS: UMSJMSSystemResource) represented as a hyperlink in the Names column of the table. The Settings page for the module appears. Open the SubDeployments tab. The subdeployment for the module appears.

      Note:

      This subdeployment module name is a random name in the form of SOAJMSServerXXXXXX, UMSJMSServerXXXXXX, or BPMJMSServerXXXXXX, resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).

      Click on it. Add the new JMS Server (for UMS add UMSJMSServer_N, for SOA add SOAJMSServer_N). Click Save and Activate.

  5. Configure Oracle Coherence for deploying composites for the new server, as described in Section 9.4, "Configuring Oracle Coherence for Deploying Composites."

    Note:

    Only the localhost field needs to be changed for the server. Replace localhost with the listen address of the new server added (a sketch of the full set of server start arguments appears at the end of this procedure):

    -Dtangosol.coherence.localhost=SOAHOST1VHNn

  6. Configure the persistent store for the new server. This should be a location visible from other nodes as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment".

    From the Administration Console, select the Server_name, and then the Services tab. Under Default Store, in Directory, enter the path to the folder where you want the default persistent store to store its data files.

  7. Disable host name verification for the new managed server. Before starting and verifying the WLS_SOAn managed server, you must disable host name verification. You can re-enable it after you have configured server certificates for the communication between the Oracle WebLogic Administration Server and the Node Manager in SOAHOSTn.

    If the source server from which the new one was cloned already had host name verification disabled, these steps are not required because the host name verification setting is propagated to the cloned server. To disable host name verification:

    1. In the Oracle Fusion Middleware Enterprise Manager Console, select Oracle WebLogic Server Administration Console.

    2. Expand the Environment node in the Domain Structure window.

    3. Click Servers.

      The Summary of Servers page appears.

    4. Select WLS_SOAn in the Names column of the table.

      The Settings page for server appears.

    5. Click the SSL tab.

    6. Click Advanced.

    7. Set Hostname Verification to None.

    8. Click Save.

  8. Configure server migration for the new managed server. To configure server migration using the Oracle WebLogic Server Administration Console:

    Note:

    Because this is a scale-up operation, the node should already contain a Node Manager and environment configured for server migration that includes netmask, interface, wlsifconfig script superuser privileges, and so on. The floating IP for the new SOA managed server should also be already present.

    1. In the Domain Structure window, expand the Environment node and then click Servers. The Summary of Servers page appears.

    2. Click the name of the server (represented as a hyperlink) in the Name column of the table for which you want to configure migration. The Settings page for the selected server appears.

    3. Click the Migration subtab.

    4. In the Migration Configuration section, select the servers that participate in migration in the Available window by clicking the right arrow. Select the same migration targets as for the servers that already exist on the node.

      For example, for new managed servers on SOAHOST1, which is already running WLS_SOA1, select SOAHOST2. For new managed servers on SOAHOST2, which is already running WLS_SOA2, select SOAHOST1.

      Note:

      The appropriate resources must be available to run the managed servers concurrently during migration.

    5. Choose the Automatic Server Migration Enabled option. This enables the Node Manager to start a failed server on the target node automatically.

    6. Click Save.

    7. Restart the Administration Server, managed servers, and Node Manager.

      To restart the Administration Server, use the procedure in Section 8.4.3, "Starting the Administration Server on SOAHOST1."

  9. Update the cluster address to include the new server:

    1. In the Administration Console, select Environment, and then Cluster.

    2. Click the SOA_Cluster server.

      The Settings screen for the SOA_Cluster appears.

    3. Click Lock & Edit.

    4. Add the new server's address and port to the Cluster address field. For example: ADMINVHN:8011,SOAHOST2VHN1:8011,SOAHOST1VHN1:8001

    5. Save and activate the changes.

  10. Test server migration for this new server. To test migration, perform the following from the node where you added the new server:

    1. Stop the WLS_SOAn managed server.

      To do this, run kill -9 <pid> on the PID of the managed server. You can identify the PID of the managed server using ps -ef | grep WLS_SOAn.

    2. Monitor the Node Manager console for a message indicating that WLS_SOAn's floating IP has been disabled.

    3. Wait for the Node Manager to attempt a second restart of WLS_SOAn. Node Manager waits for a fence period of 30 seconds before trying this restart.

    4. Once Node Manager restarts the server, stop it again. Node Manager logs a message indicating that the server will not be restarted again locally.
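As referenced in step 5, the Coherence settings for the new server are supplied as server start arguments. The following is a minimal sketch; the well-known-address (wka) values shown are assumptions based on the existing SOA servers and must match your environment:

    # Hypothetical server start arguments for WLS_SOAn (Administration Console,
    # Server Start tab). Host names are placeholders.
    -Dtangosol.coherence.wka1=SOAHOST1VHN1
    -Dtangosol.coherence.wka2=SOAHOST2VHN1
    -Dtangosol.coherence.localhost=SOAHOST1VHNn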

16.4.2 Scaling Up Oracle WebCenter Portal

To scale up the WebCenter Portal topology:

Note:

Running multiple managed servers on one node is only supported for WC_Spaces and WC_Portlet servers.

  1. Using the WebLogic Server Administration Console, clone WC_Spaces1 or WC_Portlet1 into a new managed server. The source managed server to clone should be one that already exists on the node where you want to run the new managed server.

    To clone a managed server:

    1. In the Administration Console, select Environment, and then Servers.

    2. Click Lock & Edit.

    3. Select the managed server that you want to clone, for example, WC_Spaces1 or WC_Portlet1.

    4. Select Clone.

    5. Name the new managed server WC_SERVERNAMEn, where n is a number to identify the new managed server.

    For the remainder of the steps, you add the new server to WCPHOST1, which is already running WC_Spaces1 or WC_Portlet1.

  2. For the listen address, assign the host name or IP to use for this new managed server, which should be the same as an existing server.

    Ensure that the port number for this managed server is available on this node.

  3. Add the new managed server to the Java Object Cache Cluster. For details, see Section 10.5, "Configuring the Java Object Cache for Spaces_Cluster."

  4. Reconfigure the Oracle HTTP Server module with the new member in the cluster. For more information, see Section 10.11.1, "Configuring Oracle HTTP Server for the WC_Spacesn, WC_Portletn, WC_Utilitiesn, and WC_Collaborationn Managed Servers." Add the host and port of the new server to the end of the WebLogicCluster parameter (see the sketch following this list).

    • For WC_Spaces, add the member to the Location blocks for /webcenter, /webcenterhelp, /rss, /rest.

    • For WC_Portlet, add the member to the Location blocks for /portalTools, /wsrp-tools, /pagelets.
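The following is a minimal sketch of the change described in step 4 for the /webcenter Location block in the Oracle HTTP Server configuration. The host names and ports are assumptions; WCPHOST1:8890 stands in for the newly added managed server:

    # Hypothetical excerpt from the Oracle HTTP Server configuration on WEBHOST1 and WEBHOST2.
    <Location /webcenter>
      SetHandler weblogic-handler
      WebLogicCluster WCPHOST1:8888,WCPHOST2:8888,WCPHOST1:8890
    </Location>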

16.4.3 Scaling Up Oracle WebCenter Content

Only one Oracle WebCenter Content managed server per node per domain is supported by Oracle Fusion Middleware. To add additional Oracle WebCenter Content managed servers, follow the steps in "Scale-Out Procedure for Oracle WebCenter Content" in Oracle Fusion Middleware Enterprise Deployment Guide for Oracle Enterprise Content Management Suite to add an Oracle WebCenter Content managed server to a new node.

16.5 Scaling Out the Topology (Adding Managed Servers to New Nodes)

When you scale out the topology, you add new managed servers configured with SOA, WebCenter Portal, or WebCenter Content to new nodes.


16.5.1 Scaling Out Oracle SOA (includes WSM)

When you scale out the topology, you add new managed servers configured with SOA and/or WSM-PM to new nodes.

Before performing the steps in this section, check that you meet these requirements:

Prerequisites

  • There must be existing nodes running managed servers configured with SOA and WSM-PM within the topology.

  • The new node can access the existing home directories for WebLogic Server and SOA. (Use the existing installations in shared storage for creating a new WLS_SOA or WLS_WSM managed server. You do not need to install WebLogic Server or SOA binaries in a new location but you do need to run pack and unpack to bootstrap the domain configuration in the new node.)

  • When an ORACLE_HOME or WL_HOME is shared by multiple servers in different nodes, keep the Oracle Inventory and Middleware home list in those nodes updated for consistency in the installations and application of patches. To update the oraInventory in a node and "attach" an installation in a shared storage to it, use the attachHome.sh script in the following location:

    ORACLE_HOME/oui/bin/
    

    To update the Middleware home list to add or remove a WL_HOME, edit the beahomelist file located in the following directory:

    user_home/bea/
    

To scale out the topology:

  1. On the new node, mount the existing MW_Home, which should include the SOA installation and the domain directory, and ensure that the new node has access to this directory, just like the rest of the nodes in the domain.

  2. To attach ORACLE_HOME in shared storage to the local Oracle Inventory, execute the following command from SOAHOSTn:

    cd ORACLE_COMMON_HOME/oui/bin/
    ./attachHome.sh -jreLoc ORACLE_BASE/fmw/jrockit_160_<version>
    

    To update the Middleware home list, create (or edit, if another WebLogic installation exists in the node) the $HOME/bea/beahomelist file and add MW_HOME to it.

  3. Log in to the Oracle WebLogic Administration Console.

  4. Create a new machine for the new node that will be used, and add the machine to the domain.

  5. Update the machine's Node Manager's address to map the IP of the node that is being used for scale out.

  6. Use the Oracle WebLogic Server Administration Console to clone WLS_SOA1/WLS_WSM1 into a new managed server. Name it WLS_SOAn/WLS_WSMn, where n is a number.

    Note:

    These steps assume that you are adding a new server to node n, where no managed server was running previously.

  7. Assign the host name or IP to use for the new managed server for the listen address of the managed server.

    If you are planning to use server migration for this server (which Oracle recommends) this should be the VIP (also called a floating IP) for the server. This VIP should be different from the one used for the existing managed server.

  8. For WLS_WSM servers, run the Java Object Cache configuration utility again to include the new server in the JOC distributed cache as described in Section 8.5.5, "Configuring the Java Object Cache for Oracle WSM."

  9. Create JMS servers for SOA, BPM (if applicable), and UMS on the new managed server.

    Note:

    These steps are not required for scaling out the WLS_WSM managed server, only for WLS_SOA managed servers.

    Create the JMS servers for SOA and UMS as follows:

    1. Use the Oracle WebLogic Server Administration Console to create a new persistent store for the new SOAJMSServer (which will be created in a later step) and name it, for example, SOAJMSFileStore_N. Specify the path for the store as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment" as the directory for the JMS persistent stores:

      ORACLE_BASE/admin/domain_name/cluster_name/jms/SOAJMSFileStore_N
      

      Note:

      This directory must exist before the managed server is started or the start operation will fail.

    2. Create a new JMS server for SOA, for example, SOAJMSServer_N. Use the SOAJMSFileStore_N for this JMS server. Target the SOAJMSServer_N Server to the recently created managed server (WLS_SOAn).

    3. Create a new persistent store for the new UMSJMSServer, and name it, for example, UMSJMSFileStore_N. As the directory for the persistent store, specify the path recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment" as the directory for the JMS persistent stores:

      ORACLE_BASE/admin/domain_name/cluster_name/jms/UMSJMSFileStore_N
      

      Note:

      This directory must exist before the managed server is started or the start operation will fail.

      Note:

      It is also possible to assign SOAJMSFileStore_N as the store for the new UMS JMS servers. For the purpose of clarity and isolation, individual persistent stores are used in the following steps.

    4. Create a new JMS server for UMS: for example, UMSJMSServer_N. Use the UMSJMSFileStore_N for this JMS server. Target the UMSJMSServer_N Server to the recently created managed server (WLS_SOAn).

    5. For BPM systems only: Create a new persistent store for the new BPMJMSServer, for example, BPMJMSFileStore_N. Specify the path for the store. This should be a directory on shared storage as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment":

      ORACLE_BASE/admin/domain_name/cluster_name/jms/BPMJMSFileStore_N

      Note:

      This directory must exist before the managed server is started or the start operation fails.

      You can also assign SOAJMSFileStore_N as store for the new BPM JMS Servers. For the purpose of clarity and isolation, individual persistent stores are used in the following steps.

    6. For BPM systems only: Create a new JMS Server for BPM, for example, BPMJMSServer_N. Use the BPMJMSFileStore_N for this JMSServer. Target the BPMJMSServer_N Server to the recently created Managed Server (WLS_SOAn).

    7. Update the SubDeployment targets for the SOA JMS Module to include the recently created SOA JMS server. To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click SOAJMSModuleUDDs (represented as a hyperlink in the Names column of the table). The Settings page for SOAJMSModuleUDDs appears. Open the SubDeployments tab. The SOAJMSSubDM subdeployment appears.

      Note:

      This subdeployment module results from updating the JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2) with the Uniform Distributed Destination Script (soa-createUDD.py), which is required for the initial Enterprise Deployment topology setup.

      Click on it. Add the new JMS server for SOA called SOAJMSServer_N to this subdeployment. Click Save.

    8. Target the UMSJMSSystemResource to the SOA_Cluster as it may have changed during extend operations. To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click UMSJMSSystemResource and open the Targets tab. Make sure all of the servers in the SOA_Cluster appear selected (including the recently cloned WLS_SOAn).

    9. Update the SubDeployment Targets for SOA, UMS and BPM JMS Modules (if applicable) to include the recently created JMS servers.

      To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click the JMS module (for SOA: SOAJMSModule, for BPM: BPMJMSModule, and for UMS: UMSJMSSystemResource) represented as a hyperlink in the Names column of the table. The Settings page for the module appears. Open the SubDeployments tab. The subdeployment for the module appears.

      Note:

      This subdeployment module name is a random name in the form of SOAJMSServerXXXXXX, UMSJMSServerXXXXXX, or BPMJMSServerXXXXXX, resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).

      Click on it. Add the new JMS Server (for UMS add UMSJMSServer_N, for SOA add SOAJMSServer_N). Click Save and Activate.

  10. Run the pack command on SOAHOST1 to create a template pack as follows:

    cd ORACLE_COMMON_HOME/common/bin
    
    ./pack.sh -managed=true -domain=ORACLE_BASE/admin/domain_name/aserver/domain_name
    -template=soadomaintemplateScale.jar -template_name=soa_domain_templateScale
    

    Run the following command on SOAHOST1 to copy the created template file to SOAHOSTn:

    SOAHOST1> scp soadomaintemplateScale.jar oracle@SOAHOSTn:ORACLE_COMMON_HOME/common/bin
    

    Run the unpack command on SOAHOSTn to unpack the template in the managed server domain directory as follows:

    cd ORACLE_COMMON_HOME/common/bin
    
    ./unpack.sh -domain=ORACLE_BASE/admin/domain_name/mserver/domain_name/ \
      -template=soadomaintemplateScale.jar \
      -app_dir=ORACLE_BASE/admin/domain_name/mserver/apps
    
    
  11. Configure Oracle Coherence for deploying composites for the new server, as described in Section 9.4, "Configuring Oracle Coherence for Deploying Composites."

    Note:

    Only the localhost field needs to be changed for the server. Replace the localhost with the listen address of the new server added:

    -Dtangosol.coherence.localhost=SOAHOST1VHNn

  12. Configure the persistent store for the new server. This should be a location visible from other nodes as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment".

    From the Administration Console, select the Server_name, and then the Services tab. Under Default Store, in Directory, enter the path to the folder where you want the default persistent store to store its data files.

  13. Disable host name verification for the new managed server. Before starting and verifying the WLS_SOAn managed server, you must disable host name verification. You can re-enable it after you have configured server certificates for the communication between the Oracle WebLogic Administration Server and the Node Manager in SOAHOSTn.

    If the source server from which the new one was cloned already had host name verification disabled, these steps are not required because the host name verification setting is propagated to the cloned server. To disable host name verification:

    1. In the Oracle Fusion Middleware Enterprise Manager Console, select Oracle WebLogic Server Administration Console.

    2. Expand the Environment node in the Domain Structure window.

    3. Click Servers.

      The Summary of Servers page appears.

    4. Select WLS_SOAn in the Names column of the table.

      The Settings page for server appears.

    5. Click the SSL tab.

    6. Click Advanced.

    7. Set Hostname Verification to None.

    8. Click Save.

  14. Start Node Manager on the new node. To start Node Manager, use the installation in shared storage from the existing nodes, and start Node Manager by passing the host name of the new node as a parameter as follows:

    SOAHOSTn> WL_HOME/server/bin/startNodeManager new_node_ip
    
  15. Start and test the new managed server from the Oracle WebLogic Server Administration Console.

    1. Ensure that the newly created managed server, WLS_SOAn, is running.

    2. Access the application on the load balancer (https://soa.mycompany.com/soa-infra). The application should be functional.

      Note:

      The HTTP Servers in the topology should round-robin requests to the newly added server (a few requests, depending on the number of servers in the cluster, may be required to hit the new server). It is not required to add all servers in a cluster to the WebLogicCluster directive in Oracle HTTP Server's mod_wl_ohs.conf file. However, routing to new servers in the cluster takes place only if at least one of the servers listed in the WebLogicCluster directive is running.

  16. Configure server migration for the new managed server.

    Note:

    Because this new node uses an existing shared storage installation, the node is already using a Node Manager and an environment configured for server migration that includes netmask, interface, and wlsifconfig script superuser privileges. The floating IP for the new SOA managed server is already present in the new node.

    Log into the Oracle WebLogic Server Administration Console and configure server migration:

    1. Expand the Environment node in the Domain Structure windows and then choose Servers. The Summary of Servers page appears.

    2. Select the server (represented as a hyperlink) for which you want to configure migration from the Names column of the table. The Settings page for that server appears.

    3. Click the Migration tab.

    4. In the Available field of the Migration Configuration section, click the right arrow to select the machines to which to allow migration.

      Note:

      Specify the least-loaded machine as the migration target for the new server. The required capacity planning must be completed so that this node has enough available resources to sustain an additional managed server.

    5. Select Automatic Server Migration Enabled. This enables the Node Manager to start a failed server on the target node automatically.

    6. Click Save.

    7. Restart the Administration Server, managed servers, and the Node Manager.

      To restart the Administration Server, use the procedure in Section 8.4.3, "Starting the Administration Server on SOAHOST1."

  17. Update the cluster address to include the new server:

    1. In the Administration Console, select Environment, and then Cluster.

    2. Click the SOA_Cluster server.

      The Settings screen for the SOA_Cluster appears.

    3. Click Lock & Edit.

    4. Add the new server's address and port to the Cluster address field. For example: ADMINVHN:8011,SOAHOST2VHN1:8011,SOAHOSTNVHN1:8001

    5. Save and activate the changes.

  18. Test server migration for this new server from the node where you added the new server:

    1. Stop the WLS_SOAn managed server by running the following command on the PID (process ID) of the managed server:

      kill -9 pid
      

      You can identify the PID of the node using the following command:

      ps -ef | grep WLS_SOAn
      

      Note:

      For Windows, you can terminate the Managed Server using the taskkill command. For example:

      taskkill /f /pid pid
      

      Where pid is the process ID of the Managed Server.

      To determine the process ID of the WLS_SOAn Managed Server, run the following command:

      MW_HOME\jrockit_160_20_D1.0.1-2124\bin\jps -l -v
      
    2. In the Node Manager console you should see a message indicating that WLS_SOAn's floating IP has been disabled.

    3. Wait for the Node Manager to try a second restart of WLS_SOAn. Node Manager waits for a fence period of 30 seconds before trying this restart.

    4. Once Node Manager restarts the server, stop it again. Now Node Manager should log a message indicating that the server will not be restarted again locally.

16.5.2 Scaling Out Oracle WebCenter Portal

In scaling out your topology, you add new managed servers, configured with Oracle WebCenter Portal applications, to new nodes.

Before performing the steps in this section, check that you meet these requirements:

Prerequisites

  • There must be existing nodes running managed servers configured with WebCenter Portal within the topology.

  • The new node can access the existing home directories for WebLogic Server and Oracle WebCenter Portal. You use the existing installations in shared storage for creating a new managed server. There is no need to install WebLogic Server or WebCenter Portal binaries in a new location, although you need to run pack and unpack to create a managed server domain.

  • The WC_Spaces and WC_Utilities servers must be scaled out together on the new node (either both or neither), because of the local affinity between WebCenter Portal: Spaces and the Analytics application.

To scale out the topology:

  1. On the new node, mount the existing Middleware home, which should include the WebCenter Portal installation and the domain directory, and ensure that the new node has access to this directory, just as the rest of the nodes in the domain do.

  2. To attach ORACLE_HOME in shared storage to the local Oracle Inventory, execute the following commands:

    WCPHOSTn> cd ORACLE_BASE/product/fmw/wc/
    WCPHOSTn> ./attachHome.sh -jreLoc ORACLE_BASE/fmw/jrockit_160_<version>
    

    To update the Middleware home list, create (or edit, if another WebLogic installation exists in the node) the MW_HOME/bea/beahomelist file and add ORACLE_BASE/product/fmw to it.

  3. Log in to the Oracle WebLogic Administration Console.

  4. Create a new machine for the new node that will be used, and add the machine to the domain.

  5. Update the machine's Node Manager's address to map the IP address of the node that is being used for scale out.

  6. Use the Oracle WebLogic Server Administration Console to clone either WC_Spaces1 or WC_Portlet1 or WC_Collaboration1 or WC_Utilities1 into a new managed server. Name it WC_XXXn, where n is a number and assign it to the new machine.

  7. For the listen address, assign the host name or IP to use for the new managed server. Perform these steps to set the managed server listen address:

    1. Log into the Oracle WebLogic Server Administration Console.

    2. In the Change Center, click Lock & Edit.

    3. Expand the Environment node in the Domain Structure window.

    4. Click Servers. The Summary of Servers page appears.

    5. Select the managed server with the listen address you want to update in the Names column of the table. The Settings page for that managed server appears.

    6. Set the Listen Address to WCPHOSTn where WCPHOSTn is the DNS name of your new machine.

    7. Click Save.

    8. Save and activate the changes.

      The changes do not take effect until the managed server is restarted.

  8. Run the pack command on SOAHOST1 to create a template pack and unpack onto WCPHOSTn.

    These steps are documented in Section 10.4.1, "Propagating the Domain Configuration to SOAHOST2, WCPHOST1, and WCPHOST2 Using the unpack Utility."

  9. Start the Node Manager on the new node. To start the Node Manager, use the installation in shared storage from the existing nodes, and start Node Manager by passing the host name of the new node as a parameter as follows:

    WCPHOSTn> WL_HOME/server/bin/startNodeManager new_node_ip
    
  10. If this is a new Collaboration managed server:

    1. Ensure that you have followed the steps in Section 10.7, "Configuring Clustering on the Discussions Server," to configure clustering for the new Discussions Server.

    2. Ensure also that the steps in Section 10.6, "Converting Discussions from Multicast to Unicast" are performed, using the hostname of the new host for the coherence.localhost parameter.

  11. If this is a new Utilities managed server, ensure that Activity Graph is disabled by following the steps in Section 10.9, "Configuring Activity Graph." Ensure also that the steps for configuring a new Analytics Collector in Section 10.8, "Configuring Analytics" have been followed for the Utilities and the local Spaces Server.

  12. Start and test the new managed server from the Oracle WebLogic Server Administration Console:

    1. Ensure that the newly created managed server, WC_XXXn, is running.

    2. Access the relevant application through the load balancer (for example, https://wcp.mycompany.com/webcenter for a new Spaces server). The application should be functional.

      Note:

      The HTTP Servers in the topology should round-robin requests to the newly added server (a few requests, depending on the number of servers in the cluster, may be required to hit the new server). It is not required to add all servers in a cluster to the WebLogicCluster directive in Oracle HTTP Server's mod_wl_ohs.conf file. However, routing to new servers in the cluster takes place only if at least one of the servers listed in the WebLogicCluster directive is running.

16.5.3 Scaling Out Oracle WebCenter Content

To add additional Oracle WebCenter Content managed servers, follow the steps in "Scale-Out Procedure for Oracle WebCenter Content" in Oracle Fusion Middleware Enterprise Deployment Guide for Oracle Enterprise Content Management Suite to add an Oracle WebCenter Content managed server to a new node.

16.6 Performing Backups and Recoveries in WebCenter Portal Deployments

Table 16-1 lists the static artifacts to back up in WebCenter Portal enterprise deployments.

Table 16-1 Static Artifacts to Back Up in a WebCenter Portal (11g) Enterprise Deployment

  • Type: ORACLE HOME (DB); Host: RAC database hosts (CUSTDBHOST1 and CUSTDBHOST2); Location: user-defined; Tier: Data Tier

  • Type: ORACLE HOME (OHS); Host: WEBHOST1 and WEBHOST2; Location: ORACLE_BASE/admin/instance_name; Tier: Web Tier

  • Type: MW HOME (SOA + WC); Host: SOAHOST1 and SOAHOST2 (SOA), WCPHOST1 and WCPHOST2 (WC); Location: MW_HOME on all hosts; Tier: Application Tier

  • Type: ORACLE HOME (WCC); Host: WCPHOST1 and WCPHOST2; Location: on shared disk at /share/oracle/wcc, with local files on each host at ORACLE_HOME/wcc; Tier: Application Tier

  • Type: Installation-related files; Location: OraInventory, user_home/bea/beahomelist, oraInst.loc, oratab

Table 16-2 lists the runtime artifacts for back up in WebCenter Portal enterprise deployments.

Table 16-2 Run-Time Artifacts to Back Up in a WebCenter Portal (11g) Enterprise Deployment

  • Type: DOMAIN HOME; Host: SOAHOST1, SOAHOST2, WCPHOST1, and WCPHOST2; Location: ORACLE_BASE/admin/domain_name/mserver/domain_name; Tier: Application Tier

  • Type: Application artifacts (ear and war files); Host: SOAHOST1, SOAHOST2, WCPHOST1, and WCPHOST2; Location: review all the deployments through the Administration Console and back up all the application artifacts; Tier: Application Tier

  • Type: OHS instance home; Host: WEBHOST1 and WEBHOST2; Location: ORACLE_BASE/admin/instance_name; Tier: Web Tier

  • Type: OHS WCC configuration files; Host: WEBHOST1 and WEBHOST2; Location: on each host at /share/oracle/wcc, which is a local file system; Tier: Web Tier

  • Type: RAC databases; Host: CUSTDBHOST1 and CUSTDBHOST2; Location: user-defined; Tier: Data Tier

  • Type: Oracle WebCenter Content repository; Location: database-based; Tier: Data Tier

For more information on backup and recovery of Oracle Fusion Middleware components, see Oracle Fusion Middleware Administrator's Guide.
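As a minimal sketch of backing up one of the runtime artifacts listed above, the managed server domain home could be archived as follows. The backup location and archive name are assumptions; use your organization's standard backup tooling where one exists:

    # Hypothetical example: archive the managed server domain home on one host.
    # The target path and archive name are placeholders.
    tar -czf /u01/backups/domain_name_mserver_$(date +%Y%m%d).tar.gz \
      -C ORACLE_BASE/admin/domain_name/mserver domain_name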

16.7 Preventing Timeouts for SQLNet Connections

Much of the Enterprise Deployment production deployment involves firewalls. Because database connections are made across firewalls, Oracle recommends that the firewall be configured so that the database connection is not timed out. For Oracle Real Application Clusters (Oracle RAC), the database connections are made on Oracle RAC VIPs and the database listener port. You must configure the firewall to not time out such connections. If such a configuration is not possible, set the SQLNET.EXPIRE_TIME=n parameter in the sqlnet.ora file, located in the following directory:

ORACLE_HOME/network/admin

The n indicates the time in minutes. Set this value to less than the known value of the timeout for the network device (that is, a firewall). For Oracle RAC, set this parameter in all of the Oracle home directories.
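For example, assuming a firewall idle timeout somewhat longer than 10 minutes, the entry in ORACLE_HOME/network/admin/sqlnet.ora might look like this; 10 is an assumed value, not a recommendation:

    # Hypothetical sqlnet.ora entry; choose a value (in minutes) lower than the
    # firewall's connection idle timeout.
    SQLNET.EXPIRE_TIME=10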

16.8 Troubleshooting Oracle WebCenter Portal Enterprise Deployments

This section describes possible issues with WebCenter Portal enterprise deployment and suggested solutions.


16.8.1 Error While Activating Changes in Administration Console

Problem: Activation of changes in Administration Console fails after changes to a server's start configuration have been performed. The Administration Console reports the following when clicking "Activate Changes":

An error occurred during activation of changes, please see the log for details.
 [Management:141190]The commit phase of the configuration update failed with an exception:
In production mode, it's not allowed to set a clear text value to the property: PasswordEncrypted of ServerStartMBean

Solution: This may happen when start parameters are changed for a server in the Administration Console. In this case, either provide username/password information in the server start configuration in the Administration Console for the specific server whose configuration was being changed, or remove the <password-encrypted></password-encrypted> entry in the config.xml file (this requires a restart of the Administration Server).
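For illustration, the entry to remove is located in the server-start element for the affected server in config.xml. The following excerpt is a hypothetical sketch; the server name, arguments, and encrypted value are placeholders:

    <!-- Hypothetical config.xml excerpt; remove the password-encrypted entry
         for the affected server. Values shown are placeholders. -->
    <server>
      <name>WLS_SOA1</name>
      <server-start>
        <arguments>-Xms2048m -Xmx2048m</arguments>
        <password-encrypted>{AES}placeholder=</password-encrypted>
      </server-start>
    </server>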

16.8.2 Redirecting of Users to Login Screen After Activating Changes in Administration Console

Problem: After configuring OHS and load balancer to access the Oracle WebLogic Administration Console, some activation changes cause the redirection to the login screen for the admin console.

Solution: This is the result of the console attempting to follow changes to port, channel, and security settings as a user makes these changes. For certain changes, the console may redirect to the Administration Server's listen address. Activation is completed regardless of the redirection. It is not required to log in again; users can simply update the URL to wcp.mycompany.com/console/console.portal and directly access the home page for the Administration Console.

Note:

This problem does not occur if you disabled tracking of the changes described in this section.

16.8.3 Redirecting of Users to Administration Console's Home Page After Activating Changes to OAM

Problem: After configuring OAM, some activation changes cause the redirection to the Administration Console's home page (instead of the context menu where the activation was performed).

Solution: This is expected when OAM SSO is configured and is the result of the redirections performed by the Administration Server. Activation is completed regardless of the redirection. If required, users may "manually" navigate again to the desired context menu.

16.8.4 WC_Spaces Server Does Not Start after Propagation of Domain

Problem: The WC_Spaces server fails to start after propagating the domain configuration to SOAHOST2, WCPHOST1, and WCPHOST2 using the unpack utility:

[Deployer:149158]No application files exist at '/u01/app/oracle/admin/wcpedg_domain/apps/wcpedg_domain/custom.webcenter.spaces.fwk'... 

Solution: Copy all the files from the managed server applications location to the one expected by the managed server deployer. For example:

cp /u01/app/oracle/admin/wcpedg_domain/mserver/apps/* /u01/app/oracle/admin/wcpedg_domain/apps/wcpedg_domain/

Note:

Make sure /u01/app/oracle/admin/wcpedg_domain/apps/wcpedg_domain/ exists before copying the contents.

16.8.5 Administration Server Fails to Start After a Manual Failover

Problem: The Administration Server fails to start after the Administration Server node failed and a manual failover to another node was performed. The Administration Server output log reports the following:

<Warning> <EmbeddedLDAP> <BEA-171520> <Could not obtain an exclusive lock for directory: 
ORACLE_BASE/admin/soadomain/aserver/soadomain/servers/AdminServer/data/ldap/ldapfiles. 
Waiting for 10 seconds and then retrying in case existing WebLogic Server is still shutting down.>

Solution: When restoring a node after a node crash and using shared storage for the domain directory, you may see this error in the log for the Administration Server due to unsuccessful lock cleanup. To resolve this error, remove the file:

ORACLE_BASE/admin/domain_name/aserver/domain_name/servers/AdminServer/data/ldap/ldapfiles/EmbeddedLDAP.lok
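For example, a sketch of removing the lock file (substitute your actual ORACLE_BASE and domain name):

    # Hypothetical example; ORACLE_BASE and domain_name are placeholders.
    rm ORACLE_BASE/admin/domain_name/aserver/domain_name/servers/AdminServer/data/ldap/ldapfiles/EmbeddedLDAP.lok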

16.8.6 Portlet Unavailable After Database Failover

Problem: While creating a page inside the Spaces application, if you add a portlet to the page and a database failover occurs, an error component displays on the page:

"Error"
"Portlet unavailable"

This message remains even if you refresh the page or log out and back in again.

Solution: To resolve this issue, delete the component and add it again.

16.8.7 Configured JOC Port Already in Use

Problem: Attempts to start a Managed Server that uses the Java Object Cache, such as OWSM or WebCenter Portal Managed Servers, fail. The following errors appear in the logs:

J2EE JOC-058 distributed cache initialization failure
J2EE JOC-043 base exception:
J2EE JOC-803 unexpected EOF during read.

Solution: Another process is using the same port that JOC is attempting to obtain. Either stop that process, or reconfigure JOC for this cluster to use another port in the recommended port range.
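To identify the process holding the configured JOC port, a quick check such as the following can help; the port number 9991 is an assumed example, not necessarily the value configured for your cluster:

    # Hypothetical check for a port conflict on Linux; 9991 is a placeholder
    # for the JOC port configured for this cluster.
    netstat -anp | grep 9991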

16.8.8 Restoring a JMS Configuration

Problem: A mistake in the parameters passed to the soa-createUDD.py script, or some other error, causes the JMS configuration for SOA clusters to fail.

Solution: Use soa-createUDD.py to restore the configuration.

A mistake might be made while running the soa-createUDD.py script after the SOA cluster is created from the Oracle Fusion Middleware Configuration Wizard (for example, an incorrect option is used, a target is modified, or a module is deleted accidentally). In these situations, you can use the soa-createUDD.py script to restore the appropriate JMS configuration using the following steps:

  1. Delete the existing SOA JMS resources (JMS Modules owned by the soa-infrastructure system).

  2. Run the soa-createUDD.py script again. The script assumes the JMS servers created for SOA are preserved and creates the destinations and subdeployment modules required to use uniform distributed destinations for SOA. In this case, execute the script with the --soacluster option. After running the script again, verify from the WebLogic Server Administration Console that the following artifacts exist (Domain Structure, Services, Messaging, JMS Modules):

    SOAJMSModuleUDDs        ---->SOAJMSSubDM targeted to SOAJMSServer_auto_1 and SOAJMSServer_auto_2
    UMSJMSSystemResource    ---->UMSJMSSubDMSOA targeted to UMSJMSServer_auto_1 and UMSJMSServer_auto_2
    
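The following is a hypothetical invocation sketch. The script location is an assumption for your environment, and additional parameters (such as the domain home) may be required; --soacluster is the option named above:

    # Hypothetical invocation; verify the script location and required parameters
    # for your environment before running it.
    cd ORACLE_COMMON_HOME/common/bin
    ./wlst.sh ORACLE_HOME/bin/soa-createUDD.py --soacluster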

16.8.9 OAM Configuration Tool Does Not Remove URLs

Problem: The OAM Configuration Tool has been used and a set of URLs were added to the policies in Oracle Access Manager. One or more URLs are incorrect. Executing the OAM Configuration Tool again with the correct URLs completes successfully; however, when accessing Policy Manager, the incorrect URL is still there.

Solution: The OAM Configuration Tool only adds new URLs to existing policies when executed with the same app_domain name. To remove a URL, use the Policy Manager Console in OAM. Log on to the Access Administration site for OAM, click My Policy Domains, click the created policy domain (WCP_EDG), then the Resources tab, and remove the incorrect URLs.

16.8.10 Disabling Secondary Authentication After REST Policy Configuration

Problem: After REST policy configuration, external clients authenticating through OAM are still prompted for further authentication.

Solution: The secondary authentication is coming from WebLogic. To disable the WebLogic credential prompt, you must update the security policy:

  1. Locate the file:

    /aserver/domain_name/config/config.xml
    
  2. At the end of the security configuration section (that is, before </security-configuration>), add the line:

    <enforce-valid-basic-auth-credentials>false</enforce-valid-basic-auth-credentials>
    
  3. Restart all the servers in the domain.
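For illustration, a hypothetical sketch of where the line from step 2 sits inside config.xml; other elements of the security-configuration section are omitted:

    <!-- Hypothetical excerpt; surrounding security-configuration elements omitted. -->
    <security-configuration>
      ...
      <enforce-valid-basic-auth-credentials>false</enforce-valid-basic-auth-credentials>
    </security-configuration>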

16.8.11 Sudo Error Occurs During Server Migration

Problem: When running wlsifconfig for server migration, the following warning displays:

sudo: sorry, you must have a tty to run sudo

Solution: The WebLogic user ('oracle') is not allowed to run sudo in the background. To solve this, add the following line into /etc/sudoers:

Defaults:oracle !requiretty

See also, Section 14.6, "Setting Environment and Superuser Privileges for the wlsifconfig.sh Script".