Oracle® Fusion Middleware Enterprise Deployment Guide for Oracle WebCenter Portal
11g Release 1 (11.1.1.8.3)

Part Number E12037-12

16 Managing the Topology for an Enterprise Deployment

This chapter describes some operations that you can perform after you have set up the topology. These operations include monitoring, scaling, and backing up your topology.

This chapter includes the following topics:

16.1 Overview of Managing the Topology

After configuring the WebCenter Portal enterprise deployment, use the information in this chapter to manage the topology.

For information on monitoring the topology and WebCenter Portal applications, see the "Monitoring Oracle WebCenter Portal Performance" section in Oracle Fusion Middleware Administering Oracle WebCenter Portal.

At some point you may need to expand the topology by scaling it up, or out. See Section 16.4, "Scaling Up the Topology (Adding Managed Servers to Existing Nodes)" and Section 16.5, "Scaling Out the Topology (Adding Managed Servers to New Nodes)" for information about the difference between scaling up and scaling out, and instructions for performing these tasks.

Back up the topology before and after any configuration changes. Section 16.6, "Performing Backups and Recoveries in WebCenter Portal Deployments" provides information about the directories and files that should be backed up to protect against failure as a result of configuration changes.

This chapter also documents solutions for possible known issues that may occur after you have configured the topology.

16.2 Managing Space in the SOA Infrastructure Database

Although not all composites may use the database frequently, the service engines generate a considerable amount of data in the CUBE_INSTANCE and MEDIATOR_INSTANCE schemas. Lack of space in the database may prevent SOA composites from functioning.
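Before taking action, it helps to see how much space the instance tables mentioned above actually consume. The following is a hedged diagnostic sketch, not the guide's procedure: the table names come from the text above, while the connection details are illustrative placeholders.

```python
# Hypothetical diagnostic: build a query reporting how much space the
# fastest-growing SOA instance tables consume. The connection is
# illustrative and therefore commented out.
TABLES = ("CUBE_INSTANCE", "MEDIATOR_INSTANCE")

def space_usage_query(tables=TABLES):
    in_list = ", ".join("'%s'" % t for t in tables)
    return ("SELECT segment_name, ROUND(bytes / 1024 / 1024) AS mb "
            "FROM user_segments WHERE segment_name IN (%s)" % in_list)

# import cx_Oracle                                    # hypothetical usage
# with cx_Oracle.connect("soainfra/pwd@soadb") as c:  # placeholder DSN
#     print(c.cursor().execute(space_usage_query()).fetchall())
print(space_usage_query())
```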

To manage space in the SOA infrastructure database:

16.3 Configuring UMS Drivers

UMS driver configuration is not automatically propagated in a SOA cluster. To propagate UMS driver configuration in a cluster:

To create the UMS driver configuration file in preparation for possible failovers, force a server migration and copy the file from the source node.

You must restart the driver for these changes to take effect (that is, for the driver to consume the modified configuration). To restart the driver:

  1. Log on to the Oracle WebLogic Administration Console.

  2. Expand the environment node on the navigation tree.

  3. Click on Deployments.

  4. Select the driver.

  5. Click Stop->When work completes and confirm the operation.

  6. Wait for the driver to transition to the "Prepared" state (refresh the administration console page, if required).

  7. Select the driver again, and click Start->Servicing all requests and confirm the operation.

Verify in Oracle Enterprise Manager Fusion Middleware Control that the properties for the driver have been preserved.

16.4 Scaling Up the Topology (Adding Managed Servers to Existing Nodes)

When you scale up the topology, you add new managed servers to nodes that are already running one or more managed servers. You can use the existing node installations (such as WebLogic Server home, Oracle Fusion Middleware home, and domain directories), when you create the new managed servers. You do not need to install WebLogic Server, SOA or WebCenter Portal binaries at a new location or to run pack and unpack.

When you scale up a server that uses server migration, plan for your appropriate capacity and resource allocation needs. Take the following scenario for example:

In this scenario, all servers (server1, server2, server3, and the Administration Server) may end up running on a single node (Node1 or Node2). This means each node must be designed with enough resources to sustain the worst-case scenario, in which all servers that use server migration end up on one single node (as defined in the server migration candidate machine configuration).
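The worst-case sizing described above can be checked with simple arithmetic. The heap sizes below are hypothetical examples, not values from this guide:

```python
# Back-of-the-envelope worst case for server migration: every migratable
# server plus the Administration Server lands on one node. Heap sizes in
# GB are hypothetical.
heaps_gb = {"AdminServer": 1.0, "WLS_SOA1": 4.0, "WLS_SOA2": 4.0, "WLS_SOA3": 4.0}

worst_case_gb = sum(heaps_gb.values())  # one node must hold all of them
print(worst_case_gb)  # 13.0
```

Each candidate node must be provisioned for this total (plus operating-system headroom), not just for the servers it normally hosts.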

Note:

A shared domain directory for a managed server with WebCenter Content Server does not work because certain files within the domain, such as intradoc.cfg, are specific to each node. To prevent issues with node-specific files, use a local (per node) domain directory for each WebCenter Content and Inbound Refinery managed server.

This section includes the following topics:

16.4.1 Scaling up Oracle SOA (includes WSM)

To scale up the SOA topology (includes WSM):

  1. Using the Oracle WebLogic Server Administration Console, clone WLS_SOA1 or WLS_WSM1 into a new managed server. The source managed server to clone should be one that already exists on the node where you want to run the new managed server.

    To clone a managed server:

    1. From the Domain Structure window of the Oracle WebLogic Server Administration Console, expand the Environment node and then Servers. The Summary of Servers page appears.

    2. Click Lock & Edit and select the managed server that you want to clone (for example, WLS_SOA1).

    3. Click Clone.

    4. Name the new managed server WLS_SOAn, where n is a number that identifies the new managed server. In this case, you are adding a new server to Node 1, where WLS_SOA1 was running.

    For the remainder of the steps, you are adding a new server to SOAHOST1, which is already running WLS_SOA1.

  2. For the listen address, assign the host name or IP to use for this new managed server. If you are planning to use server migration as recommended for this server, enter the VIP (also called a floating IP) to enable it to move to another node. The VIP should be different from the one used by the managed server that is already running.

  3. For WLS_WSM servers, run the Java Object Cache configuration utility again to include the new server in the JOC distributed cache as described in Section 8.6, "Configuring the Java Object Cache for Oracle WSM." You can use the same discover port for multiple WLS_WSM servers in the same node. Repeat the steps provided in Section 8.6, "Configuring the Java Object Cache for Oracle WSM" for each WLS_WSM server so that the server list is updated.

  4. Create JMS servers for SOA and UMS on the new managed server.

    1. Use the Oracle WebLogic Server Administration Console to create a new persistent store for the new SOAJMSServer (which will be created in a later step) and name it, for example, SOAJMSFileStore_N. Specify the path for the store as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment" as the directory for the JMS persistent stores:

      ORACLE_BASE/admin/domain_name/cluster_name/jms
      

      Note:

      This directory must exist before the managed server is started or the start operation will fail.

    2. Create a new JMS server for SOA: for example, SOAJMSServer_N. Use the SOAJMSFileStore_N for this JMS server. Target the SOAJMSServer_N server to the recently created managed server (WLS_SOAn).

    3. Create a new persistence store for the new UMS JMS server (which will be created in a later step) and name it, for example, UMSJMSFileStore_N. Specify the path for the store as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment" as the directory for the JMS persistent stores:

      ORACLE_BASE/admin/domain_name/cluster_name/jms
      

      Note:

      This directory must exist before the managed server is started or the start operation will fail.

      Note:

      It is also possible to assign SOAJMSFileStore_N as the store for the new UMS JMS servers. For the purpose of clarity and isolation, individual persistent stores are used in the following steps.

    4. Create a new JMS Server for UMS: for example, UMSJMSServer_N. Use the UMSJMSFileStore_N for this JMS server. Target the UMSJMSServer_N server to the recently created managed server (WLS_SOAn).

    5. For BPM Systems only: Create a new persistence store for the new BPMJMSServer, for example, BPMJMSFileStore_N. Specify the path for the store. This should be a directory on shared storage as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment":

      ORACLE_BASE/admin/domain_name/cluster_name/jms
      

      Note:

      This directory must exist before the managed server is started or the start operation fails.

      You can also assign SOAJMSFileStore_N as store for the new BPM JMS Servers. For the purpose of clarity and isolation, individual persistent stores are used in the following steps.

    6. For BPM systems only: Create a new JMS Server for BPM, for example, BPMJMSServer_N. Use the BPMJMSFileStore_N for this JMSServer. Target the BPMJMSServer_N Server to the recently created Managed Server (WLS_SOAn).

    7. Target the UMSJMSSystemResource to the SOA_Cluster as it may have changed during extend operations. To do this, expand the Services node and then expand the Messaging node. Select JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click UMSJMSSystemResource and open the Targets tab. Make sure all of the servers in the SOA_Cluster appear selected (including the recently cloned WLS_SOAn).

    8. Update the SubDeployment Targets for SOA, UMS and BPM JMS Modules (if applicable) to include the recently created JMS servers.

      To do this, expand the Services node and then expand the Messaging node. Select JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click the JMS module (for SOA: SOAJMSModule, for BPM: BPMJMSModule, and for UMS: UMSJMSSystemResource) represented as a hyperlink in the Names column of the table. The Settings page for the module appears. Open the SubDeployments tab. The subdeployment for the deployment module appears.

      Note:

      This subdeployment module name is a random name in the form of SOAJMSServerXXXXXX, UMSJMSServerXXXXXX, or BPMJMSServerXXXXXX, resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).

      Click on it. Add the new JMS Server (for UMS add UMSJMSServer_N, for SOA add SOAJMSServer_N). Click Save and Activate.
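The notes in the persistent-store steps above stress that each JMS store directory must exist before the managed server starts. A small sketch that pre-creates the guide's recommended path; ORACLE_BASE is illustrative (a temporary directory here), whereas in a real deployment it is the shared-storage path:

```python
import os
import tempfile

# Illustrative ORACLE_BASE; in a real deployment this is shared storage.
ORACLE_BASE = tempfile.mkdtemp()
domain_name, cluster_name = "soadomain", "SOA_Cluster"

# The guide's recommended location for JMS persistent stores.
jms_dir = os.path.join(ORACLE_BASE, "admin", domain_name, cluster_name, "jms")
os.makedirs(jms_dir, exist_ok=True)  # idempotent: safe to re-run
print(os.path.isdir(jms_dir))  # True
```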

  5. Configure Oracle Coherence for deploying composites for the new server as described in Section 9.4, "Configuring Oracle Coherence for Deploying Composites."

    Note:

    Only the localhost field must be changed for the server. Replace the localhost with the listen address of the new server added:

    -Dtangosol.coherence.localhost=SOAHOST1VHNn

  6. Reconfigure the JMS Adapter with the new server using the FactoryProperties field in the Administration Console. Click on the corresponding cell under the Property value and enter the following:

    java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory;java.naming.provider.url=t3://soahostvhn1:8001,soahost2vhn1:8001;java.naming.security.principal=weblogic;java.naming.security.credentials=myPassword1
    

    Click Save and Activate.
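The FactoryProperties value above is a single semicolon-separated string that must be regenerated whenever the cluster's server list changes. A helper that builds it; the host names, port, and credentials are placeholders mirroring the example above:

```python
def factory_properties(hosts, port=8001, user="weblogic", password="myPassword1"):
    """Rebuild the JMS adapter FactoryProperties string for a host list.

    Hosts, port, and credentials are placeholder values; substitute your own.
    """
    url = "t3://" + ",".join("%s:%d" % (h, port) for h in hosts)
    return ("java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory;"
            "java.naming.provider.url=%s;"
            "java.naming.security.principal=%s;"
            "java.naming.security.credentials=%s" % (url, user, password))

print(factory_properties(["soahostvhn1", "soahost2vhn1", "soahostvhn2"]))
```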

  7. Configure the persistent store for the new server. This should be a location visible from other nodes as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment".

    From the Administration Console, select the Server_name, and then the Services tab. Under Default Store, in Directory, enter the path to the folder where you want the default persistent store to store its data files.

  8. Disable host name verification for the new managed server. Before starting and verifying the WLS_SOAn managed server, you must disable host name verification. You can re-enable it after you have configured server certificates for the communication between the Oracle WebLogic Administration Server and the Node Manager in SOAHOSTn.

    If the source server from which the new one was cloned already had host name verification disabled, these steps are not required (the host name verification setting is propagated to the cloned server). To disable host name verification:

    1. In the Oracle Fusion Middleware Enterprise Manager Console, select Oracle WebLogic Server Administration Console.

    2. Expand the Environment node in the Domain Structure window.

    3. Click Servers.

      The Summary of Servers page appears.

    4. Select WLS_SOAn in the Names column of the table.

      The Settings page for the server appears.

    5. Click the SSL tab.

    6. Click Advanced.

    7. Set Hostname Verification to None.

    8. Click Save.

  9. Configure server migration for the new managed server. To configure server migration using the Oracle WebLogic Server Administration Console:

    Note:

    Because this is a scale-up operation, the node should already contain a Node Manager and environment configured for server migration that includes netmask, interface, wlsifconfig script superuser privileges, and so on. The floating IP for the new SOA managed server should also be already present.

    1. In the Domain Structure window, expand the Environment node and then click Servers. The Summary of Servers page appears.

    2. Click the name of the server (represented as a hyperlink) in Name column of the table for which you want to configure migration. The settings page for the selected server appears.

    3. Click the Migration subtab.

    4. In the Migration Configuration section, select the servers that participate in migration in the Available window by clicking the right arrow. Select the same migration targets as for the servers that already exist on the node.

      For example, for new managed servers on SOAHOST1, which is already running WLS_SOA1, select SOAHOST2. For new managed servers on SOAHOST2, which is already running WLS_SOA2, select SOAHOST1.

      Note:

      The appropriate resources must be available to run the managed servers concurrently during migration.

    5. Select the Automatic Server Migration Enabled option. This enables the Node Manager to start a failed server on the target node automatically.

    6. Click Save.

    7. Restart the Administration Server, managed servers, and Node Manager.

      To restart the Administration Server, use the procedure in Section 8.4.3, "Starting the Administration Server on SOAHOST1."

  10. Update the cluster address to include the new server:

    1. In the Administration Console, select Environment, and then Cluster.

    2. Click the SOA_Cluster server.

      The Settings screen for the SOA_Cluster appears.

    3. Click Lock & Edit.

    4. Add the new server's address and port to the Cluster address field. For example: SOAHOST1VHN1:8011,SOAHOST2VHN1:8011,SOAHOST1VHN1:8001

    5. Save and activate the changes.
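The cluster address is a flat comma-separated list of host:port entries, so an update amounts to appending the new member. A small helper that does this without introducing duplicates (the addresses mirror the example above):

```python
def add_to_cluster_address(cluster_address, host, port):
    """Append host:port to a comma-separated cluster address (no duplicates)."""
    members = [m for m in cluster_address.split(",") if m]
    entry = "%s:%d" % (host, port)
    if entry not in members:
        members.append(entry)
    return ",".join(members)

addr = "SOAHOST1VHN1:8011,SOAHOST2VHN1:8011"
addr = add_to_cluster_address(addr, "SOAHOST1VHN1", 8001)
print(addr)  # SOAHOST1VHN1:8011,SOAHOST2VHN1:8011,SOAHOST1VHN1:8001
```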

  11. Test server migration for this new server. To test migration, perform the following from the node where you added the new server:

    1. Stop the WLS_SOAn managed server.

      To do this, run kill -9 <pid> on the PID of the managed server. You can identify the PID of the managed server using ps -ef | grep WLS_SOAn.

    2. Monitor the Node Manager Console for a message indicating that WLS_SOA1's floating IP has been disabled.

    3. Wait for the Node Manager to attempt a second restart of WLS_SOAn. Node Manager waits for a fence period of 30 seconds before trying this restart.

    4. Once Node Manager restarts the server, stop it again. Node Manager logs a message indicating that the server will not be restarted again locally.
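Finding the right PID from `ps -ef` output, as in the migration test above, can be error-prone by hand (the `grep` process itself matches the server name). A sketch of the parsing logic, run against sample output rather than a live system:

```python
def find_pids(ps_output, server_name):
    """Extract PIDs from `ps -ef` output lines for the given managed server.

    Skips the `grep` line itself; the PID is the second whitespace field.
    """
    pids = []
    for line in ps_output.splitlines():
        if server_name in line and "grep" not in line:
            fields = line.split()
            if len(fields) > 1 and fields[1].isdigit():
                pids.append(int(fields[1]))
    return pids

# Sample ps -ef output (illustrative).
sample = ("oracle  4321     1  2 10:02 ?     00:01:12 java -Dweblogic.Name=WLS_SOA3 ...\n"
          "oracle  9876  5555  0 10:05 pts/0 00:00:00 grep WLS_SOA3\n")
print(find_pids(sample, "WLS_SOA3"))  # [4321]
```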

16.4.2 Scaling Up Oracle WebCenter Portal

To scale up the WebCenter Portal topology:

Note:

Running multiple managed servers on one node is only supported for WC_Spaces and WC_Portlet servers.

  1. Using the WebLogic Server Administration Console, clone WC_Spaces1 or WC_Portlet1 into a new managed server. The source managed server to clone should be one that already exists on the node where you want to run the new managed server.

    To clone a managed server:

    1. In the Administration Console, select Environment, and then Servers.

    2. Click Lock & Edit.

    3. Select the managed server that you want to clone, for example, WC_Spaces1 or WC_Portlet1.

    4. Select Clone.

    5. Name the new managed server WC_SERVERNAMEn, where n is a number to identify the new managed server.

    For the remainder of the steps, you add the new server to WCPHOST1, which is already running WC_Spaces1 or WC_Portlet1.

  2. For the listen address, assign the host name or IP to use for this new managed server, which should be the same as an existing server.

    Ensure that the port number for this managed server is available on this node.

  3. Add the new managed server to the Java Object Cache Cluster. For details, see Section 10.5, "Configuring the Java Object Cache for Spaces_Cluster."

  4. Reconfigure the Oracle HTTP Server module with the new member in the cluster. For more information see Section 10.11.1, "Configuring Oracle HTTP Server for the WC_Spacesn, WC_Portletn, WC_Utilitiesn, and WC_Collaborationn Managed Servers." Add the host and port of the new server to the end of the WebLogicCluster parameter.

    • For WC_Spaces, add the member to the Location blocks for /webcenter, /webcenterhelp, /rss, /rest.

    • For WC_Portlet, add the member to the Location blocks for /portalTools, /wsrp-tools, /pagelets.

16.4.3 Scaling Up Oracle WebCenter Content

Only one Oracle WebCenter Content managed server per node per domain is supported by Oracle Fusion Middleware. To add additional Oracle WebCenter Content managed servers, follow the steps in the "Scale-Out Procedure for WebCenter Content" section in Oracle Fusion Middleware Enterprise Deployment Guide for Oracle WebCenter Content to add an Oracle WebCenter Content managed server to a new node.

16.5 Scaling Out the Topology (Adding Managed Servers to New Nodes)

When you scale out the topology, you add new managed servers configured with SOA, WebCenter Portal, or WebCenter Content to new nodes.

This section includes the following topics:

16.5.1 Scaling Out Oracle SOA (includes WSM)

When you scale out the topology, you add new managed servers configured with SOA and/or WSM-PM to new nodes.

Before performing the steps in this section, check that you meet these requirements:

Prerequisites

  • There must be existing nodes running managed servers configured with SOA and WSM-PM within the topology

  • The new node can access the existing home directories for WebLogic Server and SOA. (Use the existing installations in shared storage for creating a new WLS_SOA or WLS_WSM managed server. You do not need to install WebLogic Server or SOA binaries in a new location but you do need to run pack and unpack to bootstrap the domain configuration in the new node.)

  • When an ORACLE_HOME or WL_HOME is shared by multiple servers in different nodes, keep the Oracle Inventory and Middleware home list in those nodes updated for consistency in the installations and application of patches. To update the oraInventory in a node and "attach" an installation in a shared storage to it, use the attachHome.sh script in the following location:

    ORACLE_HOME/oui/bin/
    

    To update the Middleware home list to add or remove a WL_HOME, edit the beahomelist file located in the following directory:

    user_home/bea/
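Keeping the beahomelist file consistent across nodes, as described above, can be scripted. A hedged sketch that adds a Middleware home only if it is not already listed; it assumes the file holds comma-separated entries and demonstrates against a temporary file rather than the real user_home/bea/ location:

```python
import os
import tempfile

def register_mw_home(beahomelist_path, mw_home):
    """Add mw_home to beahomelist if absent (assumes comma-separated entries)."""
    homes = []
    if os.path.exists(beahomelist_path):
        with open(beahomelist_path) as f:
            homes = [h for h in f.read().strip().split(",") if h]
    if mw_home not in homes:
        homes.append(mw_home)
        with open(beahomelist_path, "w") as f:
            f.write(",".join(homes))
    return homes

# Demo against a temp file; the real file lives under user_home/bea/.
path = os.path.join(tempfile.mkdtemp(), "beahomelist")
register_mw_home(path, "/u01/app/oracle/fmw")
register_mw_home(path, "/u01/app/oracle/fmw")  # second call is a no-op
print(open(path).read())  # /u01/app/oracle/fmw
```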
    

To scale out the topology:

  1. On the new node, mount the existing MW_Home, which should include the SOA installation and the domain directory, and ensure that the new node has access to this directory, just like the rest of the nodes in the domain.

  2. To attach ORACLE_HOME in shared storage to the local Oracle Inventory, execute the following command from SOAHOSTn:

    cd ORACLE_COMMON_HOME/oui/bin/
    ./attachHome.sh -jreLoc ORACLE_BASE/fmw/jrockit_160_version
    

    To update the Middleware home list, create (or edit, if another WebLogic installation exists in the node) the $HOME/bea/beahomelist file and add MW_HOME to it.

    Note:

    The examples documented in this guide use JRockit. Any certified version of Java can be used for this procedure and is fully supported unless otherwise noted.

  3. Log in to the Oracle WebLogic Administration Console.

  4. Create a new machine for the new node that will be used, and add the machine to the domain.

  5. Update the machine's Node Manager address to map to the IP of the node that is being used for scale out.

  6. Use the Oracle WebLogic Server Administration Console to clone WLS_SOA1/WLS_WSM1 into a new managed server. Name it WLS_SOAn/WLS_WSMn, where n is a number.

    Note:

    These steps assume that you are adding a new server to node n, where no managed server was running previously.

  7. Assign the host name or IP to use for the new managed server for the listen address of the managed server.

    If you are planning to use server migration for this server (which Oracle recommends) this should be the VIP (also called a floating IP) for the server. This VIP should be different from the one used for the existing managed server.

  8. For WLS_WSM servers, run the Java Object Cache configuration utility again to include the new server in the JOC distributed cache as described in Section 8.6, "Configuring the Java Object Cache for Oracle WSM."

  9. Create JMS Servers for SOA, BPM, (if applicable) and UMS on the new managed server.

    Note:

    These steps are not required for scaling out the WLS_WSM managed server, only for WLS_SOA managed servers.

    Create the JMS servers for SOA and UMS as follows:

    1. Use the Oracle WebLogic Server Administration Console to create a new persistent store for the new SOAJMSServer (which will be created in a later step) and name it, for example, SOAJMSFileStore_N. Specify the path for the store as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment" as the directory for the JMS persistent stores:

      ORACLE_BASE/admin/domain_name/cluster_name/jms
      

      Note:

      This directory must exist before the managed server is started or the start operation will fail.

    2. Create a new JMS server for SOA, for example, SOAJMSServer_N. Use the SOAJMSFileStore_N for this JMS server. Target the SOAJMSServer_N Server to the recently created managed server (WLS_SOAn).

    3. Create a new persistence store for the new UMSJMSServer, and name it, for example, UMSJMSFileStore_N. As the directory for the persistent store, specify the path recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment" as the directory for the JMS persistent stores:

      ORACLE_BASE/admin/domain_name/cluster_name/jms
      

      Note:

      This directory must exist before the managed server is started or the start operation will fail.

      Note:

      It is also possible to assign SOAJMSFileStore_N as the store for the new UMS JMS servers. For the purpose of clarity and isolation, individual persistent stores are used in the following steps.

    4. Create a new JMS server for UMS: for example, UMSJMSServer_N. Use the UMSJMSFileStore_N for this JMS server. Target the UMSJMSServer_N Server to the recently created managed server (WLS_SOAn).

    5. For BPM Systems only: Create a new persistence store for the new BPMJMSServer, for example, BPMJMSFileStore_N. Specify the path for the store. This should be a directory on shared storage as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment".

      ORACLE_BASE/admin/domain_name/cluster_name/jms
      

      Note:

      This directory must exist before the managed server is started or the start operation fails.

      You can also assign SOAJMSFileStore_N as store for the new BPM JMS Servers. For the purpose of clarity and isolation, individual persistent stores are used in the following steps.

    6. For BPM systems only: Create a new JMS Server for BPM, for example, BPMJMSServer_N. Use the BPMJMSFileStore_N for this JMSServer. Target the BPMJMSServer_N Server to the recently created Managed Server (WLS_SOAn).

    7. Update the SubDeployment Targets for SOA, UMS and BPM JMS Modules (if applicable) to include the recently created JMS servers. To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click on the JMS module (for SOA: SOAJMSModule, for BPM: BPMJMSModule and for UMS: UMSJMSSystemResource) represented as a hyperlink in the Names column of the table. The Settings page for the module appears. Open the SubDeployments tab. The subdeployment for the deployment module appears.

      Note:

      This subdeployment module name is a random name in the form of SOAJMSServerXXXXXX, UMSJMSServerXXXXXX, or BPMJMSServerXXXXXX, resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).

      Click on it. Add the new JMS Server (for UMS add UMSJMSServer_N, for SOA add SOAJMSServer_N). Click Save and Activate.

    8. Target the UMSJMSSystemResource to the SOA_Cluster as it may have changed during extend operations. To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click UMSJMSSystemResource and open the Targets tab. Make sure all of the servers in the SOA_Cluster appear selected (including the recently cloned WLS_SOAn).

  10. Run the pack command on SOAHOST1 to create a template pack as follows:

    cd ORACLE_COMMON_HOME/common/bin
    
    ./pack.sh -managed=true -domain=ORACLE_BASE/admin/domain_name/aserver/domain_name
    -template=soadomaintemplateScale.jar -template_name=soa_domain_templateScale
    

    Run the following command on SOAHOST1 to copy the template file created to SOAHOSTN:

    SOAHOST1> scp soadomaintemplateScale.jar oracle@SOAHOSTN:ORACLE_COMMON_HOME/common/bin
    

    Run the unpack command on SOAHOSTn to unpack the template in the managed server domain directory as follows:

    cd ORACLE_COMMON_HOME/common/bin
    
    ./unpack.sh -domain=ORACLE_BASE/admin/domain_name/mserver/domain_name/
    -template=soadomaintemplateScale.jar
    -app_dir=ORACLE_BASE/admin/domain_name/mserver/applications -overwrite_domain=true
    
    
  11. Configure Oracle Coherence for deploying composites for the new server as described in Section 9.4, "Configuring Oracle Coherence for Deploying Composites."

    Note:

    Only the localhost field needs to be changed for the server. Replace the localhost with the listen address of the new server added:

    -Dtangosol.coherence.localhost=SOAHOST1VHNn

  12. Reconfigure the JMS Adapter with the new server using the FactoryProperties field in the Administration Console. Click on the corresponding cell under the Property value and enter the following:

    java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory;java.naming.provider.url=t3://soahostvhn1:8001,soahost2vhn1:8001;java.naming.security.principal=weblogic;java.naming.security.credentials=myPassword1
    

    Click Save and Activate.

  13. Configure the persistent store for the new server. This should be a location visible from other nodes as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment".

    From the Administration Console, select the Server_name, and then the Services tab. Under Default Store, in Directory, enter the path to the folder where you want the default persistent store to store its data files.

  14. Disable host name verification for the new managed server. Before starting and verifying the WLS_SOAn managed server, you must disable host name verification. You can re-enable it after you have configured server certificates for the communication between the Oracle WebLogic Administration Server and the Node Manager in SOAHOSTn.

    If the source server from which the new one was cloned already had host name verification disabled, these steps are not required (the host name verification setting is propagated to the cloned server). To disable host name verification:

    1. In the Oracle Fusion Middleware Enterprise Manager Console, select Oracle WebLogic Server Administration Console.

    2. Expand the Environment node in the Domain Structure window.

    3. Click Servers.

      The Summary of Servers page appears.

    4. Select WLS_SOAn in the Names column of the table.

      The Settings page for the server appears.

    5. Click the SSL tab.

    6. Click Advanced.

    7. Set Hostname Verification to None.

    8. Click Save.

  15. Start Node Manager on the new node. To start Node Manager, use the installation in shared storage from the existing nodes, and start Node Manager by passing the host name of the new node as a parameter as follows:

    SOAHOSTN> WL_HOME/server/bin/startNodeManager
    
  16. Start and test the new managed server from the Oracle WebLogic Server Administration Console.

    1. Ensure that the newly created managed server, WLS_SOAn, is running.

    2. Access the application on the load balancer (https://soa.mycompany.com/soa-infra). The application should be functional.

      Note:

      The HTTP Servers in the topology should round-robin requests to the newly added server (a few requests, depending on the number of servers in the cluster, may be required to hit the new server). It is not required to add all servers in a cluster to the WebLogicCluster directive in the Oracle HTTP Server *_vh.conf files. However, routing to new servers in the cluster takes place only if at least one of the servers listed in the WebLogicCluster directive is running.
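The round-robin behavior described in the note can be illustrated with a toy rotation. mod_wl_ohs does the real balancing; this only shows why several requests may be needed before a newly added member receives one (the member list is illustrative):

```python
from itertools import cycle

# Illustrative member list mirroring a WebLogicCluster directive, with the
# newly added server listed last.
members = ["SOAHOST1VHN1:8001", "SOAHOST2VHN1:8001", "SOAHOSTNVHN1:8001"]
rr = cycle(members)
first_six = [next(rr) for _ in range(6)]
print(first_six[2])  # the new member gets the third request
```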

  17. Configure server migration for the new managed server.

    Note:

    Because this new node uses an existing shared storage installation, the node is already using a Node Manager and an environment configured for server migration that includes netmask, interface, and wlsifconfig script superuser privileges. The floating IP for the new SOA managed server is already present on the new node.

    Log into the Oracle WebLogic Server Administration Console and configure server migration:

    1. Expand the Environment node in the Domain Structure windows and then select Servers. The Summary of Servers page appears.

    2. Select the server (represented as hyperlink) for which you want to configure migration from the Names column of the table. The Setting page for that server appears.

    3. Click the Migration tab.

    4. In the Available field of the Migration Configuration section, click the right arrow to select the machines to which to allow migration.

      Note:

      Specify the least-loaded machine as the migration target for the new server. The required capacity planning must be completed so that this node has enough available resources to sustain an additional managed server.

    5. Select Automatic Server Migration Enabled. This enables the Node Manager to start a failed server on the target node automatically.

    6. Click Save.

    7. Restart the Administration Server, managed servers, and the Node Manager.

      To restart the Administration Server, use the procedure in Section 8.4.3, "Starting the Administration Server on SOAHOST1."

  18. Update the cluster address to include the new server:

    1. In the Administration Console, select Environment, and then Cluster.

    2. Click the SOA_Cluster server.

      The Settings screen for the SOA_Cluster appears.

    3. Click Lock & Edit.

    4. Add the new server's address and port to the Cluster address field. For example: SOAHOST1VHN1:8011,SOAHOST2VHN1:8011,SOAHOSTNVHN1:8001

    5. Save and activate the changes.

  19. Test server migration for this new server from the node where you added the new server:

    1. Stop the WLS_SOAn managed server by running the following command against the PID (process ID) of the managed server:

      kill -9 pid
      

      You can identify the PID of the managed server using the following command:

      ps -ef | grep WLS_SOAn
      

      Note:

      For Windows, you can terminate the Managed Server using the taskkill command. For example:

      taskkill /f /pid pid
      

      Where pid is the process ID of the Managed Server.

      To determine the process ID of the WLS_SOAn Managed Server, run the following command:

      MW_HOME\jrockit_160_20_D1.0.1-2124\bin\jps -l -v
      
    2. In the Node Manager console, you should see a message indicating that WLS_SOAn's floating IP has been disabled.

    3. Wait for the Node Manager to try a second restart of WLS_SOAn. Node Manager waits for a fence period of 30 seconds before trying this restart.

    4. Once Node Manager restarts the server, stop it again. Now Node Manager should log a message indicating that the server will not be restarted again locally.
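The stop-and-identify sequence in step 19 can be scripted. The following is a minimal sketch for Linux, assuming the managed server was started with the usual -Dweblogic.Name=... argument; the server name WLS_SOA3 is a placeholder to adjust for your environment.

```shell
# Find and kill a managed server by name (Linux sketch).
# SERVER_NAME is a placeholder -- set it to your WLS_SOAn server name.
SERVER_NAME="WLS_SOA3"

# The [w] bracket trick keeps the grep process itself out of the match.
pid=$(ps -ef | grep "[w]eblogic.Name=${SERVER_NAME}" | awk '{print $2}' | head -n 1)

if [ -n "$pid" ]; then
  # Simulate a crash of the managed server, as in the migration test above.
  kill -9 "$pid"
else
  echo "No process found for ${SERVER_NAME}"
fi
```

On Windows, the equivalent is the taskkill command shown earlier in this step.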

16.5.2 Scaling Out Oracle WebCenter Portal

In scaling out your topology, you add new managed servers, configured with Oracle WebCenter Portal applications, to new nodes.

Before performing the steps in this section, check that you meet these requirements:

Prerequisites

  • There must be existing nodes running managed servers configured with WebCenter Portal within the topology.

  • The new node can access the existing home directories for WebLogic Server and Oracle WebCenter Portal. You use the existing installations in shared storage for creating a new managed server. There is no need to install WebLogic Server or WebCenter Portal binaries in a new location, although you need to run pack and unpack to create a managed server domain.

  • The WC_Spaces and WC_Utilities servers must be scaled out together on the new node: either both are scaled out, or neither is. This is because of the local affinity between the WebCenter Portal application and the analytics application.

To scale out the topology:

  1. On the new node, mount the existing Middleware home, which should include the WebCenter Portal installation and the domain directory, and ensure that the new node has access to this directory, just as the rest of the nodes in the domain do.

  2. To attach ORACLE_HOME in shared storage to the local Oracle Inventory, execute the following commands:

    WCPHOSTn> cd ORACLE_BASE/product/fmw/wc/
    WCPHOSTn> ./attachHome.sh -jreLoc ORACLE_BASE/fmw/jrockit_160_version
    

    To update the Middleware home list, create (or edit, if another WebLogic installation exists in the node) the MW_HOME/bea/beahomelist file and add ORACLE_BASE/product/fmw to it.

    Note:

    The examples documented in this guide use JRockit. Any certified version of Java can be used for this procedure and is fully supported unless otherwise noted.
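The beahomelist update in step 2 can be made idempotent so that re-running it on a node does not create duplicate entries. This sketch assumes one home entry per line; check the format of any existing beahomelist file on your nodes, and substitute your actual ORACLE_BASE path for the illustrative value below.

```shell
# Append the Middleware home to beahomelist only if it is not already listed.
# Both paths are illustrative -- use your actual beahomelist location and
# your ORACLE_BASE/product/fmw value.
BEAHOMELIST="${BEAHOMELIST:-$HOME/bea/beahomelist}"
MW_ENTRY="/u01/app/oracle/product/fmw"

mkdir -p "$(dirname "$BEAHOMELIST")"
touch "$BEAHOMELIST"

# -x matches whole lines, -F treats the entry as a fixed string.
grep -qxF "$MW_ENTRY" "$BEAHOMELIST" || printf '%s\n' "$MW_ENTRY" >> "$BEAHOMELIST"
```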

  3. Log in to the Oracle WebLogic Administration Console.

  4. Create a new machine for the new node that will be used, and add the machine to the domain.

  5. Update the machine's Node Manager address to map to the IP address of the node being used for scale-out.

  6. Use the Oracle WebLogic Server Administration Console to clone WC_Spaces1, WC_Portlet1, WC_Collaboration1, or WC_Utilities1 into a new managed server. Name it WC_XXXn, where n is a number, and assign it to the new machine.

  7. For the listen address, assign the host name or IP to use for the new managed server. Perform these steps to set the managed server listen address:

    1. Log into the Oracle WebLogic Server Administration Console.

    2. In the Change Center, click Lock & Edit.

    3. Expand the Environment node in the Domain Structure window.

    4. Click Servers. The Summary of Servers page appears.

    5. Select the managed server whose listen address you want to update in the Names column of the table. The Settings page for that managed server appears.

    6. Set the Listen Address to WCPHOSTn where WCPHOSTn is the DNS name of your new machine.

    7. Click Save.

    8. Save and activate the changes.

      The changes do not take effect until the managed server is restarted.

  8. Run the pack command on SOAHOST1 to create a template, and unpack it onto WCPHOSTn.

    These steps are documented in Section 10.4.1, "Propagating the Domain Configuration to SOAHOST2, WCPHOST1, and WCPHOST2 Using the unpack Utility."

  9. Start the Node Manager on the new node. To start the Node Manager, use the installation in shared storage from the existing nodes, and start Node Manager by passing the host name of the new node as a parameter as follows:

    WCPHOSTn> WL_HOME/server/bin/startNodeManager new_node_ip
    
  10. If this is a new Collaboration managed server:

    1. Ensure that you have followed the steps in Section 10.7, "Configuring Clustering on the Discussions Server," to configure clustering for the new Discussions Server.

    2. Ensure also that the steps in Section 10.6, "Converting Discussions from Multicast to Unicast" are performed, using the hostname of the new host for the coherence.localhost parameter.

  11. If this is a new Utilities managed server, ensure that Activity Graph is disabled by following the steps in Section 10.9, "Configuring Activity Graph." Ensure also that the steps for configuring a new Analytics Collector in Section 10.8, "Configuring Analytics" have been followed for the Utilities and the local Spaces Server.

  12. Start and test the new managed server from the Oracle WebLogic Server Administration Console:

    1. Ensure that the newly created managed server, WC_XXXn, is running.

    2. Access the application on the load balancer (for example, https://wcp.mycompany.com/webcenter). The application should be functional.

      Note:

      The HTTP Servers in the topology should round-robin requests to the newly added server (a few requests, depending on the number of servers in the cluster, may be required to hit the new server). It is not required to add all servers in a cluster to the WebLogicCluster directive in the Oracle HTTP Server *_vh.conf files. However, routing to new servers in the cluster takes place only if at least one of the servers listed in the WebLogicCluster directive is running.
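One way to observe the round-robin behavior is to issue several requests through the load balancer and confirm they succeed; the sketch below assumes the wcp.mycompany.com virtual host used elsewhere in this guide, so substitute your own URL.

```shell
# Issue a handful of requests through the load balancer; with round-robin
# routing, some of them should land on the newly added server.
# URL is illustrative -- use your load balancer virtual host.
URL="https://wcp.mycompany.com/webcenter"

for i in 1 2 3 4 5; do
  # -k tolerates a self-signed certificate on the load balancer.
  code=$(curl -k -s -o /dev/null -w '%{http_code}' "$URL" 2>/dev/null) || code="000"
  echo "request $i: HTTP $code"
done
```

To confirm which back-end server handled each request, check the access logs on the Oracle HTTP Server instances or on the managed servers themselves.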

16.5.3 Scaling Out Oracle WebCenter Content

To add additional Oracle WebCenter Content managed servers, follow the steps in the "Scale-Out Procedure for WebCenter Content" section in Oracle Fusion Middleware Enterprise Deployment Guide for Oracle WebCenter Content to add an Oracle WebCenter Content managed server to a new node.

16.6 Performing Backups and Recoveries in WebCenter Portal Deployments

Table 16-1 lists the static artifacts to back up in WebCenter Portal enterprise deployments.

Table 16-1 Static Artifacts to Back Up in a WebCenter Portal (11g) Enterprise Deployment

Type Host Location Tier

ORACLE HOME (DB)

RAC Database hosts - CUSTDBHOST1 and CUSTDBHOST2

The location is user-defined

Data Tier

ORACLE HOME (OHS)

WEBHOST1 and WEBHOST2

ORACLE_BASE/admin/instance_name

Web Tier

MW HOME (SOA + WC)

SOAHOST1 and SOAHOST2 - SOA

WCPHOST1 and WCPHOST2 - WC

MW_HOME on all hosts

Application Tier

ORACLE HOME (WCC)

WCPHOST1 and WCPHOST2

On shared disk: /share/oracle/wcc

On each host, local files at ORACLE_HOME/wcc

Application Tier

Installation-related files

OraInventory, user_home/bea/beahomelist, oraInst.loc, oratab

Table 16-2 lists the runtime artifacts for back up in WebCenter Portal enterprise deployments.

Table 16-2 Run-Time Artifacts to Back Up in a WebCenter Portal (11g) Enterprise Deployment

Type Host Location Tier

DOMAIN HOME

SOAHOST1

SOAHOST2

WCPHOST1

WCPHOST2

ORACLE_BASE/admin/domain_name/ mserver/domain_name

Application Tier

Application artifacts (ear and war files)

SOAHOST1

SOAHOST2

WCPHOST1

WCPHOST2

Review all deployments through the Administration Console and back up all the application artifacts

Application Tier

OHS instance home

WEBHOST1 and WEBHOST2

ORACLE_BASE/admin/instance_name

Web Tier

OHS WCC configuration files

WEBHOST1 and WEBHOST2

On each host, at /share/oracle/wcc, which is a local file system.

Web Tier

RAC databases

CUSTDBHOST1 and CUSTDBHOST2

The location is user-defined

Data Tier

Oracle WebCenter Content repository

 

Database-based

Data Tier


For more information on backup and recovery of Oracle Fusion Middleware components, see Oracle Fusion Middleware Administrator's Guide.

16.7 Verifying Manual Failover of the Administration Server

If a node fails, you can fail over the Administration Server to another node. The following sections provide the steps to verify the failover and failback of the Administration Server between SOAHOST1 and SOAHOST2.

Assumptions:

This section contains the following topics:

16.7.1 Failing Over the Administration Server to a Different Node

The following procedure shows how to fail over the Administration Server to a different node (SOAHOST2), but the Administration Server will still use the same WebLogic Server machine (which is a logical machine, not a physical machine).

To fail over the Administration Server to a different node:

  1. Stop the Administration Server.

  2. Migrate IP to the second node.

    1. Run the following command as root on SOAHOST1 (where X:Y is the current interface used by ADMINVHN):

      /sbin/ifconfig ethX:Y down
      
    2. Run the following command on SOAHOST2:

      /sbin/ifconfig interface:index IP_Address netmask netmask
      

      For example:

      /sbin/ifconfig eth0:1 10.0.0.1 netmask 255.255.255.0
      

      Note:

      Ensure that the netmask and interface to be used match the available network configuration in SOAHOST2.

  3. Update routing tables through arping. Run the following command from SOAHOST1:

    /sbin/arping -q -U -c 3 -I eth0 10.0.0.1
    
  4. Start the Administration Server on SOAHOST2 using the procedure in Section 8.4.3, "Starting the Administration Server on SOAHOST1."

  5. Test that you can access the Administration Server on SOAHOST2 as follows:

    1. Ensure that you can access the Oracle WebLogic Server Administration Console using the following URL:

      http://ADMINVHN:7001/console
      
    2. Check that you can access and verify the status of components in the Oracle Enterprise Manager using the following URL:

      http://ADMINVHN:7001/em
      

      Note:

      The Administration Server does not use Node Manager for failing over. After a manual failover, the machine name that appears in the Current Machine field in the Administration Console for the server is SOAHOST1, not the failover machine, SOAHOST2. Because Node Manager does not monitor the Administration Server, the machine name that appears in the Current Machine field is not relevant and can be ignored.
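The IP migration in steps 2 and 3 can be collected into a single dry-run script for review before executing. The interface name, alias index, address, and netmask below are illustrative; to execute for real, run the first command on SOAHOST1 and the remaining two on SOAHOST2, as root, with DRY_RUN=0.

```shell
# Dry-run sketch of the ADMINVHN migration. With DRY_RUN=1 (default), the
# commands are only printed; set DRY_RUN=0 to execute them.
DRY_RUN=${DRY_RUN:-1}
VIP=10.0.0.1           # illustrative floating IP for ADMINVHN
NETMASK=255.255.255.0  # illustrative netmask
IFACE=eth0             # illustrative interface
INDEX=1                # illustrative alias index

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

# On SOAHOST1: release the floating IP.
run /sbin/ifconfig "${IFACE}:${INDEX}" down

# On SOAHOST2: claim the floating IP, then refresh neighboring ARP caches.
run /sbin/ifconfig "${IFACE}:${INDEX}" "$VIP" netmask "$NETMASK"
run /sbin/arping -q -U -c 3 -I "$IFACE" "$VIP"
```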

16.7.2 Validating Access to SOAHOST2 Through Oracle HTTP Server

Perform the same steps as in Section 8.7.5, "Validating Access Through Oracle HTTP Server," to check that you can access the Administration Server when it is running on SOAHOST2.

16.7.3 Failing the Administration Server Back to SOAHOST1

This step checks that you can fail back the Administration Server: that is, stop it on SOAHOST2 and run it on SOAHOST1 by migrating ADMINVHN back to the SOAHOST1 node.

To migrate ADMINVHN back to SOAHOST1:

  1. Make sure the Administration Server is not running.

  2. Run the following command on SOAHOST2.

    /sbin/ifconfig ethZ:N down
    
  3. Run the following command on SOAHOST1:

    /sbin/ifconfig ethX:Y 100.200.140.206 netmask 255.255.255.0
    

    Note:

    Ensure that the netmask and interface to be used match the available network configuration in SOAHOST1.

  4. Update routing tables through arping. Run the following command from SOAHOST1.

    /sbin/arping -q -U -c 3 -I ethZ 100.200.140.206
    
  5. Start the Administration Server again on SOAHOST1 using the procedure in Section 8.4.3, "Starting the Administration Server on SOAHOST1."

    cd ORACLE_BASE/admin/domain_name/aserver/domain_name/bin
    ./startWebLogic.sh
     
    
  6. Test that you can access the Oracle WebLogic Server Administration Console using the following URL:

    http://ADMINVHN:7001/console
    
  7. Check that you can access and verify the status of components in the Oracle Enterprise Manager using the following URL:

    http://ADMINVHN:7001/em
    

16.8 Preventing Timeouts for SQLNet Connections

Enterprise deployment production topologies involve firewalls. Because database connections are made across firewalls, Oracle recommends that the firewall be configured so that the database connection is not timed out. For Oracle Real Application Clusters (Oracle RAC), the database connections are made on Oracle RAC VIPs and the database listener port. You must configure the firewall to not time out such connections. If such a configuration is not possible, set the SQLNET.EXPIRE_TIME=n parameter in the sqlnet.ora file, located in the following directory:

ORACLE_HOME/network/admin

The n indicates the time in minutes. Set this value to less than the known value of the timeout for the network device (that is, a firewall). For Oracle RAC, set this parameter in all of the Oracle home directories.
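For example, with a firewall that drops idle connections after 15 minutes, a sqlnet.ora entry such as the following keeps connections alive (both the 15-minute firewall timeout and the value 10 are illustrative):

```
# ORACLE_HOME/network/admin/sqlnet.ora
# Probe idle connections every 10 minutes -- below the assumed
# 15-minute firewall idle timeout.
SQLNET.EXPIRE_TIME=10
```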

16.9 Troubleshooting Oracle WebCenter Portal Enterprise Deployments

This section describes possible issues with WebCenter Portal enterprise deployment and suggested solutions.

This section covers the following topics:

16.9.1 Error While Activating Changes in Administration Console

Problem: Activation of changes in Administration Console fails after changes to a server's start configuration have been performed. The Administration Console reports the following when clicking "Activate Changes":

An error occurred during activation of changes, please see the log for details.
 [Management:141190]The commit phase of the configuration update failed with an exception:
In production mode, it's not allowed to set a clear text value to the property: PasswordEncrypted of ServerStartMBean

Solution: This may happen when start parameters are changed for a server in the Administration Console. In this case, either provide username/password information in the server start configuration in the Administration Console for the specific server whose configuration was being changed, or remove the <password-encrypted></password-encrypted> entry in the config.xml file (this requires a restart of the Administration Server).
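For reference, the entry to remove is located inside the affected server's server-start element in config.xml. The structure below is a hedged illustration; the encrypted value is elided, and your file may contain additional elements.

```xml
<server-start>
  <!-- Remove the following element, then restart the Administration Server: -->
  <password-encrypted>{AES}...</password-encrypted>
</server-start>
```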

16.9.2 Redirecting of Users to Login Screen After Activating Changes in Administration Console

Problem: After configuring OHS and the load balancer to access the Oracle WebLogic Server Administration Console, some activation changes cause a redirect to the Administration Console login screen.

Solution: This is the result of the console attempting to follow changes to port, channel, and security settings as a user makes these changes. For certain changes, the console may redirect to the Administration Server's listen address. Activation is completed regardless of the redirection. It is not necessary to log in again; users can simply update the URL to wcp.mycompany.com/console/console.portal and directly access the home page for the Administration Console.

Note:

This problem does not occur if you disabled tracking of the changes described in this section.

16.9.3 Redirecting of Users to Administration Console's Home Page After Activating Changes to OAM

Problem: After configuring OAM, some activation changes cause the redirection to the Administration Console's home page (instead of the context menu where the activation was performed).

Solution: This is expected when OAM SSO is configured and is the result of the redirections performed by the Administration Server. Activation is completed regardless of the redirection. If required, users may "manually" navigate again to the desired context menu.

16.9.4 WC_Spaces Server Does Not Start after Propagation of Domain

Problem: WC_Spaces server fails to start after propagating the domain configuration to SOAHOST2, WCPHOST1 and WCPHOST2 using the unpack utility:

[Deployer:149158]No application files exist at '/u01/app/oracle/admin/wcpedg_domain/aserver/wcpedg_domain/custom.webcenter.spaces.fwk'... 

Solution: Paths in the config.xml file created by the Administration Server must be available on every machine, regardless of whether the Administration Server's domain directory is mounted there. This is accomplished by copying the missing folders and files from the primary domain to the new node's local file system.

  1. On each machine, copy WebCenter Portal files from shared storage to local storage.

    With the Administration Server's domain directory mounted, this can be done to a temporary directory as follows:

    $ mkdir /tmp/wcptemp
    $ cp ORACLE_BASE/admin/domain_name/aserver/applications/domain_name/* /tmp/wcptemp
    
  2. On each machine, with the Administration Server's domain directory dismounted, create a local directory and move these files into it:

    $ mkdir -p ORACLE_BASE/admin/domain_name/aserver/applications/domain_name
    $ mv /tmp/wcptemp/* ORACLE_BASE/admin/domain_name/aserver/applications/domain_name
    
    

All the Managed Servers should now start successfully.

16.9.5 Administration Server Fails to Start After a Manual Failover

Problem: The Administration Server fails to start after the Administration Server node failed and a manual failover to another node was performed. The Administration Server output log reports the following:

<Warning> <EmbeddedLDAP> <BEA-171520> <Could not obtain an exclusive lock for directory: 
ORACLE_BASE/admin/soadomain/aserver/soadomain/servers/AdminServer/data/ldap/ldapfiles. 
Waiting for 10 seconds and then retrying in case existing WebLogic Server is still shutting down.>

Solution: When restoring a node after a node crash and using shared storage for the domain directory, you may see this error in the log for the Administration Server due to unsuccessful lock cleanup. To resolve this error, remove the file:

ORACLE_BASE/admin/domain_name/aserver/domain_name/servers/AdminServer/data/ldap/ldapfiles/EmbeddedLDAP.lok
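A small guard script makes it harder to delete the wrong file; before removing the lock, confirm that no Administration Server process is still running and holding it. The domain path below is illustrative.

```shell
# Remove a stale EmbeddedLDAP lock left behind by a crashed node.
# ASERVER_DIR is a placeholder -- point it at your aserver domain directory.
ASERVER_DIR="/u01/app/oracle/admin/wcpedg_domain/aserver/wcpedg_domain"
LOK="$ASERVER_DIR/servers/AdminServer/data/ldap/ldapfiles/EmbeddedLDAP.lok"

if [ -f "$LOK" ]; then
  rm -f "$LOK" && echo "Removed stale lock: $LOK"
else
  echo "No lock file found at $LOK"
fi
```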

16.9.6 Portlet Unavailable After Database Failover

Problem: While creating a page in WebCenter Portal, if you add a portlet to the page and a database failover occurs, an error component displays on the page:

"Error"
"Portlet unavailable"

This message remains even if you refresh the page or log out and log back in again.

Solution: To resolve this issue, delete the component and add it again.

16.9.7 Configured JOC Port Already in Use

Problem: Attempts to start a Managed Server that uses the Java Object Cache, such as OWSM or WebCenter Portal Managed Servers, fail. The following errors appear in the logs:

J2EE JOC-058 distributed cache initialization failure
J2EE JOC-043 base exception:
J2EE JOC-803 unexpected EOF during read.

Solution: Another process is using the same port that JOC is attempting to obtain. Either stop that process, or reconfigure JOC for this cluster to use another port in the recommended port range.
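To identify the conflicting process, check what is listening on the configured JOC port. The sketch below uses lsof when available and falls back to netstat; the port 9991 is illustrative, so substitute the port configured for your cluster.

```shell
# Show which process, if any, is listening on the JOC port.
# JOC_PORT is illustrative -- use the port configured for your cluster.
JOC_PORT=9991

if command -v lsof >/dev/null 2>&1; then
  lsof -iTCP:"$JOC_PORT" -sTCP:LISTEN || echo "Port $JOC_PORT appears free"
else
  netstat -tln 2>/dev/null | grep ":$JOC_PORT " || echo "Port $JOC_PORT appears free"
fi
```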

16.9.8 Restoring a JMS Configuration

Problem: A mistake in the parameters passed to the soa-createUDD.py script, or some other error, causes the JMS configuration for SOA clusters to fail.

Solution: Use soa-createUDD.py to restore the configuration.

A mistake might be made while running the soa-createUDD.py script after the SOA cluster is created from the Oracle Fusion Middleware Configuration Wizard: for example, an incorrect option is used, a target is modified, or a module is deleted accidentally. In these situations, you can use the soa-createUDD.py script to restore the appropriate JMS configuration using the following steps:

  1. Delete the existing SOA JMS resources (JMS Modules owned by the soa-infrastructure system).

  2. Run the soa-createUDD.py script again. The script assumes the JMS Servers created for SOA are preserved, and creates the destinations and subdeployment modules required to use Uniform Distributed Destinations for SOA. In this case, execute the script with the --soacluster option. After running the script again, verify from the WebLogic Server Administration Console that the following artifacts exist (Domain Structure, Services, Messaging, JMS Modules):

    SOAJMSModuleUDDs        ---->SOAJMSSubDM targeted to SOAJMSServer_auto_1 and SOAJMSServer_auto_2
    UMSJMSSystemResource    ---->UMSJMSSubDMSOA targeted to UMSJMSServer_auto_1 and UMSJMSServer_auto_2
    

16.9.9 OAM Configuration Tool Does Not Remove URLs

Problem: The OAM Configuration Tool has been used, and a set of URLs was added to the policies in Oracle Access Manager. One or more URLs are incorrect. Executing the OAM Configuration Tool again with the correct URLs completes successfully; however, when accessing Policy Manager, the incorrect URL is still there.

Solution: The OAM Configuration Tool only adds new URLs to existing policies when executed with the same app_domain name. To remove a URL, use the Policy Manager Console in OAM. Log on to the Access Administration site for OAM, click My Policy Domains, click the created policy domain (WebCenter_EDG), then the Resources tab, and remove the incorrect URLs.

16.9.10 Disabling Secondary Authentication After REST Policy Configuration

Problem: After REST policy configuration, external clients authenticating through OAM are still prompted for further authentication.

Solution: The secondary authentication is coming from WebLogic. To disable the WebLogic credential prompt, you must update the security policy:

  1. Locate the file:

    /aserver/domain_name/config/config.xml
    
  2. At the end of the security configuration section (that is, before </security-configuration>), add the line:

    <enforce-valid-basic-auth-credentials>false</enforce-valid-basic-auth-credentials>
    
  3. Restart all the servers in the domain.

16.9.11 Sudo Error Occurs During Server Migration

Problem: When running wlsifconfig for server migration, the following warning displays:

sudo: sorry, you must have a tty to run sudo

Solution: The WebLogic user ('oracle') is not allowed to run sudo without a tty. To solve this, add the following line to /etc/sudoers:

Defaults:oracle !requiretty

See also, Section 14.5, "Setting Environment and Superuser Privileges for the wlsifconfig.sh Script".

16.9.12 Transaction Timeout Error

Problem: The following transaction timeout error appears in the log:

Internal Exception: java.sql.SQLException: Unexpected exception while enlisting
 XAConnection java.sql.SQLException: XA error: XAResource.XAER_NOTA start()
failed on resource 'SOADataSource_wcpedg_domain': XAER_NOTA : The XID
is not valid

Solution: Check your transaction timeout settings, and be sure that the JTA transaction time out is less than the DataSource XA Transaction Timeout, which is less than the distributed_lock_timeout (at the database).

With the out-of-the-box configuration, the SOA data sources do not set the XA timeout to any value: the Set XA Transaction Timeout configuration parameter is unchecked in the WebLogic Server Administration Console. In this case, the data sources use the domain-level JTA timeout, which is set to 30 seconds. Also, the default distributed_lock_timeout value for the database is 60 seconds. As a result, the SOA configuration works correctly for any system where transactions are expected to be shorter-lived than these values. Adjust these values according to the transaction times your specific operations are expected to take.
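The required ordering can be sanity-checked mechanically. The values below are the defaults discussed above plus a hypothetical DataSource XA timeout; substitute your actual settings before relying on the result.

```shell
# Verify JTA timeout < XA transaction timeout < distributed_lock_timeout.
JTA_TIMEOUT=30                   # domain-level JTA timeout (seconds)
XA_TIMEOUT=45                    # hypothetical DataSource XA timeout (seconds)
DB_DISTRIBUTED_LOCK_TIMEOUT=60   # database default (seconds)

if [ "$JTA_TIMEOUT" -lt "$XA_TIMEOUT" ] && \
   [ "$XA_TIMEOUT" -lt "$DB_DISTRIBUTED_LOCK_TIMEOUT" ]; then
  echo "timeout ordering OK"
else
  echo "WARNING: expected JTA < XA < distributed_lock_timeout"
fi
```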

16.9.13 Exceeded Maximum Size Error Messages

Problem: When complex rules are edited and saved in a SOA cluster, error messages reporting exceeded maximum size in replication messages may show up in the server's out file. For example:

<rws3211539-v2.company.com> <WLS_SOA1> <ExecuteThread: '2' for queue:
'weblogic.socket.Muxer'> <<WLS Kernel>> <> <> <1326464549135> <BEA-000403>
<IOException occurred on socket:
Socket[addr=/10.10.10.10,port=48290,localport=8001]
weblogic.socket.MaxMessageSizeExceededException: Incoming message of size:
'10000080' bytes exceeds the configured maximum of: '10000000' bytes for
protocol: 't3'.
weblogic.socket.MaxMessageSizeExceededException: Incoming message of size:
'10000080' bytes exceeds the configured maximum of: '10000000' bytes for
protocol: 't3'

Solution: This error is due to the large size of serialized rules placed in HTTP sessions and replicated in the cluster. Increase the maximum message size according to the size of the rules being used.

To increase the maximum message size:

  1. Log in to the WebLogic Administration Console.

  2. Select Servers, Server_name, Protocols, and then General.

  3. Modify the Maximum Message Size field as needed.