Oracle® Fusion Middleware Enterprise Deployment Guide for Oracle SOA Suite
11g Release 1 (11.1.1)

16 Managing the Topology for an Enterprise Deployment

This chapter describes some operations that you can perform after you have set up the topology. These operations include monitoring, scaling, and backing up your topology.

This chapter contains the following sections:

  • Section 16.1, "Overview of Managing the Topology"

  • Section 16.2, "Tips for Deploying Composites and Artifacts in a SOA Enterprise Deployment Topology"

  • Section 16.3, "Managing Space in the SOA Infrastructure Database"

  • Section 16.4, "Configuring UMS Drivers"

  • Section 16.5, "Scaling Up the Topology (Adding Managed Servers to Existing Nodes)"

  • Section 16.6, "Scaling Out the Topology (Adding Managed Servers to New Nodes)"

  • Section 16.7, "Performing Backups and Recoveries in the SOA Enterprise Deployments"

16.1 Overview of Managing the Topology

After configuring the SOA enterprise deployment, use the information in this chapter to manage the topology.

SOA applications are deployed as composites that consist of different kinds of components, such as service components (for example, BPEL processes, human tasks, business rules, and mediators) and binding components (services and references). These components are assembled into a single SOA composite application. This chapter offers tips for deploying SOA composite applications.

For information on monitoring SOA composite applications, see "Monitoring SOA Composite Applications" in the Oracle Fusion Middleware Administrator's Guide for Oracle SOA Suite and Oracle Business Process Management Suite.

For information on managing SOA composite applications, see "Managing SOA Composite Applications" in the Oracle Fusion Middleware Administrator's Guide for Oracle SOA Suite and Oracle Business Process Management Suite.

At some point you may need to expand the topology by scaling it up or out. See Section 16.5, "Scaling Up the Topology (Adding Managed Servers to Existing Nodes)," and Section 16.6, "Scaling Out the Topology (Adding Managed Servers to New Nodes)," for information about the difference between scaling up and scaling out, and for instructions on performing these tasks.

Back up the topology before and after any configuration changes. Section 16.7, "Performing Backups and Recoveries in the SOA Enterprise Deployments," describes the directories and files that should be backed up to protect against failure as a result of configuration changes.

This chapter also documents solutions for possible known issues that may occur after you have configured the topology.

16.2 Tips for Deploying Composites and Artifacts in a SOA Enterprise Deployment Topology

This section describes tips for deploying composites and artifacts for a SOA enterprise deployment. See the "Deploying SOA Composite Applications" chapter in the Oracle Fusion Middleware Administrator's Guide for Oracle SOA Suite and Oracle Business Process Management Suite for instructions on deploying composites.

Deploy composites to a specific server address

When deploying SOA composites to a SOA enterprise deployment topology, deploy to a specific server's address and not to the load balancer address (soa.mycompany.com). Deploying to the load balancer address may require a direct connection from the deployer nodes to the external load balancer address, which in turn may require additional ports to be opened in the firewalls used by the system.
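For example, composites can be deployed to a specific managed server address from the command line with the sca_deployComposite WLST command. The following is a minimal sketch; the server address, credentials, and composite archive name are placeholders for your environment:

    # Run wlst.sh from the SOA Oracle home so the sca_* commands are available.
    # Deploy directly to a specific managed server address,
    # not to the load balancer address (soa.mycompany.com).
    sca_deployComposite('http://SOAHOST1VHN1:8001',
                        '/stage/sca_MyComposite_rev1.0.jar',
                        user='weblogic', password='password')

Deploying this way keeps the deployment traffic on the internal network instead of routing it through the front-end load balancer.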

Use B2B Console for deployment agreements and purge/import metadata

For B2B, deploy agreements and purge/import metadata ONLY from the GUI available in B2B console. Do not use the command line utility. Using the command line utility for these operations may cause inconsistencies and errors in the B2B system.

Additional instructions for FOD deployment

If you are deploying the SOA Fusion Order Demo, complete the deployment steps provided in the FOD's README file, and then complete the following additional steps:

  1. Change the nostage property to false in the build.xml file of the Web applications so that ear files are copied to each node. Edit the CreditCardAuthorization and OrderApprovalHumanTask build.xml files, located in the FOD_dir\CreditCardAuthorization\bin and FOD_dir\OrderApprovalHumanTask\bin directories, and change the following field:

    <target name="deploy-application">
      <wldeploy action="deploy" name="${war.name}"
        source="${deploy.ear.source}" library="false"
        nostage="true"
        user="${wls.user}" password="${wls.password}"
        verbose="false" adminurl="${wls.url}"
        remote="true" upload="true"
        targets="${server.targets}" />
    </target>

    To:

    <target name="deploy-application">
      <wldeploy action="deploy" name="${war.name}"
        source="${deploy.ear.source}" library="false"
        nostage="false"
        user="${wls.user}" password="${wls.password}"
        verbose="false" adminurl="${wls.url}"
        remote="true" upload="true"
        targets="${server.targets}" />
    </target>
    
  2. Change the target for the Web applications so that deployments are targeted to the SOA cluster and not to an individual server. Edit the build.properties file for FOD, located in the FOD_dir/bin directory, and change the following field:

    # wls target server (for shiphome set to server_soa, for ADRS use AdminServer)
    # Set this to the SOA cluster name in your SOA EDG:
    server.targets=SOA_Cluster
    
  3. Change the JMS seed templates so that Uniform Distributed Destinations are used instead of regular destinations, and so that the JMS artifacts are targeted to the enterprise deployment JMS modules. Edit the createJMSResources.seed file, located in the FOD_dir\bin\templates directory, and change:

    # lookup the SOAJMSModule - it's a system resource
    jmsSOASystemResource = lookup("SOAJMSModule","JMSSystemResource")

    jmsResource = jmsSOASystemResource.getJMSResource()

    cfbean = jmsResource.lookupConnectionFactory('DemoSupplierTopicCF')
    if cfbean is None:
        print "Creating DemoSupplierTopicCF connection factory"
        demoConnectionFactory = jmsResource.createConnectionFactory('DemoSupplierTopicCF')
        demoConnectionFactory.setJNDIName('jms/DemoSupplierTopicCF')
        demoConnectionFactory.setSubDeploymentName('SOASubDeployment')

    topicbean = jmsResource.lookupTopic('DemoSupplierTopic')
    if topicbean is None:
        print "Creating DemoSupplierTopic jms topic"
        demoJMSTopic = jmsResource.createTopic("DemoSupplierTopic")
        demoJMSTopic.setJNDIName('jms/DemoSupplierTopic')
        demoJMSTopic.setSubDeploymentName('SOASubDeployment')

    To:

    jmsSOASystemResource = lookup("SOAJMSModule","JMSSystemResource")

    jmsResource = jmsSOASystemResource.getJMSResource()

    topicbean = jmsResource.lookupTopic('DemoSupplierTopic_UDD')
    if topicbean is None:
        print "Creating DemoSupplierTopic_UDD jms topic"
        # create a UDD so clustering works automatically
        demoJMSTopic = jmsResource.createUniformDistributedTopic("DemoSupplierTopic_UDD")
        demoJMSTopic.setJNDIName('@jms.topic.jndi@')
        # Replace the subdeployment name with the one that appears in the
        # WLS Administration Console as listed for the SOAJMSModule
        demoJMSTopic.setSubDeploymentName()
    else:
        print "Found DemoSupplierTopic_UDD topic – noop"

16.3 Managing Space in the SOA Infrastructure Database

Although not all composites may use the database frequently, the service engines generate a considerable amount of data in the CUBE_INSTANCE and MEDIATOR_INSTANCE schemas. Lack of space in the database may prevent SOA composites from functioning.

To manage space in the SOA infrastructure database, purge completed composite instances regularly and monitor the growth of the tablespaces that contain these schemas.

16.4 Configuring UMS Drivers

UMS driver configuration is not automatically propagated in a SOA or BAM cluster. To propagate UMS driver configuration in a cluster:

Create the UMS driver configuration file in preparation for possible failovers by forcing a server migration, and copy the file from the source node.

For example, to create the file for BAM:

  1. Configure the driver for WLS_BAM1 in BAMHOST1.

  2. Force a failover of WLS_BAM1 to BAMHOST2. Verify the following directory structure for the UMS driver configuration in the failover node:

    cd ORACLE_BASE/admin/domain_name/mserver/domain_name/servers/server_name/tmp/_WL_user/ums_driver_name/*/configuration/
    

    (where '*' represents a directory whose name is randomly generated by WLS during deployment, for example, "3682yq").

  3. Do a remote copy of the driver configuration file from BAMHOST1 to BAMHOST2:

    BAMHOST1> scp ORACLE_BASE/admin/domain_name/mserver/domain_name/servers/server_name/tmp/_WL_user/ums_driver_name/*/configuration/driverconfig.xml \
    oracle@BAMHOST2:ORACLE_BASE/admin/domain_name/mserver/domain_name/servers/server_name/tmp/_WL_user/ums_driver_name/*/configuration/
    
  4. Restart the driver for these changes to take effect.

    To restart the driver (a WLST alternative follows these steps):

    1. Log on to the Oracle WebLogic Administration Console.

    2. Expand the environment node on the navigation tree.

    3. Click on Deployments.

    4. Select the driver.

    5. Click Stop->When work completes and confirm the operation.

    6. Wait for the driver to transition to the "Prepared" state (refresh the administration console page, if required).

    7. Select the driver again, and click Start->Servicing all requests and confirm the operation.
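    Alternatively, the driver can be restarted with the standard WLST deployment commands. A minimal sketch, assuming the default e-mail driver deployment name and placeholder administration credentials:

      # Connect to the Administration Server (placeholder URL and credentials)
      connect('weblogic', 'password', 't3://ADMINVHN:7001')
      # Stop the driver deployment, then start it again
      stopApplication('usermessagingdriver-email')
      startApplication('usermessagingdriver-email')
      disconnect()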

Verify in Oracle Enterprise Manager Fusion Middleware Control that the properties for the driver have been preserved.

16.5 Scaling Up the Topology (Adding Managed Servers to Existing Nodes)

When you scale up the topology, you already have a node that runs a managed server configured with Fusion Middleware components, or a managed server with WSM-PM. The node contains a WebLogic Server home and an Oracle Fusion Middleware SOA home in shared storage. Use these existing installations (such as the WebLogic Server home, Oracle Fusion Middleware home, and domain directories) when you create the new WLS_SOAn and WLS_WSMn managed servers. You do not need to install the WLS or SOA binaries in a new location, or to run pack and unpack.

When you scale up a server that uses server migration, plan for your capacity and resource allocation needs. For example, suppose server1, server2, and server3 are all configured to migrate between node1 and node2. In this scenario, all servers (server1, server2, server3, and the Administration Server) may end up running on node1 or node2. Each node must therefore be designed with enough resources to sustain the worst-case scenario, in which all servers that use server migration end up on one single node (as defined in the server migration candidate machine configuration).

16.5.1 Scale-up Procedure for Oracle SOA

To scale up the SOA topology:

  1. Using the Oracle WebLogic Server Administration Console, clone WLS_SOA1 or WLS_WSM1 into a new managed server. The source managed server to clone should be one that already exists on the node where you want to run the new managed server.

    To clone a managed server:

    1. From the Domain Structure window of the Oracle WebLogic Server Administration Console, expand the Environment node and then Servers. The Summary of Servers page appears.

    2. Click Lock & Edit and select the managed server that you want to clone (for example, WLS_SOA1).

    3. Click Clone.

    4. Name the new managed server WLS_SOAn, where n is a number that identifies the new managed server. In this case, you are adding a new server to Node 1, where WLS_SOA1 was running.

    For the remainder of the steps, you are adding a new server to SOAHOST1, which is already running WLS_SOA1.

  2. For the listen address, assign the host name or IP to use for this new managed server. If you are planning to use server migration as recommended for this server, enter the VIP (also called a floating IP) to enable it to move to another node. The VIP should be different from the one used by the managed server that is already running.

  3. For WLS_WSM servers, run the Java Object Cache configuration utility again to include the new server in the JOC distributed cache, as described in Section 8.5.5, "Configuring the Java Object Cache for Oracle WSM." You can use the same discover port for multiple WLS_WSM servers on the same node. Repeat the steps in Section 8.5.5 for each WLS_WSM server so that the server list is updated.

  4. Create JMS servers for SOA and UMS on the new managed server.

    Note:

    You do not have to create JMS servers for SOA and UMS on the new managed server if you are scaling up the WLS_WSM managed server or the BAM Web Applications system. This procedure is required only if you are scaling up the WLS_SOA managed servers.

    To create the JMS servers for SOA and UMS (a WLST sketch follows these steps):

    1. Use the Oracle WebLogic Server Administration Console to create a new persistent store for the new SOAJMSServer (which will be created in a later step) and name it, for example, SOAJMSFileStore_N. Specify the path for the store as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment," as the directory for the JMS persistent stores:

      ORACLE_BASE/admin/domain_name/cluster_name/jms
      
    2. Create a new JMS server for SOA: for example, SOAJMSServer_N. Use the SOAJMSFileStore_N for this JMS server. Target the SOAJMSServer_N server to the recently created managed server (WLS_SOAn).

    3. Create a new persistent store for the new UMS JMS server (which will be created in a later step) and name it, for example, UMSJMSFileStore_N. Specify the path for the store as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment," as the directory for the JMS persistent stores:

      ORACLE_BASE/admin/domain_name/cluster_name/jms
      

      Note:

      It is also possible to assign SOAJMSFileStore_N as the store for the new UMS JMS servers. For the purpose of clarity and isolation, individual persistent stores are used in the following steps.

    4. Create a new JMS Server for UMS: for example, UMSJMSServer_N. Use the UMSJMSFileStore_N for this JMS server. Target the UMSJMSServer_N server to the recently created managed server (WLS_SOAn).

    5. Target the UMSJMSSystemResource to the SOA_Cluster, as it may have changed during extend operations. To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click UMSJMSSystemResource and open the Targets tab. Make sure all of the servers in the SOA_Cluster appear selected (including the recently cloned WLS_SOAn).

    6. Update the SubDeployment Targets for SOA and UMS to include the recently created JMS servers.

      To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click the JMS module (for SOA: SOAJMSModule; for UMS: UMSJMSSystemResource) represented as a hyperlink in the Names column of the table. The Settings page for the module appears. Open the SubDeployments tab. The subdeployment for the deployment module appears.

      Note:

      This subdeployment module name is a random name in the form of SOAJMSServerXXXXXX, or UMSJMSServerXXXXXX, resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).

      Click on it. Add the new JMS Server (for UMS add UMSJMSServer_N, for SOA add SOAJMSServer_N). Click Save and Activate.
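    The persistent store and JMS server creation in this step can also be scripted. A minimal WLST sketch, assuming a new third server named WLS_SOA3 and placeholder credentials (adjust names and paths to your environment):

      connect('weblogic', 'password', 't3://ADMINVHN:7001')
      edit()
      startEdit()
      # Create the persistent store and target it to the new managed server
      fs = cmo.createFileStore('SOAJMSFileStore_3')
      fs.setDirectory('ORACLE_BASE/admin/domain_name/cluster_name/jms')
      fs.addTarget(getMBean('/Servers/WLS_SOA3'))
      # Create the JMS server, assign the store, and target the new server
      js = cmo.createJMSServer('SOAJMSServer_3')
      js.setPersistentStore(fs)
      js.addTarget(getMBean('/Servers/WLS_SOA3'))
      save()
      activate()

    The UMS persistent store and JMS server can be created the same way, substituting the UMS names.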

  5. Configure Oracle Coherence for deploying composites for the new server, as described in Section 9.4, "Configuring Oracle Coherence for Deploying Composites."

    Note:

    Only the localhost field must be changed for the server. Replace the localhost with the listen address of the new server added:

    -Dtangosol.coherence.localhost=SOAHOST1VHNn
    
  6. Configure the persistent store for the new server. This should be a location visible from other nodes as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment."

    From the Administration Console, select the Server_name, and then the Services tab. Under Default Store, in Directory, enter the path to the folder where you want the default persistent store to store its data files (a WLST sketch follows this step).
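    A minimal WLST sketch for the same change, run in a WLST session connected to the Administration Server (the server and path names are placeholders for your environment):

      edit()
      startEdit()
      # The default store is a per-server child MBean named after the server
      cd('/Servers/WLS_SOA3/DefaultFileStore/WLS_SOA3')
      cmo.setDirectory('ORACLE_BASE/admin/domain_name/cluster_name/tlogs')
      save()
      activate()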

  7. Disable host name verification for the new managed server. Before starting and verifying the WLS_SOAn managed server, you must disable host name verification. You can re-enable it after you have configured server certificates for the communication between the Oracle WebLogic Administration Server and the Node Manager in SOAHOSTn.

    If the source server from which the new one has been cloned had already disabled hostname verification, these steps are not required (the hostname verification setting is propagated to the cloned server). To disable host name verification (a WLST alternative follows these steps):

    1. In the Oracle Fusion Middleware Enterprise Manager Console, select Oracle WebLogic Server Administration Console.

    2. Expand the Environment node in the Domain Structure window.

    3. Click Servers.

      The Summary of Servers page appears.

    4. Select WLS_SOAn in the Names column of the table.

      The Settings page for server appears.

    5. Click the SSL tab.

    6. Click Advanced.

    7. Set Hostname Verification to None.

    8. Click Save.
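    These console steps can also be scripted. A minimal WLST sketch, run in a WLST session connected to the Administration Server (placeholder server name):

      edit()
      startEdit()
      cd('/Servers/WLS_SOA3/SSL/WLS_SOA3')
      # Equivalent to setting Hostname Verification to None in the console
      cmo.setHostnameVerificationIgnored(true)
      save()
      activate()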

  8. Configure server migration for the new managed server. To configure server migration using the Oracle WebLogic Server Administration Console:

    Note:

    Because this is a scale-up operation, the node should already contain a Node Manager and environment configured for server migration that includes netmask, interface, wlsifconfig script superuser privileges, and so on. The floating IP for the new SOA managed server should also be already present.

    1. In the Domain Structure window, expand the Environment node and then click Servers. The Summary of Servers page appears.

    2. Click the name of the server (represented as a hyperlink) in Name column of the table for which you want to configure migration. The settings page for the selected server appears.

    3. Click the Migration subtab.

    4. In the Migration Configuration section, select the servers that participate in migration in the Available window by clicking the right arrow. Select the same migration targets as for the servers that already exist on the node.

      For example, for new managed servers on SOAHOST1, which is already running WLS_SOA1, select SOAHOST2. For new managed servers on SOAHOST2, which is already running WLS_SOA2, select SOAHOST1.

      Note:

      The appropriate resources must be available to run the managed servers concurrently during migration.

    5. Choose the Automatic Server Migration Enabled option and click Save.

      This enables the Node Manager to start a failed server on the target node automatically.

    6. Restart the Administration Server, managed servers, and Node Manager.

      To restart the Administration Server, use the procedure in Section 8.4.3, "Starting the Administration Server on SOAHOST1."

  9. Update the cluster address to include the new server:

    1. In the Administration Console, select Environment, and then Cluster.

    2. Click the SOA_Cluster server.

      The Settings screen for the SOA_Cluster appears.

    3. Click Lock & Edit.

    4. Add the new server's address and port to the Cluster address field (a WLST sketch follows this list). For example: SOAHOST1VHN1:8011,SOAHOST2VHN1:8011,SOAHOST1VHNn:8001

    5. Save and activate the changes.
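    A minimal WLST sketch for the cluster address update, run in a WLST session connected to the Administration Server (the addresses are placeholders for your environment):

      edit()
      startEdit()
      cd('/Clusters/SOA_Cluster')
      # Include the listen address and port of every member, including the new server
      cmo.setClusterAddress('SOAHOST1VHN1:8011,SOAHOST2VHN1:8011,SOAHOST1VHNn:8001')
      save()
      activate()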

  10. Test server migration for this new server. To test migration, perform the following from the node where you added the new server:

    1. Stop the WLS_SOAn managed server by running the following command:

      kill -9 pid
      

      You can identify the PID (process ID) of the node using the following command:

      ps -ef | grep WLS_SOAn
      
    2. Monitor the Node Manager Console for a message indicating that WLS_SOAn's floating IP has been disabled.

    3. Wait for the Node Manager to attempt a second restart of WLS_SOAn. Node Manager waits for a fence period of 30 seconds before trying this restart.

    4. Once Node Manager restarts the server, stop it again. Node Manager logs a message indicating that the server will not be restarted again locally.

16.5.2 Scale-up Procedure for Oracle BPM

To scale up the BPM topology:

  1. Using the Oracle WebLogic Server Administration Console, clone WLS_SOA1 into a new managed server. The source managed server to clone should be one that already exists on the node where you want to run the new managed server.

    To clone a managed server:

    1. From the Domain Structure window of the Oracle WebLogic Server Administration Console, expand the Environment node and then Servers. The Summary of Servers page appears.

    2. Click Lock & Edit and select the managed server that you want to clone (for example, WLS_SOA1).

    3. Click Clone.

    4. Name the new managed server WLS_SOAn, where n is a number that identifies the new managed server. In this case, you are adding a new server to Node 1, where WLS_SOA1 was running.

    For the remainder of the steps, you are adding a new server to SOAHOST1, which is already running WLS_SOA1.

  2. For the listen address, assign the host name or IP to use for this new managed server. If you are planning to use server migration as recommended for this server, enter the VIP (also called a floating IP) to enable it to move to another node. The VIP should be different from the one used by the managed server that is already running.

  3. Create JMS servers for SOA, UMS, and BPM on the new managed server.

    To create the JMS servers for SOA, UMS and BPM:

    1. Use the Oracle WebLogic Server Administration Console to create a new persistent store for the new SOAJMSServer (which will be created in a later step) and name it, for example, SOAJMSFileStore_N. Specify the path for the store as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment," as the directory for the JMS persistent stores:

      ORACLE_BASE/admin/domain_name/cluster_name/jms
      
    2. Create a new JMS server for SOA: for example, SOAJMSServer_N. Use the SOAJMSFileStore_N for this JMS server. Target the SOAJMSServer_N server to the recently created managed server (WLS_SOAn).

    3. Create a new persistent store for the new UMS JMS server (which will be created in a later step) and name it, for example, UMSJMSFileStore_N. Specify the path for the store as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment," as the directory for the JMS persistent stores:

      ORACLE_BASE/admin/domain_name/cluster_name/jms
      

      Note:

      It is also possible to assign SOAJMSFileStore_N as the store for the new UMS JMS servers. For the purpose of clarity and isolation, individual persistent stores are used in the following steps.

    4. Create a new JMS Server for UMS: for example, UMSJMSServer_N. Use the UMSJMSFileStore_N for this JMS server. Target the UMSJMSServer_N server to the recently created managed server (WLS_SOAn).

    5. Create a new persistent store for the new BPMJMSServer, for example, BPMJMSFileStore_N. Specify the path for the store. The directory should be on shared storage, as recommended in Section 4.3, "About Recommended Locations for the Different Directories." For example:

      ORACLE_BASE/admin/domain_name/cluster_name/jms
      

      Note:

      You can also assign SOAJMSFileStore_N as store for the new BPM JMS Servers. For the purpose of clarity and isolation, individual persistent stores are used in the following steps.

    6. Create a new JMS Server for BPM, for example, BPMJMSServer_N. Use the BPMJMSFileStore_N for this JMSServer. Target the BPMJMSServer_N Server to the recently created Managed Server (WLS_SOAn).

    7. Target the UMSJMSSystemResource to the SOA_Cluster, as it may have changed during extend operations. To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click UMSJMSSystemResource and open the Targets tab. Make sure all of the servers in the SOA_Cluster appear selected (including the recently cloned WLS_SOAn).

    8. Update the SubDeployment Targets for SOA, UMS and BPM JMS Modules to include the recently created JMS servers.

      To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click the JMS module (for SOA: SOAJMSModule; for BPM: BPMJMSModule; for UMS: UMSJMSSystemResource) represented as a hyperlink in the Names column of the table. The Settings page for the module appears. Open the SubDeployments tab. The subdeployment for the deployment module appears.

      Note:

      This subdeployment module name is a random name in the form of SOAJMSServerXXXXXX, UMSJMSServerXXXXXX, or BPMJMSServerXXXXXX, resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).

      Click on it. Add the new JMS Server (for UMS add UMSJMSServer_N, for SOA add SOAJMSServer_N, and for BPM add BPMJMSServer_N). Click Save and Activate.

  4. Configure Oracle Coherence for deploying composites for the new server, as described in Section 9.4, "Configuring Oracle Coherence for Deploying Composites."

    Note:

    Only the localhost field must be changed for the server. Replace the localhost with the listen address of the new server added:

    -Dtangosol.coherence.localhost=SOAHOST1VHNn
    
  5. Configure the persistent store for the new server. This should be a location visible from other nodes as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment."

    From the Administration Console, select the Server_name, and then the Services tab. Under Default Store, in Directory, enter the path to the folder where you want the default persistent store to store its data files.

  6. Disable host name verification for the new managed server. Before starting and verifying the WLS_SOAn managed server, you must disable host name verification. You can re-enable it after you have configured server certificates for the communication between the Oracle WebLogic Administration Server and the Node Manager in SOAHOSTn.

    If the source server from which the new one has been cloned had already disabled hostname verification, these steps are not required (the hostname verification setting is propagated to the cloned server). To disable host name verification:

    1. In the Oracle Fusion Middleware Enterprise Manager Console, select Oracle WebLogic Server Administration Console.

    2. Expand the Environment node in the Domain Structure window.

    3. Click Servers.

      The Summary of Servers page appears.

    4. Select WLS_SOAn in the Names column of the table.

      The Settings page for server appears.

    5. Click the SSL tab.

    6. Click Advanced.

    7. Set Hostname Verification to None.

    8. Click Save.

  7. Configure server migration for the new managed server. To configure server migration using the Oracle WebLogic Server Administration Console:

    Note:

    Because this is a scale-up operation, the node should already contain a Node Manager and environment configured for server migration that includes netmask, interface, wlsifconfig script superuser privileges, and so on. The floating IP for the new SOA managed server should also be already present.

    1. In the Domain Structure window, expand the Environment node and then click Servers. The Summary of Servers page appears.

    2. Click the name of the server (represented as a hyperlink) in Name column of the table for which you want to configure migration. The settings page for the selected server appears.

    3. Click the Migration subtab.

    4. In the Migration Configuration section, select the servers that participate in migration in the Available window by clicking the right arrow. Select the same migration targets as for the servers that already exist on the node.

      For example, for new managed servers on SOAHOST1, which is already running WLS_SOA1, select SOAHOST2. For new managed servers on SOAHOST2, which is already running WLS_SOA2, select SOAHOST1.

      Note:

      The appropriate resources must be available to run the managed servers concurrently during migration.

    5. Choose the Automatic Server Migration Enabled option and click Save.

      This enables the Node Manager to start a failed server on the target node automatically.

    6. Restart the Administration Server, managed servers, and Node Manager.

      To restart the Administration Server, use the procedure in Section 8.4.3, "Starting the Administration Server on SOAHOST1."

  8. Update the cluster address to include the new server:

    1. In the Administration Console, select Environment, and then Cluster.

    2. Click the SOA_Cluster server.

      The Settings screen for the SOA_Cluster appears.

    3. Click Lock & Edit.

    4. Add the new server's address and port to the Cluster address field. For example:

      SOAHOST1VHN1:8011,SOAHOST2VHN1:8011,SOAHOST1VHNn:8001
      
    5. Save and activate the changes.

  9. Test server migration for this new server. To test migration, perform the following from the node where you added the new server:

    1. Stop the WLS_SOAn managed server using the following command on the managed server PID:

      kill -9 pid
      

      You can identify the PID of the node using the following command:

      ps -ef | grep WLS_SOAn
      
    2. Monitor the Node Manager Console for a message indicating that WLS_SOAn's floating IP has been disabled.

    3. Wait for the Node Manager to attempt a second restart of WLS_SOAn. Node Manager waits for a fence period of 30 seconds before trying this restart.

    4. Once Node Manager restarts the server, stop it again. Node Manager logs a message indicating that the server will not be restarted again locally.

16.5.3 Scale-up Procedure for Oracle BAM

You cannot scale a BAM Server managed server because the BAM Server runs in active-passive mode. However, you can scale a BAM Web Applications server.

You can scale a BAM Web Applications server in two ways: add a new managed server to a node that already runs one (scale up), or add a new managed server on a new node (scale out).

To scale up a BAM Web Applications server, follow the steps in Section 16.5.1, "Scale-up Procedure for Oracle SOA" excluding Step 5, Configuring Oracle Coherence for deploying composites for the new server.

16.5.4 Scale-up Procedure for Oracle Service Bus

You can scale up the Oracle Service Bus servers by adding new managed servers to nodes that are already running one or more managed servers.

Prerequisites

Before scaling up your Oracle Service Bus servers, review the following prerequisites:

  • You already have a cluster that runs managed servers configured with Oracle Service Bus components.

  • The nodes contain a Middleware home, an Oracle home (SOA and Oracle Service Bus), and a domain directory for existing managed servers.

  • The source managed server you clone already exists on the node where you want to run the new managed server.

You can use the existing installations (the Middleware home, and domain directories) for creating new WLS_OSB servers. You do not need to install SOA or Oracle Service Bus binaries in a new location, or run pack and unpack.

To scale up the Oracle Service Bus servers:

  1. Using the Administration Console, clone an existing managed server (for example, WLS_OSB1) into a new managed server:

    1. Select Environment and then Servers.

    2. Select the managed server that you want to clone (for example, WLS_OSB1).

    3. Select Clone.

      Name the new managed server WLS_OSBn, where n is a number to identify the new managed server.

      For these steps you are adding a new server to SOAHOST1, which is already running WLS_OSB1.

  2. For the listen address, assign the virtual host name to use for this new managed server.

    If you are planning to use server migration as recommended for this server, this virtual host name allows it to move to another node. The virtual host name should be different from those used by other managed servers (whether in the same or a different domain) that are running on the nodes used by the Oracle Service Bus/SOA domain.

    To set the managed server listen address:

    1. Log into the Oracle WebLogic Server Administration Console.

    2. In the Change Center, click Lock & Edit.

    3. In the Domain Structure window expand the Environment node.

    4. Click Servers.

      The Summary of Servers page appears.

    5. In the Names column of the table, select the managed server with the listen address you want to update.

      The Settings page for that managed server appears.

    6. Set the Listen Address to SOAHOST1VHNn and click Save (a WLST sketch follows this list).

      Restart the managed server for the change to take effect.
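    A minimal WLST sketch for the same change, run in a WLST session connected to the Administration Server (placeholder server and virtual host names):

      edit()
      startEdit()
      cd('/Servers/WLS_OSBn')
      # Equivalent to setting the Listen Address in the console
      cmo.setListenAddress('SOAHOST1VHNn')
      save()
      activate()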

  3. Update the cluster address to include the new server:

    1. In the Administration console, select Environment, and then Cluster.

    2. Click the OSB_Cluster server.

      The Settings Screen for the OSB_Cluster appears.

    3. In the Change Center, click Lock & Edit.

    4. Add the new server's address and port to the Cluster Address field. For example:

      SOAHOST1VHN2:8011,SOAHOST2VHN2:8011,SOAHOST1VHNn:8011
      
  4. If your Oracle Service Bus configuration includes one or more business services that use JMS request/response functionality, perform the following procedure using the Oracle Service Bus Console after adding the new managed server to the cluster:

    1. In the Change Center, click Create to create a session.

    2. Using the Project Explorer, locate and select a business service that uses JMS request/response.

      Business services of this type display Messaging Service as their Service Type.

    3. At the bottom of the View Details page, click Edit.

    4. If there is a cluster address in the endpoint URI, add the new server to the cluster address.

    5. In the Edit a Business Service - Summary page, click Save.

    6. Repeat the previous steps for each remaining business service that uses JMS request/response.

    7. In the Change Center, click Activate.

    8. Restart the managed server.

    9. Restart the Administration Server.

      The business services are now configured for operation in the extended domain.

      Note:

      For business services that use a JMS MessageID correlation scheme, edit the connection factory settings to add an entry to the table mapping managed servers to queues. For information about configuring queues and topic destinations, see "JMS Server Targeting" in Oracle Fusion Middleware Configuring and Managing JMS for Oracle WebLogic Server.

  5. If your Oracle Service Bus configuration includes one or more proxy services that use JMS endpoints with cluster addresses, perform the following procedure using the Oracle Service Bus Console after adding the new managed server to the cluster:

    1. In the Change Center, click Create to create a session.

    2. Using the Project Explorer, locate and select a proxy service that uses JMS endpoints with cluster addresses.

    3. At the bottom of the View Details page, click Edit.

    4. If there is a cluster address in the endpoint URI, add the new server to the cluster address.

    5. On the Edit a Proxy Service - Summary page, click Save.

    6. Repeat the previous steps for each remaining proxy service that uses JMS endpoints with cluster addresses.

    7. In the Change Center, click Activate.

    8. Restart the managed server.

      The proxy services are now configured for operation in the extended domain.

  6. Update the Oracle Service Bus result cache Coherence configuration for the new server:

    1. Log into Oracle WebLogic Server Administration Console.

    2. In the Change Center, click Lock & Edit.

    3. In the Domain Structure window, expand the Environment node.

    4. Click Servers.

      The Summary of Servers page appears.

    5. Click the name of the server (a hyperlink) in the Name column of the table.

      The settings page for the selected server appears.

    6. Click the Server Start tab.

      Enter the following for WLS_OSBn (on a single line, without carriage returns):

      -DOSB.coherence.localhost=SOAHOST1VHNn -DOSB.coherence.localport=7890
      -DOSB.coherence.wka1=SOAHOST1VHN2 -DOSB.coherence.wka1.port=7890
      -DOSB.coherence.wka2=SOAHOST2VHN2 -DOSB.coherence.wka2.port=7890
      

      Note:

      For this configuration, servers WLS_OSB1 and WLS_OSB2 must be running (listening on the virtual host names SOAHOST1VHN2 and SOAHOST2VHN2, as used in the rest of the guide) when WLS_OSBn is started. This allows WLS_OSBn to join the Coherence cluster started by either WLS_OSB1 or WLS_OSB2 using the WKA addresses specified. In addition, make sure WLS_OSB1 and WLS_OSB2 are started before WLS_OSBn when all three servers are restarted. This ensures WLS_OSBn joins the cluster started by one of WLS_OSB1 or WLS_OSB2. If the order in which the servers start is not important, add the host and port for WLS_OSBn as a WKA for WLS_OSB1 and WLS_OSB2, and also add WLS_OSBn's own host and port as a WKA for WLS_OSBn itself.

    7. Save and activate the changes.

      Restart the Oracle Service Bus servers for the changes to take effect.

  7. Create JMS Servers and persistent stores for Oracle Service Bus reporting/internal destinations on the new managed server.

    1. Use the Oracle WebLogic Server Administration Console to create a new persistent store for the new WseeJMSServer and name it, for example, OSB_rep_JMSFileStore_N. Specify the path for the store. This should be a directory on shared storage, as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment." For example:

      ORACLE_BASE/admin/DOMAIN_NAME/cluster_name/jms/
      

      Target the store to the newly cloned server (WLS_OSBn).

    2. Create a new JMS Server for Oracle Service Bus, for example, OSB_rep_JMSServer_N. Use the OSB_rep_JMSFileStore_N for this JMS server. Target the OSB_rep_JMSServer_N server to the recently created managed server (WLS_OSBn).

    3. Update the SubDeployment targets for the "jmsResources" Oracle Service Bus JMS Module to include the recently created OSB JMS Server:

      Expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears.

      Click jmsResources (a hyperlink in the Names column of the table). The Settings page for jmsResources appears.

      Click the SubDeployments tab. The subdeployment module for jmsresources appears.

      Note:

      This subdeployment module name for destinations is a random name in the form of wlsbJMSServerXXXXXX resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_OSB1 and WLS_OSB2).

      Click the wlsbJMSServerXXXXXX subdeployment and update the targets to include the new OSB_rep_JMSServer_n server.

  8. Create JMS Servers, persistent stores and destinations for OSB JAX-RPC on the new managed server.

    Note:

    WebLogic Advanced Web Services for JAX-RPC Extension uses regular (non-distributed) destinations to ensure that a locally processed request on a service gets enqueued only to a local member.

    1. Use the Oracle WebLogic Server Administration Console to create a new persistent store for the new WseeJMSServer and name it, for example, Wsee_rpc_JMSFileStore_N. Specify the path for the store. This should be a directory on shared storage, as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment."

    2. Create a new JMS Server for OSB JAX-RPC, for example, OSB_rpc_JMSServer_N. Use the Wsee_rpc_JMSFileStore_N for this JMSServer. Target the OSB_rpc_JMSServer_N Server to the recently created Managed Server (WLS_OSBn).

    3. Update the WseeJmsModule OSB JMS module with destinations and the recently created OSB JMS server. To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click WseeJmsModule (a hyperlink in the Names column of the table). The Settings page for WseeJmsModule appears. Follow steps 4 through 10 to complete this step.

    4. In the Change Center, click Lock & Edit and click New.

    5. Select Queue and click Save.

    6. Click Create a New Subdeployment.

    7. Accept the default name and click OK.

    8. Select OSB_rpc_JMSServer_n as the target and click Finish.

    9. Update the local JNDI name for the destination:

      In the Change Center, click Lock & Edit.

      In the Settings for the WseeJmsModule page, click the DefaultCallbackQueue-WseeJmsServer_auto_n destination.

      In the general Configuration tab, click Advanced.

      Update the local JNDI name to weblogic.wsee.DefaultCallbackQueue.

      Activate the changes.

    10. Repeat steps 4 through 9 for the DefaultQueue-WseeJmsServer_auto_n queue, using weblogic.wsee.DefaultQueue-WseeJmsServer_auto_n as the JNDI name and weblogic.wsee.DefaultQueue as the local JNDI name.

  9. Create a new SAF agent and target it to the newly added managed server:

    1. In the Oracle WebLogic Server Administration Console, expand Services, Messaging, and then Store-and-Forward Agents.

    2. Add a new SAF agent ReliableWseeSAFAgent_auto_N.

    3. Select persistent store Wsee_rpc_JMSFileStore_N (persistent store created for OSB JAX-RPC).

    4. Target the SAF Agent to the new managed server and activate the changes (a WLST sketch follows this list).
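    A minimal WLST sketch for this step, run in a WLST session connected to the Administration Server and assuming the persistent store from the previous step already exists (names as used above):

      edit()
      startEdit()
      # Create the SAF agent, assign the JAX-RPC store, and target the new server
      saf = cmo.createSAFAgent('ReliableWseeSAFAgent_auto_N')
      saf.setStore(getMBean('/FileStores/Wsee_rpc_JMSFileStore_N'))
      saf.addTarget(getMBean('/Servers/WLS_OSBn'))
      save()
      activate()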

  10. Configure a TX persistent store for the new server in a location visible from the other nodes.

    1. From the Administration Console, select Server_name and then the Services tab.

    2. Under Default Store, in Directory, enter the path to the folder where you want the default persistent store to store its data files.

  11. Disable host name verification for the new managed server. Before starting and verifying the WLS_OSBn managed server, disable host name verification. You can re-enable it after you have configured server certificates for the communication between the Oracle WebLogic Administration Server and the Node Manager in SOAHOSTn. You can skip these steps if you have already disabled hostname verification for the source server from which the new server was cloned (the hostname verification setting is propagated to the cloned server).

    To disable host name verification:

    1. In the Oracle Enterprise Manager Console, select Oracle WebLogic Server Administration Console.

    2. Expand the Environment node in the Domain Structure window and click Servers.

      The Summary of Servers page appears.

    3. Select WLS_OSBn in the Names column of the table.

      The Settings page for the server appears.

    4. Click the SSL tab and click Advanced.

    5. Set Hostname Verification to None and click Save.

  12. If it is not already started, start the Node Manager on the node. To start the Node Manager, use the installation in shared storage from the existing nodes as follows:

    SOAHOSTN> WL_HOME/server/bin/startNodeManager
    
  13. Start and test the new managed server from the Administration Console.

    1. Shut down the existing managed servers in the cluster.

    2. Ensure that the newly created managed server, WLS_OSBn, is up.

    3. Access the application on the newly created managed server using the following URL:

      http://vip:port/sbinspection.wsil
      
  14. Configure Server Migration for the new managed server.

    Note:

    Since this is a scale-up operation, the node should already contain a Node Manager and environment configured for server migration. The floating IP for the new Oracle Service Bus managed server should already be present.

    To configure server migration:

    1. Log into the Administration Console.

    2. In the left pane, expand Environment and select Servers.

    3. Select the name of the new managed server for which you want to configure migration.

    4. Click the Migration tab.

    5. In the Available field, in the Migration Configuration section, select the machines to which migration is allowed and click the right arrow.

    6. Select the same migration targets used for the servers that already exist on the node.

      For example, for new managed servers on SOAHOST1, which is already running WLS_OSB1, select SOAHOST2. For new managed servers on SOAHOST2, which is already running WLS_OSB2, select SOAHOST1.

      Make sure the appropriate resources are available to run the managed servers concurrently during migration.

    7. Select the Automatic Server Migration Enabled option and click Save.

      This enables the Node Manager to start a failed server on the target node automatically.

    8. Restart the Administration Server, managed servers, and Node Manager.

  15. Test server migration for this new server from the node where you added the new server:

    1. Stop the WLS_OSBn managed server by running the following command on the PID (process ID) of the managed server:

      kill -9 pid
      

      You can identify the PID of the node using the following command:

      ps -ef | grep WLS_OSBn
      

      Note:

      For Windows, you can terminate the Managed Server using the taskkill command. For example:

      taskkill /f /pid pid
      

      Where pid is the process ID of the Managed Server.

      To determine the process ID of the WLS_OSBn Managed Server, run the following command:

      MW_HOME\jrockit_160_20_D1.0.1-2124\bin\jps -l -v
      
    2. In the Node Manager Console, you can see a message indicating that WLS_OSBn's floating IP has been disabled.

    3. Wait for the Node Manager to try a second restart of WLS_OSBn.

      Node Manager waits for a fence period of 30 seconds before trying this restart.

    4. Once Node Manager restarts the server, stop it again.

      Node Manager logs a message indicating that the server will not be restarted again locally.

      Note:

      After a server is migrated, to fail it back to its original node, stop the managed server from the Oracle WebLogic Administration Console and then start it again. The appropriate Node Manager starts the managed server on the machine to which it was originally assigned.

16.6 Scaling Out the Topology (Adding Managed Servers to New Nodes)

When you scale out the topology, you add new managed servers, configured with SOA and/or WSM-PM, to new nodes.

Before performing the steps in this section, check that you meet the prerequisites: the existing Middleware home (including the SOA installation) and the domain directory must reside on shared storage that the new node can mount.

16.6.1 Scale-out Procedure for Oracle SOA

To scale out the topology:

  1. On the new node, mount the existing FMW Home, which should include the SOA installation and the domain directory, and ensure that the new node has access to this directory, just like the rest of the nodes in the domain.

  2. To attach ORACLE_HOME in shared storage to the local Oracle Inventory, execute the following command:

    SOAHOSTn> cd ORACLE_COMMON_HOME/oui/bin
    SOAHOSTn> ./attachHome.sh -jreLoc ORACLE_BASE/fmw/jrockit_160_<version>
    

    To update the Middleware home list, create (or edit, if another WebLogic installation exists in the node) the $HOME/bea/beahomelist file and add MW_HOME to it.

  3. Log in to the Oracle WebLogic Administration Console.

  4. Create a new machine for the new node that will be used, and add the machine to the domain.

  5. Update the machine's Node Manager's address to map the IP of the node that is being used for scale out.

  6. Use the Oracle WebLogic Server Administration Console to clone WLS_SOA1/WLS_WSM1 into a new managed server. Name it WLS_SOAn/WLS_WSMn, where n is a number. Assign it to the new machine created above.

    Note:

    These steps assume that you are adding a new server to node n, where no managed server was running previously.

  7. Assign the host name or IP to use for the new managed server for the listen address of the managed server.

    If you are planning to use server migration for this server (which Oracle recommends) this should be the VIP (also called a floating IP) for the server. This VIP should be different from the one used for the existing managed server.

  8. For WLS_WSM servers, run the Java Object Cache configuration utility again to include the new server in the JOC distributed cache as described in Section 8.5.5, "Configuring the Java Object Cache for Oracle WSM."

  9. Create JMS Servers for SOA and UMS on the new managed server.

    Note:

    You do not have to create JMS servers for SOA and UMS on the new managed server if you are scaling out the WLS_WSM managed server or the BAM Web Applications system. This procedure is required only if you are scaling out the WLS_SOA managed servers.

    Create the JMS servers for SOA and UMS as follows:

    1. Use the Oracle WebLogic Server Administration Console to create a new persistent store for the new SOAJMSServer (which will be created in a later step) and name it, for example, SOAJMSFileStore_N. Specify the path for the store as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment" as the directory for the JMS persistent stores:

      ORACLE_BASE/admin/domain_name/cluster_name/jms/
      
    2. Create a new JMS server for SOA, for example, SOAJMSServer_N. Use the SOAJMSFileStore_N for this JMS server. Target the SOAJMSServer_N Server to the recently created managed server (WLS_SOAn).

    3. Create a new persistence store for the new UMSJMSServer, and name it, for example, UMSJMSFileStore_N. As the directory for the persistent store, specify the path recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment" as the directory for the JMS persistent stores:

      ORACLE_BASE/admin/domain_name/cluster_name/jms
      

      Note:

      It is also possible to assign SOAJMSFileStore_N as the store for the new UMS JMS servers. For the purpose of clarity and isolation, individual persistent stores are used in the following steps.

    4. Create a new JMS server for UMS: for example, UMSJMSServer_N. Use the UMSJMSFileStore_N for this JMS server. Target the UMSJMSServer_N Server to the recently created managed server (WLS_SOAn).

    5. Update the SubDeployment targets for the SOA JMS Module to include the recently created SOA JMS server. To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click SOAJMSModuleUDDs (represented as a hyperlink in the Names column of the table). The Settings page for SOAJMSModuleUDDs appears. Open the SubDeployments tab. The SOAJMSSubDM subdeployment appears.

      Note:

      This subdeployment module results from updating the JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2) with the Uniform Distributed Destination Script (soa-createUDD.py), which is required for the initial Enterprise Deployment topology setup.

      Click on it. Add the new JMS server for SOA called SOAJMSServer_N to this subdeployment. Click Save.

    6. Target the UMSJMSSystemResource to the SOA_Cluster, as it may have changed during extend operations. To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click UMSJMSSystemResource and open the Targets tab. Make sure all of the servers in the SOA_Cluster appear selected (including the recently cloned WLS_SOAn).

    7. Update the SubDeployment Targets for SOA and UMS JMS Modules (if applicable) to include the recently created JMS servers.

      To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click the JMS module (for SOA: SOAJMSModule; for UMS: UMSJMSSystemResource) represented as a hyperlink in the Names column of the table. The Settings page for the module appears. Open the SubDeployments tab. The subdeployment for the deployment module appears.

      Note:

      This subdeployment module name is a random name in the form of SOAJMSServerXXXXXX, or UMSJMSServerXXXXXX, resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).

      Click on it. Add the new JMS Server (for UMS add UMSJMSServer_N, for SOA add SOAJMSServer_N). Click Save and Activate. (A WLST sketch for updating the subdeployment targets follows this list.)
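      A minimal WLST sketch for updating the subdeployment targets, run in a WLST session connected to the Administration Server (module and server names as used in the steps above):

        edit()
        startEdit()
        # Add the new SOA JMS server to the SOAJMSModuleUDDs subdeployment
        cd('/JMSSystemResources/SOAJMSModuleUDDs/SubDeployments/SOAJMSSubDM')
        cmo.addTarget(getMBean('/JMSServers/SOAJMSServer_N'))
        save()
        activate()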

  10. Run the pack command on SOAHOST1 to create a template pack as follows:

    cd ORACLE_COMMON_HOME/common/bin
    
    ./pack.sh -managed=true -domain=ORACLE_BASE/admin/domain_name/aserver/domain_name
    -template=soadomaintemplateScale.jar -template_name=soa_domain_templateScale
    

    Run the following command on SOAHOST1 to copy the template file created to SOAHOSTN:

    scp soadomaintemplateScale.jar oracle@SOAHOSTN:ORACLE_COMMON_HOME/common/bin
    

    Run the unpack command on SOAHOSTN to unpack the template in the managed server domain directory as follows:

    SOAHOSTN> cd ORACLE_COMMON_HOME/common/bin

    SOAHOSTN> ./unpack.sh -domain=ORACLE_BASE/admin/domain_name/mserver/domain_name/
    -template=soadomaintemplateScale.jar
    -app_dir=ORACLE_BASE/admin/domain_name/mserver/applications
    

    Note:

    The configuration steps provided in this enterprise deployment topology are documented with the assumption that a local (per node) domain directory is used for each managed server.

  11. Configure Oracle Coherence for deploying composites for the new server, as described in Section 9.4, "Configuring Oracle Coherence for Deploying Composites."

    Note:

    Only the localhost field needs to be changed for the server. Replace the localhost with the listen address of the new server added:

    -Dtangosol.coherence.localhost=SOAHOST1VHNn
    
  12. Configure the persistent store for the new server. This should be a location visible from other nodes as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment."

    From the Administration Console, select the Server_name, and then the Services tab. Under Default Store, in Directory, enter the path to the folder where you want the default persistent store to store its data files.

  13. Disable host name verification for the new managed server. Before starting and verifying the WLS_SOAn managed server, you must disable host name verification. You can re-enable it after you have configured server certificates for the communication between the Oracle WebLogic Administration Server and the Node Manager in SOAHOSTn.

    If the source server from which the new one has been cloned had already disabled hostname verification, these steps are not required (the hostname verification setting is propagated to the cloned server). To disable host name verification:

    1. In the Oracle Fusion Middleware Enterprise Manager Console, select Oracle WebLogic Server Administration Console.

    2. Expand the Environment node in the Domain Structure window.

    3. Click Servers.

      The Summary of Servers page appears.

    4. Select WLS_SOAn in the Names column of the table.

      The Settings page for server appears.

    5. Click the SSL tab.

    6. Click Advanced.

    7. Set Hostname Verification to None.

    8. Click Save.

  14. Start Node Manager on the new node. To start Node Manager, use the installation in shared storage from the existing nodes, and start Node Manager by passing the host name of the new node as a parameter as follows:

    SOAHOSTN> WL_HOME/server/bin/startNodeManager
    
  15. Start and test the new managed server from the Oracle WebLogic Server Administration Console.

    1. Ensure that the newly created managed server, WLS_SOAn, is running.

    2. Access the application on the load balancer using the following URL:

      https://soa.mycompany.com/soa-infra
      

      The application should be functional.

      Note:

      The HTTP Servers in the topology should round-robin requests to the newly added server (a few requests, depending on the number of servers in the cluster, may be required to hit the new server). It is not necessary to add all servers in a cluster to the WebLogicCluster directive in Oracle HTTP Server's mod_wl_ohs.conf file. However, routing to new servers in the cluster takes place only if at least one of the servers listed in the WebLogicCluster directive is running.

  16. Configure server migration for the new managed server.

    Note:

    Because this new node uses an existing shared storage installation, the node is already using a Node Manager and an environment configured for server migration that includes the netmask, interface, and wlsifconfig script superuser privileges. The floating IP for the new SOA managed server is already present in the new node.

    Log into the Oracle WebLogic Server Administration Console and configure server migration.

    To configure server migration (a WLST sketch follows these steps):

    1. Expand the Environment node in the Domain Structure window and then choose Servers. The Summary of Servers page appears.

    2. Select the server (represented as a hyperlink) for which you want to configure migration from the Names column of the table. The Settings page for that server appears.

    3. Click the Migration tab.

    4. In the Available field of the Migration Configuration section, click the right arrow to select the machines to which to allow migration.

      Note:

      Specify the least-loaded machine as the migration target for the new server. The required capacity planning must be completed so that this node has enough available resources to sustain an additional managed server.

    5. Select Automatic Server Migration Enabled. This enables the Node Manager to start a failed server on the target node automatically.

    6. Click Save.

    7. Restart the Administration Server, managed servers, and the Node Manager.

      To restart the Administration Server, use the procedure in Section 8.4.3, "Starting the Administration Server on SOAHOST1."
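
    As referenced above, the migration settings can also be applied with WLST. A minimal sketch, assuming an edit session against the Administration Server (the console steps above remain the reference; candidate machines are selected there):

      edit()
      startEdit()
      cd('/Servers/WLS_SOAn')
      # Equivalent of Migration > Automatic Server Migration Enabled
      cmo.setAutoMigrationEnabled(true)
      # Candidate machines (the Available field above) can be restricted
      # with set('CandidateMachines', ...) if needed
      save()
      activate()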

  17. Update the cluster address to include the new server (a WLST sketch follows these steps):

    1. In the Administration Console, select Environment, and then Clusters.

    2. Click SOA_Cluster.

      The Settings screen for the SOA_Cluster appears.

    3. Click Lock & Edit.

    4. Add the new server's address and port to the Cluster address field. For example: SOAHOST1VHN1:8011,SOAHOST2VHN1:8011,SOAHOSTnVHN1:8011

    5. Save and activate the changes.
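
    As referenced above, a minimal WLST sketch of the same change (the address list is the example value from the previous step):

      edit()
      startEdit()
      cd('/Clusters/SOA_Cluster')
      # Append the new server's address and port to the cluster address
      cmo.setClusterAddress('SOAHOST1VHN1:8011,SOAHOST2VHN1:8011,SOAHOSTnVHN1:8011')
      save()
      activate()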

  18. Test server migration for this new server from the node where you added the new server:

    1. Abruptly stop the WLS_SOAn managed server by running the following command:

      kill -9 pid
      

      You can identify the PID (process ID) of the managed server using the following command:

      ps -ef | grep WLS_SOAn
      
    2. In the Node Manager Console you should see a message indicating that WLS_SOAn's floating IP has been disabled.

    3. Wait for the Node Manager to try a second restart of WLS_SOAn. Node Manager waits for a fence period of 30 seconds before trying this restart.

    4. Once Node Manager restarts the server, stop it again. Now Node Manager should log a message indicating that the server will not be restarted again locally.

16.6.2 Scaling out the BPM Topology

To scale out the topology:

  1. On the new node, mount the existing FMW Home, which should include the SOA installation and the domain directory, and ensure that the new node has access to this directory, just like the rest of the nodes in the domain.

  2. To attach ORACLE_HOME in shared storage to the local Oracle Inventory, execute the following command:

    SOAHOSTn> cd ORACLE_COMMON_HOME/oui/bin/
    SOAHOSTn> ./attachHome.sh -jreLoc ORACLE_BASE/fmw/jrockit_160_<version>
    

    To update the Middleware home list, create (or edit, if another WebLogic installation exists in the node) the $HOME/bea/beahomelist file and add MW_HOME to it.

  3. Log in to the Oracle WebLogic Administration Console.

  4. Create a new machine for the new node that will be used, and add the machine to the domain.

  5. Update the machine's Node Manager address to map to the IP of the node that is being used for scale out.
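
    Steps 4 and 5 can also be scripted with WLST. A minimal sketch, assuming an edit session against the Administration Server; the machine name and address are placeholders for the new node:

      edit()
      startEdit()
      # Create a machine for the new node
      cd('/')
      cmo.createUnixMachine('SOAHOSTn')
      # Point the machine's Node Manager at the new node's address
      cd('/Machines/SOAHOSTn/NodeManager/SOAHOSTn')
      cmo.setListenAddress('SOAHOSTn')
      save()
      activate()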

  6. Use the Oracle WebLogic Server Administration Console to clone WLS_SOA1 into a new managed server. Name it WLS_SOAn, where n is a number. Assign it to the new machine created above.

    Note:

    These steps assume that you are adding a new server to node n, where no managed server was running previously.

  7. Assign the host name or IP to use for the new managed server for the listen address of the managed server.

    If you are planning to use server migration for this server (which Oracle recommends), this should be the VIP (also called a floating IP) of the server. This VIP should be different from the one used by the existing managed server.
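
    A minimal WLST sketch of this change, assuming an edit session against the Administration Server (SOAHOSTnVHN1 is a placeholder for the new server's VIP):

      edit()
      startEdit()
      cd('/Servers/WLS_SOAn')
      # Use the floating IP (VIP) as the listen address
      cmo.setListenAddress('SOAHOSTnVHN1')
      save()
      activate()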

  8. Create JMS servers for SOA, UMS, and (if applicable) BPM on the new managed server.

    Create the JMS servers and their persistent stores as follows (a WLST sketch of the first two substeps appears after this list):

    1. Use the Oracle WebLogic Server Administration Console to create a new persistent store for the new SOAJMSServer (which will be created in a later step) and name it, for example, SOAJMSFileStore_N. Specify the path for the store as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment" as the directory for the JMS persistent stores:

      ORACLE_BASE/admin/domain_name/cluster_name/jms/
      
    2. Create a new JMS server for SOA, for example, SOAJMSServer_N. Use the SOAJMSFileStore_N for this JMS server. Target the SOAJMSServer_N Server to the recently created managed server (WLS_SOAn).

    3. Create a new persistent store for the new UMSJMSServer, and name it, for example, UMSJMSFileStore_N. As the directory for the persistent store, specify the path recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment" as the directory for the JMS persistent stores:

      ORACLE_BASE/admin/domain_name/cluster_name/jms
      

      Note:

      It is also possible to assign SOAJMSFileStore_N as the store for the new UMS JMS servers. For the purpose of clarity and isolation, individual persistent stores are used in the following steps.

    4. Create a new JMS server for UMS: for example, UMSJMSServer_N. Use the UMSJMSFileStore_N for this JMS server. Target the UMSJMSServer_N Server to the recently created managed server (WLS_SOAn).

    5. Create a new persistent store for the new BPMJMSServer, for example, BPMJMSFileStore_N. Specify the path for the store. This should be a directory on shared storage as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment."

      ORACLE_BASE/admin/domain_name/cluster_name/jms
      

      Note:

      You can also assign SOAJMSFileStore_N as store for the new BPM JMS Servers. For the purpose of clarity and isolation, individual persistent stores are used in the following steps.

    6. Create a new JMS Server for BPM, for example, BPMJMSServer_N. Use the BPMJMSFileStore_N for this JMSServer. Target the BPMJMSServer_N Server to the recently created Managed Server (WLS_SOAn).

    7. Update the SubDeployment targets for the SOA JMS Module to include the recently created SOA JMS server. To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click SOAJMSModuleUDDs (represented as a hyperlink in the Names column of the table). The Settings page for SOAJMSModuleUDDs appears. Open the SubDeployments tab. The SOAJMSSubDM subdeployment appears.

      Note:

      This subdeployment module results from updating the JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2) with the Uniform Distributed Destination Script (soa-createUDD.py), which is required for the initial Enterprise Deployment topology setup.

      Click on it. Add the new JMS server for SOA called SOAJMSServer_N to this subdeployment. Click Save.

    8. Target the UMSJMSSystemResource to the SOA_Cluster as it may have changed during extend operations. To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click UMSJMSSystemResource and open the Targets tab. Make sure all of the servers in the SOA_Cluster appear selected (including the recently cloned WLS_SOAn).

    9. Update the SubDeployment Targets for SOA, UMS and BPM JMS Modules (if applicable) to include the recently created JMS servers.

      To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click the JMS module (for SOA: SOAJMSModule, for BPM: BPMJMSModule, and for UMS: UMSJMSSystemResource) represented as a hyperlink in the Names column of the table. The Settings page for the module appears. Open the SubDeployments tab. The subdeployment for the deployment module appears.

      Note:

      This subdeployment module name is a random name in the form of SOAJMSServerXXXXXX, UMSJMSServerXXXXXX, or BPMJMSServerXXXXXX, resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).

      Click on it. Add the new JMS server (for UMS add UMSJMSServer_N, for SOA add SOAJMSServer_N, for BPM add BPMJMSServer_N). Click Save and Activate.
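
    As referenced above, the first two substeps (the persistent store and JMS server for SOA) can also be scripted with WLST. A minimal sketch; the store directory, store and server names, and target server are the example values used above, and UMS and BPM follow the same pattern:

      # jarray and ObjectName are usually preloaded by WLST; import explicitly if needed
      import jarray
      from javax.management import ObjectName

      edit()
      startEdit()
      # Persistent store on shared storage for the new SOA JMS server
      cd('/')
      cmo.createFileStore('SOAJMSFileStore_N')
      cd('/FileStores/SOAJMSFileStore_N')
      cmo.setDirectory('ORACLE_BASE/admin/domain_name/cluster_name/jms')
      set('Targets', jarray.array([ObjectName('com.bea:Name=WLS_SOAn,Type=Server')], ObjectName))
      # JMS server backed by that store, targeted to the new managed server
      cd('/')
      cmo.createJMSServer('SOAJMSServer_N')
      cd('/JMSServers/SOAJMSServer_N')
      cmo.setPersistentStore(getMBean('/FileStores/SOAJMSFileStore_N'))
      set('Targets', jarray.array([ObjectName('com.bea:Name=WLS_SOAn,Type=Server')], ObjectName))
      save()
      activate()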

  9. Run the pack command on SOAHOST1 to create a template pack as follows:

    cd ORACLE_COMMON_HOME/common/bin
    
    ./pack.sh -managed=true -domain=ORACLE_BASE/admin/domain_name/aserver/domain_name
    -template=soadomaintemplateScale.jar -template_name=soa_domain_templateScale
    

    Run the following command on SOAHOST1 to copy the template file created to SOAHOSTN

    scp soadomaintemplateScale.jar oracle@SOAHOSTN:ORACLE_COMMON_HOME/common/bin
    

    Run the unpack command on SOAHOSTn to unpack the template in the managed server domain directory as follows:

    SOAHOSTN> cd ORACLE_COMMON_HOME/common/bin
    
    SOAHOSTN> ./unpack.sh -domain=ORACLE_BASE/admin/domain_name/mserver/domain_name/
    -template=soadomaintemplateScale.jar
    -app_dir=ORACLE_BASE/admin/domain_name/mserver/applications
    

    Note:

    The configuration steps provided in this enterprise deployment topology are documented with the assumption that a local (per node) domain directory is used for each managed server.

  10. Configure Oracle Coherence for deploying composites for the new server, as described in Section 9.4, "Configuring Oracle Coherence for Deploying Composites."

    Note:

    Only the localhost field needs to be changed for the server. Replace localhost with the listen address of the new server:

    -Dtangosol.coherence.localhost=SOAHOSTnVHN1
    
    
  11. Configure the persistent store for the new server. This should be a location visible from other nodes as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment."

    From the Administration Console, select the Server_name, and then the Services tab. Under Default Store, in Directory, enter the path to the folder where you want the default persistent store to store its data files.

  12. Disable host name verification for the new managed server. Before starting and verifying the WLS_SOAn managed server, you must disable host name verification. You can re-enable it after you have configured server certificates for the communication between the Oracle WebLogic Administration Server and the Node Manager in SOAHOSTn.

    If the source server from which the new one has been cloned had already disabled hostname verification, these steps are not required (the hostname verification setting is propagated to the cloned server). To disable host name verification:

    1. In the Oracle Fusion Middleware Enterprise Manager Console, select Oracle WebLogic Server Administration Console.

    2. Expand the Environment node in the Domain Structure window.

    3. Click Servers.

      The Summary of Servers page appears.

    4. Select WLS_SOAn in the Names column of the table.

      The Settings page for server appears.

    5. Click the SSL tab.

    6. Click Advanced.

    7. Set Hostname Verification to None.

    8. Click Save.

  13. Start Node Manager on the new node. To start Node Manager, use the installation in shared storage from the existing nodes, and start Node Manager by passing the host name of the new node as a parameter as follows:

    SOAHOSTN> WL_HOME/server/bin/startNodeManager
    
  14. Start and test the new managed server from the Oracle WebLogic Server Administration Console.

    1. Ensure that the newly created managed server, WLS_SOAn, is running.

    2. Access the application on the load balancer using the following URL:

      https://soa.mycompany.com/soa-infra
      

      The application should be functional.

      Note:

      The HTTP Servers in the topology should round-robin requests to the newly added server (a few requests, depending on the number of servers in the cluster, may be required to hit the new server). It is not required to add all servers in a cluster to the WebLogicCluster directive in Oracle HTTP Server's mod_wl_ohs.conf file. However, routing to new servers in the cluster takes place only if at least one of the servers listed in the WebLogicCluster directive is running.

  15. Configure server migration for the new managed server.

    Note:

    Because this new node uses an existing shared storage installation, the node is already using a Node Manager and an environment configured for server migration that includes the netmask, interface, and wlsifconfig script superuser privileges. The floating IP for the new SOA managed server is already present in the new node.

    Log into the Oracle WebLogic Server Administration Console and configure server migration.

    To configure server migration:

    1. Expand the Environment node in the Domain Structure window and then choose Servers. The Summary of Servers page appears.

    2. Select the server (represented as a hyperlink) for which you want to configure migration from the Names column of the table. The Settings page for that server appears.

    3. Click the Migration tab.

    4. In the Available field of the Migration Configuration section, click the right arrow to select the machines to which to allow migration.

      Note:

      Specify the least-loaded machine as the migration target for the new server. The required capacity planning must be completed so that this node has enough available resources to sustain an additional managed server.

    5. Select Automatic Server Migration Enabled. This enables the Node Manager to start a failed server on the target node automatically.

    6. Click Save.

    7. Restart the Administration Server, managed servers, and the Node Manager.

      To restart the Administration Server, use the procedure in Section 8.4.3, "Starting the Administration Server on SOAHOST1."

  16. Update the cluster address to include the new server:

    1. In the Administration Console, select Environment, and then Clusters.

    2. Click SOA_Cluster.

      The Settings screen for the SOA_Cluster appears.

    3. Click Lock & Edit.

    4. Add the new server's address and port to the Cluster address field. For example:

      SOAHOST1VHN1:8011,SOAHOST2VHN1:8011,SOAHOSTnVHN1:8011
      
    5. Save and activate the changes.

  17. Test server migration for this new server from the node where you added the new server:

    1. Abruptly stop the WLS_SOAn managed server by running the following command:

      kill -9 pid
      

      You can identify the PID (process ID) of the managed server using the following command:

      ps -ef | grep WLS_SOAn
      
    2. In the Node Manager Console you should see a message indicating that WLS_SOAn's floating IP has been disabled.

    3. Wait for the Node Manager to try a second restart of WLS_SOAn. Node Manager waits for a fence period of 30 seconds before trying this restart.

    4. Once Node Manager restarts the server, stop it again. Now Node Manager should log a message indicating that the server will not be restarted again locally.

16.6.3 Scale-out Procedure for Oracle BAM

You cannot scale out a BAM Server managed server because the BAM Server runs in active-passive mode. However, you can scale a BAM Web Applications server.

There are two ways to scale a BAM Web Applications server: scaling up (adding a managed server to a node that already runs one, see Section 16.5) and scaling out (adding a managed server on a new node, described here).

To scale out a BAM Web Applications server, follow the steps in Section 16.6.1, "Scale-out Procedure for the Oracle SOA," excluding Step 11 (configuring Oracle Coherence for deploying composites for the new server).

16.6.4 Scale-out Procedure for Oracle Service Bus

When you scale out the topology, you add new managed servers configured with Oracle Service Bus to the new nodes.

Prerequisites

Before scaling out the Oracle Service Bus topology, make sure you meet these prerequisites:

  • There must be existing nodes running managed servers configured with Oracle Service Bus within the topology.

  • The new node can optionally access the existing home directories for the WebLogic Server and Oracle Service Bus installations. In that case, use the existing installations in shared storage to create the new WLS_OSB managed server. You do not need to install the WebLogic Server or Oracle Service Bus binaries in every new location, but you do need to run the pack and unpack commands to bootstrap the domain configuration in the new node, unless you are scaling the Oracle Service Bus server to machines that already contain other servers of the same domain (the SOA servers).

  • If there is no existing installation in shared storage, install WebLogic Server, SOA, and Oracle Service Bus in the new nodes.

  • When multiple servers in different nodes share an ORACLE_HOME or WL_HOME, keep the Oracle Inventory and Middleware home list in those nodes updated for consistency in the installations and application of patches. To update the oraInventory in a node and attach an installation in a shared storage to it, use the attachHome.sh file located in the following directory:

    ORACLE_HOME/oui/bin/
    

    To update the Middleware home list to add or remove a WL_HOME, edit the beahomelist file located in the following directory:

    user_home/bea/
    

To scale out the topology:

  1. On the new node, mount the existing Middleware Home. It should include the Oracle Service Bus and SOA (if homes are shared) installation and the domain directory. Ensure that the new node has access to this directory, just like other nodes in the domain.

  2. Attach ORACLE_HOME in shared storage to the local Oracle Inventory using the following command:

    SOAHOSTn> cd ORACLE_BASE/product/fmw/soa/oui/bin/
    SOAHOSTn> ./attachHome.sh -jreLoc ORACLE_BASE/fmw/jrockit_160_<version>
    

    To update the Middleware home list, create (or edit, if another WebLogic installation exists in the node) the beahomelist file located in the following directory:

    user_home/bea/
    

    Add ORACLE_BASE/product/fmw to the list.

  3. Log in to the Oracle WebLogic Administration Console.

  4. Create a new machine for the new node that will be used, and add the machine to the domain.

  5. Update the machine's Node Manager address to map to the IP of the node that is being used for scale out.

  6. Use the Oracle WebLogic Server Administration Console to clone WLS_OSB1 into a new managed server. Name it WLS_OSBn, where n is a number, and assign it to the new machine.

    Note:

    For these steps, you are adding a new server to node n, where no managed server was running previously.

  7. For the listen address, assign the virtual host name to use for this new managed server. If you plan to use server migration for this server (as Oracle recommends), this virtual host name allows the server to move to another node. The virtual host name should be different from those used by other managed servers (whether in the same or a different domain) that are running on the nodes used by the OSB/SOA domain.

    1. Log into the Oracle WebLogic Server Administration Console.

    2. In the Change Center, click Lock & Edit.

    3. Expand the Environment node in the Domain Structure window.

    4. Click Servers.

      The Summary of Servers page appears.

    5. Select the managed server whose listen address you want to update in the Names column of the table.

      The Setting page for that managed server appears.

    6. Set the Listen Address to SOAHOSTnVHN1 and click Save.

    7. Save and activate the changes.

    8. Restart the managed server.

  8. Update the cluster address to include the new server:

    1. Select Environment, and then Clusters from the Administration Console.

    2. Click OSB_Cluster.

      The Settings Screen for the OSB_Cluster appears.

    3. In the Change Center, click Lock & Edit.

    4. Add the new server's address and port to the Cluster Address field. For example:

      SOAHOST1VHN1:8011,SOAHOST2VHN1:8011,SOAHOSTNVHN1:8011
      
  9. Create JMS servers and persistent stores for Oracle Service Bus reporting/internal destinations on the new managed server.

    1. Use the Oracle WebLogic Server Administration Console to create a new persistent store for the new WseeJMSServer and name it, for example, OSB_rep_JMSFileStore_N. Specify the path for the store. This should be a directory on shared storage as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment."

      Note:

      This directory must exist before the managed server is started or the start operation fails.

      ORACLE_BASE/admin/domain_name/cluster_name/jms/OSB_rep_JMSFileStore_N
      
    2. Create a new JMS Server for Oracle Service Bus, for example, OSB_rep_JMSServer_N. Use the OSB_rep_JMSFileStore_N for this JMSServer. Target the OSB_rep_JMSServer_N Server to the recently created managed server (WLS_OSBn).

    3. Update the SubDeployment targets for the jmsresources Oracle Service Bus JMS Module to include the recently created Oracle Service Bus JMS Server:

      Expand the Services node and then expand the Messaging node.

      Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears.

      Click jmsresources (a hyperlink in the Names column of the table). The Settings page for jmsResources appears.

      Open the SubDeployments tab. The subdeployment module for jmsresources appears.

      Note:

      This subdeployment module name is a random name in the form of wlsbJMSServerXXXXXX resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_OSB1 and WLS_OSB2).

      Click the wlsbJMSServerXXXXXX subdeployment and update the targets to include the new OSB_rep_JMSServer_N server.

  10. Create JMS Servers, persistent stores and destinations for OSB JAX-RPC on the new managed server.

    Note:

    WebLogic Advanced Web Services for JAX-RPC Extension uses regular (non-distributed) destinations to ensure that a locally processed request on a service gets enqueued only to a local member.

    1. Use the Oracle WebLogic Server Administration Console to create a new persistent store for the new WseeJMSServer and name it, for example, Wsee_rpc_JMSFileStore_N. Specify the path for the store. This should be a directory on shared storage as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment."

      Note:

      This directory must exist before the managed server is started or the start operation fails.

      ORACLE_BASE/admin/DOMAIN_NAME/cluster_name/jms/Wsee_rpc_JMSFileStore_N
      
    2. Create a new JMS Server for Oracle Service Bus JAX-RPC, for example, OSB_rpc_JMSServer_N. Use the Wsee_rpc_JMSFileStore_N for this JMSServer. Target the OSB_rpc_JMSServer_N Server to the recently created Managed Server (WLS_OSBn).

    3. Update the WseeJMSModule Oracle Service Bus JMS Module with destinations and the recently created Oracle Service Bus JMS Server:

      Expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears.

      Click WseeJmsModule (a hyperlink in the Names column of the table). The Settings page for WseeJmsModule appears.

      Follow steps 4 through 12 to complete this step; a WLST sketch of these substeps appears after the list.

    4. In the Change Center, click Lock & Edit and click New.

    5. Select Queue and click Next.

    6. Enter DefaultCallbackQueue-WseeJmsServer_auto_n as the name for the queue.

    7. Enter weblogic.wsee.DefaultCallbackQueue-WseeJmsServer_auto_n as the JNDI name and click Next.

    8. Click Create a New Subdeployment.

    9. Accept the default name and click OK.

    10. Select OSB_rpc_JMSServer_n as the target and click Finish.

    11. Activate the changes.

    12. Update the local JNDI name for the destination:

      In the Change Center, click Lock & Edit.

      In the Settings page for WseeJmsModule, click the DefaultCallbackQueue-WseeJmsServer_auto_n destination.

      In the general Configuration tab, click Advanced.

      Update the local JNDI name to weblogic.wsee.DefaultCallbackQueue.

      Activate the changes.
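
    As referenced above, substeps 4 through 12 can also be scripted with WLST. A minimal sketch; the subdeployment name WseeJmsSubDn is a placeholder (the console generates a default name), and the queue and JMS server names are the example values used above:

      import jarray
      from javax.management import ObjectName

      edit()
      startEdit()
      # Subdeployment targeted to the new JAX-RPC JMS server
      cd('/JMSSystemResources/WseeJmsModule')
      cmo.createSubDeployment('WseeJmsSubDn')
      cd('/JMSSystemResources/WseeJmsModule/SubDeployments/WseeJmsSubDn')
      set('Targets', jarray.array([ObjectName('com.bea:Name=OSB_rpc_JMSServer_N,Type=JMSServer')], ObjectName))
      # Queue with its JNDI names, assigned to the subdeployment
      cd('/JMSSystemResources/WseeJmsModule/JMSResource/WseeJmsModule')
      queue = cmo.createQueue('DefaultCallbackQueue-WseeJmsServer_auto_n')
      queue.setJNDIName('weblogic.wsee.DefaultCallbackQueue-WseeJmsServer_auto_n')
      queue.setLocalJNDIName('weblogic.wsee.DefaultCallbackQueue')
      queue.setSubDeploymentName('WseeJmsSubDn')
      save()
      activate()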

  11. Create a new SAF agent and target it to the newly added managed server:

    In the Oracle WebLogic Server Administration Console, expand Services, Messaging and then Store-and-Forward Agents, and add a new SAF agent, ReliableWseeSAFAgent_auto_N.

    Select persistent store Wsee_rpc_JMSFileStore_N (persistent store created for Oracle Service Bus JAX-RPC). Target the SAF Agent to the new managed server and activate changes.
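
    A minimal WLST sketch of this step (the agent, store, and server names are the example values used above):

      import jarray
      from javax.management import ObjectName

      edit()
      startEdit()
      cd('/')
      cmo.createSAFAgent('ReliableWseeSAFAgent_auto_N')
      cd('/SAFAgents/ReliableWseeSAFAgent_auto_N')
      # Back the agent with the JAX-RPC persistent store and target the new server
      cmo.setStore(getMBean('/FileStores/Wsee_rpc_JMSFileStore_N'))
      set('Targets', jarray.array([ObjectName('com.bea:Name=WLS_OSBn,Type=Server')], ObjectName))
      save()
      activate()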

  12. If your Oracle Service Bus configuration includes one or more business services that use JMS request/response functionality, follow this procedure using the Oracle Service Bus Console after adding the new managed server to the cluster:

    1. In the Change Center, click Create to create a session.

    2. Using the Project Explorer, locate and select a business service that uses JMS request/response.

      Business services of this type display Messaging Service as their Service Type.

    3. At the bottom of the View Details page, click Edit.

    4. If there is a cluster address in the endpoint URI, add the new server to the cluster address.

    5. On the Edit a Business Service - Summary page, click Save.

    6. Repeat the previous steps for each remaining business service that uses JMS request/response.

    7. In the Change Center, click Activate.

    8. Restart the managed server.

    9. Restart the Administration Server.

      The business services are now configured for operation in the extended domain.

      Note:

      For business services that use a JMS MessageID correlation scheme, edit the connection factory settings to add an entry to the table mapping managed servers to queues. For information on how to configure queues and topic destinations, see "JMS Server Targeting" in Oracle Fusion Middleware Configuring and Managing JMS for Oracle WebLogic Server.

  13. If your Oracle Service Bus configuration includes one or more proxy services that use JMS endpoints with cluster addresses, perform the following procedure using the Oracle Service Bus Console after adding the new managed server to the cluster:

    1. In the Change Center, click Create to create a session.

    2. Using the Project Explorer, locate and select a proxy service that uses JMS endpoints with cluster addresses.

    3. At the bottom of the View Details page, click Edit.

    4. If there is a cluster address in the endpoint URI, add the new server to the cluster address.

    5. On the Edit a Proxy Service - Summary page, click Save.

    6. Repeat the previous steps for each remaining proxy service that uses JMS endpoints with cluster addresses.

    7. In the Change Center, click Activate.

    8. Restart the managed server.

    The proxy services are now configured for operation in the extended domain.

  14. Update the Oracle Service Bus result cache Coherence configuration for the new server:

    1. Log into Oracle WebLogic Server Administration Console. In the Change Center, click Lock & Edit.

    2. In the Domain Structure window, expand the Environment node.

    3. Click Servers.

      The Summary of Servers page appears.

    4. Click the name of the server (a hyperlink) in the Name column of the table.

      The settings page for the selected server appears.

    5. Click the Server Start tab.

    6. Click Advanced.

    7. Enter the following for WLS_OSBn (on a single line, without carriage returns; a WLST sketch of this setting appears after these steps):

      -DOSB.coherence.localhost=SOAHOSTnVHN1 -DOSB.coherence.localport=7890 
      -DOSB.coherence.wka1=SOAHOST1VHN1 -DOSB.coherence.wka1.port=7890 
      -DOSB.coherence.wka2=SOAHOST2VHN1 -DOSB.coherence.wka2.port=7890
      

      Note:

      For the previous configuration, servers WLS_OSB1 and WLS_OSB2 must be running when WLS_OSBn starts. This allows WLS_OSBn to join the Coherence cluster started by either WLS_OSB1 or WLS_OSB2 using the WKA addresses specified. In addition, make sure WLS_OSB1 and WLS_OSB2 are started before WLS_OSBn when starting all three servers. For a configuration where the order in which the servers are started does not matter, add the host and port for WLS_OSBn as a WKA for WLS_OSB1 and WLS_OSB2, and also add WLS_OSBn as a WKA for itself.

    8. Save and activate the changes.

      Restart the Oracle Service Bus servers.
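
    As referenced in substep 7, the Server Start arguments can also be set with WLST. A minimal sketch, assuming an edit session against the Administration Server and using the argument values from substep 7:

      edit()
      startEdit()
      # Equivalent of Server Start > Arguments for WLS_OSBn (a single line)
      cd('/Servers/WLS_OSBn/ServerStart/WLS_OSBn')
      cmo.setArguments('-DOSB.coherence.localhost=SOAHOSTnVHN1 -DOSB.coherence.localport=7890 '
                       '-DOSB.coherence.wka1=SOAHOST1VHN1 -DOSB.coherence.wka1.port=7890 '
                       '-DOSB.coherence.wka2=SOAHOST2VHN1 -DOSB.coherence.wka2.port=7890')
      save()
      activate()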

  15. Run the pack command on SOAHOST1 to create a template pack as follows:

    cd ORACLE_COMMON_HOME/common/bin
    
    ./pack.sh -managed=true -domain=MW_HOME/user_projects/domains/soadomain/
    -template=soadomaintemplateScale.jar -template_name=soa_domain_templateScale
    

    Run the following command on SOAHOST1 to copy the template file created to SOAHOSTn:

    scp soadomaintemplateScale.jar oracle@SOAHOSTN:/ORACLE_BASE/product/fmw/soa/common/bin
    

    Run the unpack command on SOAHOSTN to unpack the template in the managed server domain directory as follows:

    cd ORACLE_BASE/product/fmw/soa/common/bin
     
    ./unpack.sh -domain=ORACLE_BASE/product/fmw/user_projects/domains/soadomain/ -template=soadomaintemplateScale.jar
    

    Note:

    The configuration steps provided in this enterprise deployment topology are documented with the assumption that a local (per node) domain directory is used for each managed server.

  16. Configure a TX persistent store for the new server in a location visible from other nodes, as indicated in the recommendations about shared storage:

    1. From the Administration Console, select the server name, and then the Services tab.

    2. Under Default Store, in Directory, enter the path to the folder where you want the default persistent store to store its data files.

  17. Disable host name verification for the new managed server.

    Before starting and verifying the WLS_OSBn managed server, disable host name verification. You can re-enable it after you have configured server certificates for the communication between the Oracle WebLogic Administration Server and the Node Manager in SOAHOSTn. If you have already disabled host name verification for the source server from which the new server has been cloned, you can skip this procedure (the hostname verification setting is propagated to the cloned server).

    To disable host name verification:

    1. In the Oracle Enterprise Manager Console, select Oracle WebLogic Server Administration Console.

    2. Expand the Environment node in the Domain Structure window.

    3. Click Servers.

      The Summary of Servers page appears.

    4. Select WLS_OSBn in the Names column of the table.

      The Settings page for server appears.

    5. Click the SSL tab.

    6. Click Advanced.

    7. Set Hostname Verification to None and click Save.

  18. Start the Node Manager on the new node using the installation in shared storage from the existing nodes. Pass the host name of the new node as a parameter:

    SOAHOSTN> WL_HOME/server/bin/startNodeManager new_node_ip
    
  19. Start and test the new managed server from the Oracle WebLogic Server Administration Console:

    1. Shut down all the existing managed servers in the cluster.

    2. Ensure that the newly created managed server, WLS_OSBn, is running. Access the application on the newly created managed server:

      http://vip:port/sbinspection.wsil
      

      The application should be functional.

  20. Configure server migration for the new managed server.

    Note:

    Since this new node uses an existing shared storage installation, it is already using a Node Manager and an environment configured for server migration that includes the netmask, interface, and wlsifconfig script superuser privileges. The floating IP for the new Oracle Service Bus managed server is already present in the new node.

    To configure server migration:

    1. Log into the Administration Console.

    2. In the left pane, expand Environment and select Servers.

    3. Select the server (represented as hyperlink) for which you want to configure migration from the Names column of the table.

      The Settings page for that server appears.

    4. Click the Migration tab.

    5. In the Available field, in the Migration Configuration section, select the machines to which the server can be migrated and click the right arrow.

      For example, for new managed servers on SOAHOST1, which is already running WLS_OSB1, select SOAHOST2. For new managed servers on SOAHOST2, which is already running WLS_OSB2, select SOAHOST1.

      Note:

      Specify the least-loaded machine as the migration target for the new server. Complete the required capacity planning so that this node has enough available resources to sustain an additional managed server.

    6. Select the Automatic Server Migration Enabled option and click Save.

      This enables the Node Manager to start a failed server on the target node automatically.

    7. Restart the Administration Server, managed servers, and Node Manager.

  21. Test server migration for this new server from the node where you added the new server:

    1. Abruptly stop the WLS_OSBn managed server by running the following command on the PID (process ID) of the managed server:

      kill -9 pid
      

      You can identify the PID of the node using the following command:

      ps -ef | grep WLS_OSBn
      

      Note:

      For Windows, you can terminate the managed server using the taskkill command. For example:

      taskkill /f /pid pid
      

      Where pid is the process ID of the managed server.

      You can determine the process ID of the WLS_OSBn managed server using the following command:

      MW_HOME\jrockit_160_20_D1.0.1-2124\bin\jps -l -v
      
    2. In the Node Manager Console you can view a message indicating that WLS_OSBn's floating IP has been disabled.

    3. Wait for the Node Manager to try a second restart of WLS_OSBn. Node Manager waits for a fence period of 30 seconds before trying this restart.

    4. Once Node Manager restarts the server, stop it again.

      Now Node Manager logs a message indicating that the server will not be restarted again locally.

      Note:

      After a server is migrated, to fail it back to its original node/machine, stop the managed server from the Oracle WebLogic Administration Console and then start it again. The appropriate Node Manager will start the managed server on the machine to which it was originally assigned.

16.7 Performing Backups and Recoveries in the SOA Enterprise Deployments

For information about backing up the environment, see "Backing Up Your Environment" in the Oracle Fusion Middleware Administrator's Guide. For information about recovering your information, see "Recovering Your Environment" in the Oracle Fusion Middleware Administrator's Guide.

Table 16-1 lists the static artifacts to back up in the 11g SOA enterprise deployment.

Table 16-1 Static Artifacts to Back Up in the 11g SOA Enterprise Deployment

  • ORACLE HOME (DB) — Host: CUSTDBHOST1 and CUSTDBHOST2. Location: user-defined. Tier: Data Tier.

  • MW HOME (OHS) — Host: WEBHOST1 and WEBHOST2. Location: ORACLE_HOME/fmw. Tier: Web Tier.

  • MW HOME (this includes the SOA home as well) — Host: SOAHOST1 and SOAHOST2. Location: MW_HOME (the SOA home, ORACLE_HOME, is also under MW_HOME). Tier: Application Tier.

  • Installation-related files — Location: OraInventory, user_home/bea/beahomelist, oraInst.loc, oratab. Tier: N/A.


Table 16-2 lists the runtime artifacts to back up in the 11g SOA enterprise deployment.

Table 16-2 Run-Time Artifacts to Back Up in the 11g SOA Enterprise Deployment

  • DOMAIN HOME — Host: SOAHOST1 and SOAHOST2. Location: ORACLE_BASE/admin/domain_name/mserver/domain_name. Tier: Application Tier.

  • Application artifacts (EAR and WAR files) — Host: SOAHOST1 and SOAHOST2. Location: find the application artifacts by viewing all of the deployments through the Administration Console. Tier: Application Tier.

  • OHS instance home — Host: WEBHOST1 and WEBHOST2. Location: ORACLE_BASE/admin/instance_name. Tier: Web Tier.

  • Oracle RAC databases — Host: CUSTDBHOST1 and CUSTDBHOST2. Location: user-defined. Tier: Data Tier.



Note:

ORACLE_HOME should be backed up if any changes are made to the XEngine configuration that is part of your B2B setup. These files are located in the following directory:

ORACLE_HOME/soa/thirdparty/edifecs/XEngine

To back up MW_HOME (which includes ORACLE_HOME):

tar -cvpf fmwhomeback.tar MW_HOME

16.8 Preventing Timeouts for SQLNet Connections

Much of the Enterprise Deployment production deployment involves firewalls. Because database connections are made across firewalls, Oracle recommends that the firewall be configured so that the database connection is not timed out. For Oracle Real Application Clusters (Oracle RAC), the database connections are made on Oracle RAC VIPs and the database listener port. You must configure the firewall to not time out such connections. If such a configuration is not possible, set the SQLNET.EXPIRE_TIME=n parameter in the sqlnet.ora file, located in the following directory:

ORACLE_HOME/network/admin

The n indicates the time in minutes. Set this value to less than the known value of the timeout for the network device (that is, a firewall). For Oracle RAC, set this parameter in all of the Oracle home directories.
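
For example, for a firewall that drops idle connections after 15 minutes (an assumed value; check your network device), an entry such as the following in sqlnet.ora sends a probe every 10 minutes, which keeps the connection from appearing idle:

SQLNET.EXPIRE_TIME=10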

16.9 Recovering Failed BPEL and Mediator Instances

This section describes how to check and recover failed instances in BPEL, Mediator, and other service engines.

Note:

For the steps that require you to run SQL statements, connect to the database as the soainfra schema user.

16.10 Configuring Web Services to Prevent Denial of Service and Recursive Node Attacks

To protect Web services against denial of service and recursive node attacks, configure SCABindingProperties.xml and oracle-webservices.xml.

Configuring SCABindingProperties.xml

To prevent denial of service attacks and recursive node attacks, set the envelope size and nesting limits in SCABindingProperties.xml as illustrated in Example 16-1.

Example 16-1 Configuring Envelope Size and Nesting Limits in SCABindingProperties.xml

<bindingType type="ws">
        <serviceBinding>
                <bindingProperty>
                        <name>request-envelope-max-kilobytes</name>
                        <type>xs:integer</type>
                        <defaultValue>-1</defaultValue>
                </bindingProperty>
                <bindingProperty>
                        <name>request-envelope-nest-level</name>
                        <type>xs:integer</type>
                        <defaultValue>-1</defaultValue>
                </bindingProperty>
        </serviceBinding>
</bindingType>

Configuring oracle-webservices.xml

For standalone Web services, configure the envelope size and nesting limits in oracle-webservices.xml. For example:

<request-envelope-limits kilobytes="4" nest-level="6" />

Note:

Setting the envelope and nesting limits to extremely high values, or not setting them at all, leaves the system vulnerable to denial of service attacks.

16.11 Oracle Business Activity Monitoring (BAM) Configuration Properties

To increase or decrease the number of times BAM retries the in-flight transactions after an Oracle RAC failover, change the MaxDBNodeFailoverRetries setting from its default of 5 times to another value. However, it is a best practice to maintain the default settings for UseDBFailover and MaxDBNodeFailoverRetries. To disable BAM's Oracle RAC failover retry support, set UseDBFailover to false. (The default value for this setting is true.) For information on using these settings, see "Oracle BAM Configuration Property Reference" in the Oracle Fusion Middleware Administrator's Guide for Oracle SOA Suite.

16.12 Using Shared Storage for Deployment Plans and SOA Infrastructure Applications Updates

When redeploying a SOA infrastructure application or resource adapter within the SOA cluster, the deployment plan, along with the application bits, must be accessible to all servers in the cluster. SOA applications and resource adapters are installed using the nostage deployment mode. Because the Administration Server does not copy the archive files from their source location when the nostage deployment mode is selected, each server must be able to access the same deployment plan. Use the following location for the deployment plans and applications:

ORACLE_BASE/admin/domain_name/cluster_name/dp

This directory must be accessible from all nodes in the Enterprise Deployment topology, as recommended in Chapter 4, "Preparing the File System for an Enterprise Deployment."

16.13 Troubleshooting the Topology in an Enterprise Deployment

This section describes possible issues with the SOA enterprise deployment and suggested solutions.


16.13.1 Access to BAM Results in HTTP Error 404

Problem: Accessing the BAM application results in the HTTP 404 error ("Not Found"). Starting the BAM server before the start of the database instance where BAM schemas reside may be the cause of this error.

Solution: Shut down the BAM instance and restart it after ensuring that the database is already up.

16.13.2 Page Not Found When Accessing soa-infra Application Through Load Balancer

Problem: You receive a 404 "page not found" message in the Web browser when you try to access the soa-infra application using the load balancer address. The error is intermittent and SOA Servers appear as Running in the WLS Administration Console.

Solution: Even when the SOA managed servers are up and running, some of the applications deployed to them may be in Admin, Prepared, or other states different from Active. The soa-infra application may be unavailable while the SOA server is running. Check the Deployments page in the Administration Console to verify the status of the soa-infra application; it should be in the Active state. Check the SOA server's output log for errors pertaining to the soa-infra application and try to start it from the Deployments page in the Administration Console.

16.13.3 Error While Retrieving Oracle B2B Document Definitions

Problem: You receive an error when trying to retrieve a document definition XSD from Oracle B2B. B2B resides in a cluster and is accessed through a load balancer. The B2B console reports the following:

An error occured while loading the document definitions. java.lang.IllegalArgumentException: Cluster address must be set when clustering is enabled.

Solution: Set the frontend HTTP host and port for the Oracle WebLogic cluster where Oracle B2B resides. Set the front-end address for the SOA cluster as follows (a WLST sketch follows these steps):

  1. In the WebLogic Server Administration Console, in the Change Center section, click Lock & Edit.

  2. In the left pane, choose Environment in the Domain Structure window and then choose Clusters. The Summary of Clusters page appears.

  3. Select the SOA_Cluster cluster.

  4. Select HTTP.

  5. Set the values for the following and click Save:

    • Frontend Host: soa.mycompany.com

    • Frontend HTTPS Port: 443

    • Frontend HTTP Port: 80

  6. To activate the changes, click Activate Changes in the Change Center section of the Administration Console.

  7. Restart the servers to make the Frontend Host directive in the cluster effective.
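
As referenced above, the front-end address can also be set with WLST. A minimal sketch, assuming an edit session against the Administration Server and the values from step 5:

  edit()
  startEdit()
  cd('/Clusters/SOA_Cluster')
  # Equivalent of the HTTP tab settings in step 5
  cmo.setFrontendHost('soa.mycompany.com')
  cmo.setFrontendHTTPPort(80)
  cmo.setFrontendHTTPSPort(443)
  save()
  activate()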

16.13.4 Soa-infra Application Fails to Start Due to Deployment Framework Issues (Coherence)

Problem: The soa-infra application fails to start after changes to the Coherence configuration for deployment have been applied. The SOA server output log reports the following:

Cluster communication initialization failed. If you are using multicast, Please make sure multicast is enabled on your network and that there is no interference on the address in use. Please see the documentation for more details.

Solutions:

  1. When using multicast instead of unicast for cluster deployments of SOA composites, a message similar to the above may appear if a multicast conflict arises when starting the soa-infra application (that is, starting the managed server on which SOA runs). These messages, which occur when Oracle Coherence throws a runtime exception, also include the details of the exception itself. If such a message appears, check the multicast configuration in your network. Verify that you can ping multicast addresses. In addition, check for other clusters that may have the same multicast address but have a different cluster name in your network, as this may cause a conflict that prevents soa-infra from starting. If multicast is not enabled in your network, you can change the deployment framework to use unicast as described in Oracle Coherence Developer's Guide for Oracle Coherence.

  2. When entering the well-known address list for unicast (in the server start parameters), make sure that the addresses entered for the localhost and clustered servers are correct. Error messages like:

    oracle.integration.platform.blocks.deploy.CompositeDeploymentCoordinatorMessages errorUnableToStartCoherence
    

    are reported in the server's output log if any of the addresses is not resolved correctly.

16.13.5 Incomplete Policy Migration After Failed Restart of SOA Server

Problem: The SOA server fails to start through the Administration Console before the Node Manager property startScriptEnabled=true is set. The server does not come up after the property is set. The SOA server output log reports the following:

SEVERE: <.> Unable to Encrypt data
Unable to Encrypt data.
Check installation/post-installation steps for errors. Check for errors during SOA server startup.

ORABPEL-35010
 .
Unable to Encrypt data.
Unable to Encrypt data.
Check installation/post-installation steps for errors. Check for errors
 during SOA server startup.
 .
 at oracle.bpel.services.common.util.EncryptionService.encrypt(EncryptionService.java:56)
...

Solution: Edit the <jazn-policy> element in the system-jazn-data.xml file to grant permission to bpm-services.jar:

<grant>
  <grantee>
    <codesource>
      <url>file:${oracle.home}/soa/modules/oracle.soa.workflow_11.1.1/bpm-services.jar</url>
    </codesource>
  </grantee>
  <permissions>
    <permission>
      <class>java.security.AllPermission</class>
    </permission>
  </permissions>
</grant>

16.13.6 SOA, BAM, or WSM Servers Fail to Start Due to Maximum Number of Processes Available in Database

Problem: A SOA, WSM, or BAM server fails to start. The domain has been extended for new types of managed servers (for example, SOA extended for BAM) or the system has been scaled up (new servers of the same type added). The SOA/BAM or WSM server output log reports the following:

<Warning> <JDBC> <BEA-001129> <Received exception while creating connection for pool "SOADataSource-rac0": Listener refused the connection with the following error:

ORA-12516, TNS:listener could not find available handler with matching protocol stack >

Solution: Verify the number of processes in the database and adjust accordingly. As the SYS user, issue the SHOW PARAMETER command:

SQL> SHOW PARAMETER processes

Set the initialization parameter using the following command:

SQL> ALTER SYSTEM SET processes=300 SCOPE=SPFILE;

Restart the database.

Note:

The method that you use to change a parameter's value depends on whether the parameter is static or dynamic, and on whether your database uses a parameter file or a server parameter file. See the Oracle Database Administrator's Guide for details on parameter files, server parameter files, and how to change parameter values.

16.13.7 Administration Server Fails to Start After a Manual Failover

Problem: The Administration Server fails to start after a failure when you have performed a manual failover to another node. The Administration Server output log reports the following:

<Feb 19, 2009 3:43:05 AM PST> <Warning> <EmbeddedLDAP> <BEA-171520> <Could not obtain an exclusive lock for directory: ORACLE_BASE/admin/soadomain/aserver/soadomain/servers/AdminServer/data/ldap/ldapfiles. Waiting for 10 seconds and then retrying in case existing WebLogic Server is still shutting down.>

Solution: Remove the EmbeddedLDAP.lok file from the following directory:

ORACLE_BASE/admin/domain_name/aserver/domain_name/servers/AdminServer/data/ldap/ldapfiles/

16.13.8 Error While Activating Changes in Administration Console

Problem: Activation of changes in Administration Console fails after you have made changes to a server's start configuration. The Administration Console reports the following when clicking Activate Changes:

An error occurred during activation of changes, please see the log for details.
 [Management:141190]The commit phase of the configuration update failed with an exception:
In production mode, it's not allowed to set a clear text value to the property: PasswordEncrypted of ServerStartMBean

Solution: Either provide the username and password information in the server start configuration in the Administration Console for the specific server whose configuration was being changed, or remove the <password-encrypted></password-encrypted> entry from the config.xml file (this requires a restart of the Administration Server).

16.13.9 SOA/BAM Server Not Failed Over After Server Migration

Problem: After the maximum restart attempts by the local Node Manager is reached, the Node Manager in the failover node tries to restart the server, but the server does not come up. The server appears to have been failed over, as reported in Node Manager's output. The VIP used by the SOA server is not enabled in the failover node after Node Manager tries to migrate it (ifconfig in the failover node does not report the VIP on any interface). Executing the command "sudo ifconfig $INTERFACE $ADDRESS $NETMASK" does not enable the IP in the failover node.

Solution: The rights and configuration for sudo execution should not prompt for a password. Verify the configuration of sudo with your system administrator so that sudo works without a password prompt.

16.13.10 SOA/BAM Server Not Reachable From Browser After Server Migration

Problem: Server migration is working (the SOA/BAM server is restarted in the failed-over node), but the <Virtual Hostname>:8001/soa-infra URL is not reachable in the Web browser. The server has been "killed" in its original host, and Node Manager in the failover node reports that the VIP has been migrated and the server started. The VIP used by the SOA server cannot be pinged from the client's node (that is, the node where the browser is being used).

Solution: Update the nodemanager.properties file to include the MACBroadcast setting, or execute a manual arping command:

/sbin/arping -b -q -c 3 -A -I $INTERFACE $ADDRESS > $NullDevice 2>&1

Where $INTERFACE is the network interface where the Virtual IP is enabled and $ADDRESS is the virtual IP address.

16.13.11 SOA Server Stops Responding after Being Active and Stressed for a Period of Time

Problem: WLS_SOA starts properly and functions for a period of time, but becomes unresponsive after running an application that uses the Oracle File Adapter or Oracle FTP Adapter. The log file for the server reports the following:

<Error> <Server> <BEA-002606> <Unable to create
a server socket for listening on channel "Default". The address
X.X.X.X might be incorrect or another process is using port 8001:
@ java.net.SocketException: Too many open files.>

Solution: For composites with Oracle File and FTP Adapters, which are designed to consume a very large number of concurrent messages, set the operating system's maximum number of open files to a greater value. For example, to set the limit to 8192 on Linux, use the ulimit -n 8192 command. Adjust the value based on the expected system load.

16.13.12 Exceptions While Performing Deploy/Purge/Import Operations in the B2B Console

Problem: Deployment of new agreements or purging/importing new metadata fails, and the output log for the WLS_SOA server reports "[java] MDS-02202: Content of the metadata object" for deployment, or "postTransfer: MDS-00521: error while reading document..." for purge/import.

Solution: This is caused by the timing and load balancing mechanisms involved in the operation. The exceptions are intermittent, and a retry of the operation typically succeeds. No cleanup or additional steps are required.

16.13.13 OAM Configuration Tool Does Not Remove URLs

Problem: The OAM Configuration Tool has been used and a set of URLs was added to the policies in Oracle Access Manager. One of multiple URLs had a typo. Executing the OAM Configuration Tool again with the correct URLs completes successfully; however, when accessing Policy Manager, the incorrect URL is still there.

Solution: The OAM Configuration Tool only adds new URLs to existing policies when executed with the same app_domain name. To remove a URL, use the Policy Manager Console in OAM. Log on to the Access Administration site for OAM, click on My Policy Domains, click on the created policy domain (SOA_EDG), then on the Resources tab, and remove the incorrect URLs.

16.13.14 Redirecting of Users to Login Screen After Activating Changes in Administration Console

Problem: After you configure OHS and the load balancer to access the Oracle WebLogic Administration Console, activating some changes causes a redirection to the login screen for the Administration Console.

Solution: This is the result of the console attempting to follow changes to port, channel, and security settings as a user makes these changes. For certain changes, the console may redirect to the Administration Server's listen address. Activation is completed regardless of the redirection, and you do not have to log in again. Use the following URL to directly access the home page for the Administration Console:

soa.mycompany.com/console/console.portal

Note:

This problem does not occur if you disable tracking of the changes described in this section.

16.13.15 Redirecting of Users to Administration Console's Home Page After Activating Changes to OAM

Problem: After configuring OAM, some activation changes cause the redirection to the Administration Console's home page (instead of the context menu where the activation was performed).

Solution: This is expected when OAM SSO is configured and is the result of the redirections performed by the Administration Server. Activation is completed regardless of the redirection. If required, users may "manually" navigate again to the desired context menu.

16.13.16 Configured JOC Port Already in Use

Problem: Attempts to start a Managed Server that uses the Java Object Cache, such as OWSM or WebCenter Spaces Managed Servers, fail. The following errors appear in the logs:

J2EE JOC-058 distributed cache initialization failure
J2EE JOC-043 base exception:
J2EE JOC-803 unexpected EOF during read.

Solution: Another process is using the same port that JOC is attempting to obtain. Either stop that process, or reconfigure JOC for this cluster to use another port in the recommended port range.

16.13.17 SOA or BAM Server Fails to Start

The SOA or BAM server fails to start for the first time and reports a parsing failure in config.xml.

Problem: A server that is being started for the first time using Node Manager fails to start. A message such as the following appears in the server's output log:

<Critical> <WebLogicServer> <eicfdcn35> <wls_server1> <main> <<WLS Kernel>> <> <> <1263329692528> <BEA-000386> <Server subsystem failed. Reason: weblogic.security.SecurityInitializationException: Authentication denied: Boot identity not valid; The user name and/or password from the boot identity file (boot.properties) is not valid. The boot identity may have been changed since the boot identity file was created. Please edit and update the boot identity file with the proper values of username and password. The first time the updated boot identity file is used to start the server, these new values are encrypted.

The Managed Server is trying to start for the first time in MSI (managed server independence) mode and has not been able to retrieve the appropriate configuration for the first start. The Managed Server must be able to communicate with the Administration Server on its first startup.

Solution: Make sure communication between the Administration Server's listen address and the Managed Server's listen address is possible (ping the Administration Server's listen address from the Managed Server's node, and telnet to the Administration Server's listen address and port). Once communication is enabled, pack and unpack the domain again to the new node, or (if other servers are already running correctly in the same domain directory) delete the following directory and restart the server:

ORACLE_BASE/admin/domain_name/mserver/domain_name/servers/server_name/data/nodemanager
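
For example, to verify connectivity from the Managed Server's node, you can use standard networking tools. This is a sketch that assumes the Administration Server listens on the virtual host name ADMINVHN and port 7001; substitute your actual listen address and port:

# Verify that the Administration Server's listen address is reachable
ping ADMINVHN
# Verify that the Administration Server's listen port accepts connections
telnet ADMINVHN 7001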

16.13.18 Configuring JOC for B2B Delivery Channel Updates

The MDS change notification mechanism in a SOA cluster propagates updates with a default frequency of 30 seconds. For faster propagation of configuration changes, such as delivery channels for a B2B agreement, and for cases where these changes are frequent, you can use the Java Object Cache (JOC). Configure the distributed Java Object Cache using the configure-joc.py script, located in the following directory:

MW_HOME/oracle_common/bin

Use this Python script to configure JOC in the managed servers for quick notification of changes made in B2B delivery channels. The script runs in WLST online mode. The Administration Server must be up and running.

When configuring JOC ports for Oracle products, use ports in the 9988 to 9998 range.

Note:

After configuring the Java Object Cache using the WLST commands or the configure-joc.py script, restart all affected managed servers for the configurations to take effect.

To configure the distributed Java Object Cache for Oracle SOA Suite Servers:

  1. Connect to the Administration Server using the command-line Oracle WebLogic Scripting Tool, wlst.sh, in the following directory:

    MW_HOME/oracle_common/common/bin
    

    To connect using wlst.sh, launch the tool and then run the connect() command at the WLST prompt:

    $ ./wlst.sh
    wls:/offline> connect()
    

    Enter the Oracle WebLogic Administration user name and password when prompted.

  2. After connecting to the Administration Server using WLST, start the script using the execfile command. For example:

    wls:/mydomain/serverConfig>execfile('MW_HOME/oracle_common/bin/configure-joc.py')
    
  3. Configure JOC for all the managed servers in a given cluster.

    Enter y when the script prompts you whether you want to specify a cluster name, and specify the cluster name and discover port when prompted. This discovers all the managed servers for the given cluster and configures JOC. The discover port is common to the entire JOC configuration across the cluster.

    For example (for Oracle Web Services Manager, specify the WSM cluster's name and port instead):

    Do you want to specify a cluster name (y/n) <y>y
    Enter Cluster Name : SOA_Cluster
    Enter Discover Port : 9992
    

The following is an example of running configure-joc.py for high availability environments:

execfile('MW_HOME/oracle_common/bin/configure-joc.py')
.
Enter Hostnames (eg host1,host2) : SOAHOST1VHN1,SOAHOST2VHN1
.
Do you want to specify a cluster name (y/n) <y>y
.
Enter Cluster Name : SOA_Cluster
.
Enter Discover Port : 9992
.
Enter Distribute Mode (true|false) <true> : true
.
Do you want to exclude any server(s) from JOC configuration (y/n) <n> n

You can also use the script for the following JOC configurations:

  • Configure JOC for all specified managed servers.

    Enter n when the script prompts whether you want to specify a cluster name, and specify the managed servers and discover ports when prompted. For example:

    Do you want to specify a cluster name (y/n) <y>n
    Enter Managed Server and Discover Port (WLS_WSM1:9998, WLS_WSM2:9998) : WLS_WSM1:9992,WLS_WSM2:9992
    
  • Exclude JOC configuration for some managed servers.

    The script allows you to specify a list of managed servers for which the JOC configuration DistributeMode will be set to 'false'. Enter y when the script prompts whether you want to exclude any servers from the JOC configuration, and enter the managed server names to be excluded when prompted. For example:

    Do you want to exclude any server(s) from JOC configuration (y/n) <n>y
    Exclude Managed Server List (eg Server1,Server2) : WLS_WSM1,WLS_WSM3
    
  • Disable the distribution mode for all managed servers.

    The script allows you to disable distribution to all the managed servers for a specified cluster. Specify 'false' when the script prompts for the distribution mode; by default, the distribution mode is set to 'true'. (A sample prompt is shown after this list.)

  • Modify the javacache.xml file to use the B2B server's VHN as the listen address:

    Edit the javacache.xml file for the server in question. This file is located in the following directory:

    DOMAIN_HOME/aserver/soaedg_domain/config/fmwconfig/servers/server_name
    

    Add the listener-address and distributor-location elements as follows:

    ...
    <packet-distributor enable-router="false" startable="true" dedicated-coordinator="false">
        <listener-address host="SOAHOST1VHN1" port="9992" />
        <distributor-location host="SOAHOST1VHN1" port="9992" ssl="true"/>
    </packet-distributor>
    ...
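
As noted in the distribution-mode item above, disabling distribution at the script prompt looks like the following:

Enter Distribute Mode (true|false) <true> : false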
    

Verify JOC configuration using the CacheWatcher utility. See Oracle Fusion Middleware High Availability Guide.
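
The following is a hedged sketch of invoking CacheWatcher; the class name, jar locations, and -config flag shown here are assumptions drawn from the High Availability Guide, so verify them there before use:

# Watch the cache to confirm members have joined (paths are assumptions; verify in the HA Guide)
java -classpath MW_HOME/oracle_common/modules/oracle.javacache_11.1.1/cache.jar:MW_HOME/oracle_common/modules/oracle.odl_11.1.1/ojdl.jar oracle.ias.cache.CacheUtil watch -config=DOMAIN_HOME/aserver/soaedg_domain/config/fmwconfig/servers/server_name/javacache.xml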

You can configure the Java Object Cache (JOC) using the HA Power Tools tab in the Oracle WebLogic Administration Console as described in the Oracle Fusion Middleware High Availability Guide.

16.13.19 SOA Coherence Cluster Conflicts when Multiple Clusters Reside in the Same Node

Problem: soa-infra fails to come up when multiple SOA clusters reside on the same nodes. Messages such as the following appear in the server's .out file:

<Error> <Coherence> <BEA-000000> <Oracle Coherence GE 3.6.0.4 <Error> (thread=Cluster, member=1): This senior Member(…) appears to have been disconnected from another senior Member…stopping cluster service.>

Solution: When a Coherence member restarts, it attempts to bind to the port configured in its localport setting. If this port is not available, it increments the port number (by two) and attempts to connect to that port. If multiple SOA clusters use similar port ranges for Coherence, it is possible for a member to join a cluster with a different well known address (WKA) list, causing conflicts and preventing the soa-infra application from starting. There are several ways to resolve this issue (a combined example of the relevant system properties appears after the list):

  • Set up a separate port range for each cluster, instead of relying on the default behavior of incrementing the cluster port by two. For example, use 8000-8090 for cluster 1 and 8091-8180 for cluster 2. This is implicit in the model recommended in this guide (see Table 3-2), where a different port range is used for each Coherence cluster.

  • Disable port auto-adjust to force the members to use their configured localport. This can be done with the system property tangosol.coherence.localport.adjust, for example: -Dtangosol.coherence.localport.adjust=false.

  • Configure a unique cluster name for each cluster. This can be done using the system property tangosol.coherence.cluster. For example:

    -Dtangosol.coherence.cluster=SOA_Cluster1
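
Combining these approaches, the Coherence-related start arguments for a managed server might look like the following. This is an illustrative sketch only: the host name, port, and cluster name are assumptions that must match your own topology and the port ranges in Table 3-2:

# Illustrative values; substitute your own host, port range, and cluster name
-Dtangosol.coherence.wka1=SOAHOST1VHN1
-Dtangosol.coherence.localhost=SOAHOST1VHN1
-Dtangosol.coherence.localport=8088
-Dtangosol.coherence.localport.adjust=false
-Dtangosol.coherence.cluster=SOA_Cluster1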
    

For more information on these options, refer to the Coherence cluster configuration documentation at the following URL:

http://download.oracle.com/docs/cd/E24290_01/coh.371/e22837/cluster_setup.htm

16.13.20 Sudo Error Occurs During Server Migration

Problem: When running wlsifconfig for server migration, the following warning displays:

sudo: sorry, you must have a tty to run sudo

Solution: By default, sudo requires an interactive terminal (tty), so the WebLogic user ('oracle') cannot run sudo from a background script. To solve this, add the following line to /etc/sudoers:

Defaults:oracle !requiretty
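
With the superuser privileges from Section 14.6 in place, the complete entries might look like the following. This is a sketch: the granted commands and their paths (/sbin/ifconfig and /sbin/arping) are assumptions that must match your configuration from Section 14.6:

Defaults:oracle !requiretty
oracle ALL=NOPASSWD: /sbin/ifconfig,/sbin/arping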

See also, Section 14.6, "Setting Environment and Superuser Privileges for the wlsifconfig.sh Script".