14 Managing the Topology for an Exalogic Enterprise Deployment

This chapter describes some operations that you can perform after you have set up the Fusion Middleware SOA topology. These operations include monitoring, scaling, backing up your topology, and troubleshooting.


14.1 Overview of Managing the Topology

After configuring the SOA enterprise deployment, use the information in this chapter to manage the topology.

SOA applications are deployed as composites, consisting of different kinds of components. SOA composite applications include the following:

  • Service components such as Oracle Mediator for routing, BPEL processes for orchestration, human tasks for workflow approvals, spring components for integrating Java interfaces into SOA composite applications, and decision services for working with business rules.

  • Binding components (services and references) for connecting SOA composite applications to external services, applications, and technologies.

These components are assembled into a single SOA composite application. This chapter offers tips for managing and troubleshooting SOA composite applications in an Enterprise Deployment Topology on Oracle Exalogic.

For information on monitoring SOA composite applications, see "Monitoring SOA Composite Applications" in the Oracle Fusion Middleware Administrator's Guide for Oracle SOA Suite and Oracle Business Process Management Suite.

For information on managing SOA composite applications, see "Managing SOA Composite Applications" in the Oracle Fusion Middleware Administrator's Guide for Oracle SOA Suite and Oracle Business Process Management Suite.

At some point you may need to expand the topology by scaling it up or scaling it out. See Section 14.5, "Scaling Up the Topology (Adding Managed Servers to Existing Nodes)," and Section 14.6, "Scaling Out the Topology (Adding Managed Servers to New Nodes)," for information about the difference between scaling up and scaling out, and for instructions on performing these tasks.

Back up the topology before and after any configuration changes. Section 14.8, "Backing Up the Oracle SOA Enterprise Deployment," provides information about the directories and files that should be backed up to protect against failure as a result of configuration changes.

This chapter also documents solutions for possible known issues that may occur after you have configured the topology.

14.2 Tips for Deploying Composites and Artifacts in a SOA Enterprise Deployment Topology

This section describes tips for deploying composites and artifacts for a SOA enterprise deployment. See the "Deploying SOA Composite Applications" chapter in the Oracle Fusion Middleware Administrator's Guide for Oracle SOA Suite and Oracle Business Process Management Suite for instructions on deploying composites.

Deploy composites to a specific server address

When deploying SOA composites to a SOA enterprise deployment topology, deploy to a specific server's address and not to the load balancer address (soa.mycompany.com). Deploying to the load balancer address requires a direct connection from the deployer nodes to the external load balancer address, which may require opening additional ports in the firewalls used by the system.
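
If you script deployments, the SOA WLST command sca_deployComposite takes the target server URL directly, which makes it easy to follow this recommendation. The following is a minimal sketch; the host name, port, and SAR location are example values, and the sca_* commands are available only when WLST is run from the SOA Oracle home (for example, SOA_ORACLE_HOME/common/bin/wlst.sh):

    # Deploy directly to one SOA managed server, not to soa.mycompany.com.
    # The server URL and SAR location below are example values.
    sca_deployComposite("http://SOAHOST1-PRIV-V1:8001",
                        "/tmp/sca_HelloWorld_rev1.0.jar",
                        true)   # overwrite an existing revision of the composite, if present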

Use the B2B Console to deploy agreements and purge/import metadata

For B2B, deploy agreements and purge or import metadata ONLY from the GUI available in the B2B console. Do not use the command line utility; using it for these operations may cause inconsistencies and errors in the B2B system.

Additional instructions for FOD deployment

If you are deploying the SOA Fusion Order Demo, complete the deployment steps provided in the FOD's README file, and then complete the following additional steps:

  1. Change the nostage property to false in the build.xml file of the Web applications so that ear files are copied to each node. Edit the CreditCardAuthorization and OrderApprovalHumanTask build.xml files, located in the FOD_dir\CreditCardAuthorization\bin and FOD_dir\OrderApprovalHumanTask\bin directories, and change the following field:

    <target name="deploy-application">
         <wldeploy action="deploy" name="${war.name}"
           source="${deploy.ear.source}" library="false"
           nostage="false"
           user="${wls.user}" password="${wls.password}"
           verbose="false" adminurl="${wls.url}"
           remote="true" upload="true"
           targets="${server.targets}" />
       </target>
    

    To:

    <target name="deploy-application">
         <wldeploy action="deploy" name="${war.name}"
           source="${deploy.ear.source}" library="false"
           nostage="true"
           user="${wls.user}" password="${wls.password}"
           verbose="false" adminurl="${wls.url}"
           remote="true" upload="true"
           targets="${server.targets}" />
       </target>
    
  2. Change the target for the Web applications so that deployments are targeted to the SOA Cluster and not to an individual server. Edit the build.properties file for FOD, located in the FOD_Dir/bin directory, and change the following field:

    # wls target server (for shiphome set to server_soa, for ADRS use AdminServer)
    # Set this to the SOA cluster name used in your SOA EDG:
    server.targets=SOA_Cluster
    
  3. Change the JMS seed templates so that instead of regular Destinations, Uniform Distributed Destinations are used and the JMS artifacts are targeted to the Enterprise Deployment JMS Modules. Edit the createJMSResources.seed file, located in the FOD_DIR\bin\templates directory, and change:

    # lookup the SOAJMSModule - it's a system resource
    jmsSOASystemResource = lookup("SOAJMSModule","JMSSystemResource")

    jmsResource = jmsSOASystemResource.getJMSResource()

    cfbean = jmsResource.lookupConnectionFactory('DemoSupplierTopicCF')
    if cfbean is None:
        print "Creating DemoSupplierTopicCF connection factory"
        demoConnectionFactory = jmsResource.createConnectionFactory('DemoSupplierTopicCF')
        demoConnectionFactory.setJNDIName('jms/DemoSupplierTopicCF')
        demoConnectionFactory.setSubDeploymentName('SOASubDeployment')

    topicbean = jmsResource.lookupTopic('DemoSupplierTopic')
    if topicbean is None:
        print "Creating DemoSupplierTopic jms topic"
        demoJMSTopic = jmsResource.createTopic("DemoSupplierTopic")
        demoJMSTopic.setJNDIName('jms/DemoSupplierTopic')
        demoJMSTopic.setSubDeploymentName('SOASubDeployment')


    To:

    jmsSOASystemResource = lookup("SOAJMSModule","JMSSystemResource")

    jmsResource = jmsSOASystemResource.getJMSResource()

    topicbean = jmsResource.lookupTopic('DemoSupplierTopic_UDD')
    if topicbean is None:
        print "Creating DemoSupplierTopic_UDD jms topic"
        # Create a uniform distributed topic so clustering works automatically
        demoJMSTopic = jmsResource.createUniformDistributedTopic("DemoSupplierTopic_UDD")
        demoJMSTopic.setJNDIName('@jms.topic.jndi@')
        # Replace the subdeployment name with the one that appears in the WLS
        # Administration Console as listed for the SOAJMSModule
        demoJMSTopic.setSubDeploymentName('SOASubDeployment')
    else:
        print "Found DemoSupplierTopic_UDD topic - noop"
    

    Note that, ideally, you should use a separate JMS module for the FOD JMS resources, as shown in the sketch below.
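
    The following WLST sketch shows that approach. It assumes a connection to the Administration Server; the module, subdeployment, and JMS server names (FODJMSModule, FODSubDeployment, SOAJMSServer_1, SOAJMSServer_2) are illustrative only and are not names mandated by FOD:

    edit()
    startEdit()
    # Create a dedicated JMS module for the FOD resources, targeted to the cluster
    fodModule = cmo.createJMSSystemResource('FODJMSModule')
    fodModule.addTarget(getMBean('/Clusters/SOA_Cluster'))
    # Subdeployment targeted to the cluster's JMS servers (example names)
    sub = fodModule.createSubDeployment('FODSubDeployment')
    sub.addTarget(getMBean('/JMSServers/SOAJMSServer_1'))
    sub.addTarget(getMBean('/JMSServers/SOAJMSServer_2'))
    # Create the uniform distributed topic inside the new module
    jmsResource = fodModule.getJMSResource()
    demoJMSTopic = jmsResource.createUniformDistributedTopic('DemoSupplierTopic_UDD')
    demoJMSTopic.setJNDIName('jms/DemoSupplierTopic')
    demoJMSTopic.setSubDeploymentName('FODSubDeployment')
    save()
    activate()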

  4. Update the managed.server.host entry in the build.properties file to the EoIB listen address of one of the two SOA servers.

  5. Update the admin.server.host entry in build.properties to the EoIB listen address of the Administration Server.

14.3 Managing Space in the SOA Infrastructure Database

Although not all composites use the database with the same frequency, the service engines generate a considerable amount of data in the CUBE_INSTANCE and MEDIATOR_INSTANCE tables. A lack of space in the database may prevent SOA composites from functioning.

To manage space in the SOA infrastructure database:

  • Watch for generic errors, such as "oracle.fabric.common.FabricInvocationException" in the Oracle Enterprise Manager Fusion Middleware Control console (dashboard for instances).

  • Search in the SOA server's logs for errors, such as:

    Error Code: 1691
    ...
    ORA-01691: unable to extend lob segment
    SOAINFRA.SYS_LOB0000108469C00017$$ by 128 in tablespace SOAINFRA
    

    These messages are typically indicators of space issues in the database that likely require adding more data files or more space to the existing files. The SOA database administrator should determine the extension policy and parameters to be used when adding space.

  • Purge old composite instances to reduce the size of the SOA Infrastructure database. Oracle does not recommend using Oracle Enterprise Manager Fusion Middleware Control for this type of operation because, in most cases, the operations cause a transaction timeout. Instead, use the purge packages provided with the Repository Creation Utility. For example:

    DECLARE
      FILTER INSTANCE_FILTER := INSTANCE_FILTER();
      MAX_INSTANCES NUMBER;
      DELETED_INSTANCES NUMBER;
      PURGE_PARTITIONED_DATA BOOLEAN := TRUE;
    BEGIN
      FILTER.COMPOSITE_PARTITION_NAME := 'default';
      FILTER.COMPOSITE_NAME := 'FlatStructure';
      FILTER.COMPOSITE_REVISION := '10.0';
      FILTER.STATE := FABRIC.STATE_UNKNOWN;
      FILTER.MIN_CREATED_DATE := to_timestamp('2010-09-07','YYYY-MM-DD');
      FILTER.MAX_CREATED_DATE := to_timestamp('2010-09-08','YYYY-MM-DD');
      MAX_INSTANCES := 1000;

      DELETED_INSTANCES := FABRIC.DELETE_COMPOSITE_INSTANCES(
        FILTER => FILTER,
        MAX_INSTANCES => MAX_INSTANCES,
        PURGE_PARTITIONED_DATA => PURGE_PARTITIONED_DATA
      );
    END;
    /
    

    This example deletes up to 1000 instances of the FlatStructure composite (revision 10.0), created between 2010-09-07 and 2010-09-08, that are in the UNKNOWN state. For more information on the operations included in the SQL packages provided, see "Managing SOA Composite Applications" in the Oracle Fusion Middleware Administrator's Guide for Oracle SOA Suite. Always use the scripts provided for a correct purge; deleting rows in just the composite_dn table may leave dangling references in other tables used by the Oracle Fusion Middleware SOA Infrastructure. For a more detailed explanation, refer to the SOA 11g Database Growth Management Strategy paper on the Oracle FMW MAA site.

14.4 Configuring UMS Drivers

UMS driver configuration is not automatically propagated in a SOA cluster. To propagate UMS driver configuration in a cluster:

  • Apply the UMS driver configuration in each server in the Enterprise Deployment topology that is using the driver.

  • If you are using server migration, servers are moved to a different node's domain directory. Pre-create the UMS driver configuration in the failover node. The UMS driver configuration file is located in the following directory:

    MSERVER_HOME/servers/server_name/tmp/_WL_user/ums_driver_name/*/configuration/driverconfig.xml
    

    Where '*' represents a directory name that is randomly generated by Oracle WebLogic Server during deployment. For example, 3682yq.

To prepare for possible failovers, create the UMS driver configuration file in the failover node by forcing a server migration and then copying the file from the source node.

For example, to create the file for BAM:

  1. Configure the driver for WLS_BAM1 in BAMHOST1.

  2. Force a failover of WLS_BAM1 to BAMHOST2. Verify the following directory structure for the UMS driver configuration in the failover node:

    cd MSERVER_HOME/servers/server_name/tmp/_WL_user/ums_driver_name/*/configuration/
    

    (where '*' represents a directory whose name is randomly generated by WLS during deployment, for example, "3682yq").

  3. Do a remote copy of the driver configuration file from BAMHOST1 to BAMHOST2:

    BAMHOST1> scp MSERVER_HOME/servers/server_name/tmp/_WL_user/ums_driver_name/*/configuration/driverconfig.xml \
    oracle@BAMHOST2:MSERVER_HOME/servers/server_name/tmp/_WL_user/ums_driver_name/*/configuration/
    
  4. Restart the driver for these changes to take effect.

    To restart the driver:

    1. Log on to the Oracle WebLogic Administration Console.

    2. Expand the environment node on the navigation tree.

    3. Click on Deployments.

    4. Select the driver.

    5. Click Stop->When work completes and confirm the operation.

    6. Wait for the driver to transition to the "Prepared" state (refresh the administration console page, if required).

    7. Select the driver again, and click Start->Servicing all requests and confirm the operation.

Verify in Oracle Enterprise Manager Fusion Middleware Control that the properties for the driver have been preserved.
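
As an alternative to the console steps above, you can restart the driver deployment with the WLST stopApplication and startApplication commands after connecting to the Administration Server. This is a sketch only; the deployment name below (usermessagingdriver-email) is an example, so substitute the driver name that appears in your console:

    # Run after connecting to the Administration Server, for example:
    # connect('weblogic', 'password', 't3://ADMINVHN:7001')
    stopApplication('usermessagingdriver-email')
    startApplication('usermessagingdriver-email')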

14.5 Scaling Up the Topology (Adding Managed Servers to Existing Nodes)

When you scale up the topology, you already have a node that runs a managed server configured with Fusion Middleware components, or a managed server with WSM-PM. The node contains a WebLogic Server home and an Oracle Fusion Middleware SOA home in shared storage. Use these existing installations (such as the WebLogic Server home, the Oracle Fusion Middleware home, and the domain directories) when you create the new managed servers (WLS_SOAn or WLS_WSMn). You do not need to install the WebLogic Server or SOA binaries in a new location, or run pack and unpack.

This section contains the following topics:

  • Section 14.5.1, "Planning for Scale Up"

  • Section 14.5.2, "Scale-up Procedure for Oracle SOA"

  • Section 14.5.3, "Scale-up Procedure for Oracle Service Bus"

14.5.1 Planning for Scale Up

When you scale up a server that uses server migration, plan for your capacity and resource allocation needs. Consider the following scenario:

  • Server1 exists in node1 and uses server migration in its cluster with server2 on node2.

  • Server3 is added to the cluster in node1 in a scale up operation. It also uses server migration.

In this scenario, all servers (server1, server2, server3, and the Administration Server) could end up running on node1 or on node2. This means each node must be sized with enough resources to sustain the worst-case scenario, in which all servers that use server migration end up on a single node (as defined in each server's candidate machine configuration).

14.5.2 Scale-up Procedure for Oracle SOA

To scale up the SOA topology:

  1. Configure a TX persistent store for the new server in a location visible from the other nodes, according to the shared storage recommendations provided in this guide.

    1. From the Administration Console, select Server_name and then the Services tab.

    2. Under Default Store, in Directory, enter the path to the directory where the data files are stored:

      ASERVER_HOME/tlogs
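
    Alternatively, the same change can be scripted with WLST once the new server exists. A minimal sketch, assuming the new managed server is named WLS_SOA3 (an example name):

      edit()
      startEdit()
      # The default file store MBean is named after its server
      cd('/Servers/WLS_SOA3/DefaultFileStore/WLS_SOA3')
      # Replace ASERVER_HOME with the actual domain home path on shared storage
      cmo.setDirectory('ASERVER_HOME/tlogs')
      save()
      activate()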
      
  2. Using the Oracle WebLogic Server Administration Console, clone WLS_SOA1 or WLS_WSM1 into a new managed server. The source managed server to clone should be one that already exists on the node where you want to run the new managed server.

    To clone a managed server:

    1. From the Domain Structure window of the Oracle WebLogic Server Administration Console, expand the Environment node and then Servers. The Summary of Servers page appears.

    2. Click Lock & Edit and select the managed server that you want to clone (for example, WLS_SOA1).

    3. Click Clone.

    4. Name the new managed server WLS_SOAn, where n is a number that identifies the new managed server. In this case, you are adding a new server to Node 1, where WLS_SOA1 was running.

    For the remainder of the steps, you are adding a new server to SOAHOST1, which is already running WLS_SOA1.

  3. For the listen address, assign the host name or IP address to use for this new managed server. For the SOA servers' listen address, assign a new floating IPoIB host name. This is the default listen address for the new server and is an Exalogic-rack internal floating address. The virtual IP should be different from the one used by the managed server that is already running.

    Note:

    For WLS_WSM servers, you can use a different port and the same listen address used for the existing WSM server, since WSM servers do not use server migration. Run the Java Object Cache configuration utility again to include the new server in the JOC distributed cache, as described in Section 8.17, "Configuring the Java Object Cache for Oracle WSM." You can use the same discover port for multiple WLS_WSM servers in the same node. Repeat the steps provided in Section 8.17 for each WLS_WSM server so that the server list is updated.

  4. Create JMS servers for SOA and UMS on the new managed server.

    Note:

    You do not have to create JMS servers for SOA and UMS on the new managed server if you are scaling up the WLS_WSM managed server. This procedure is required only if you are scaling up the WLS_SOA managed servers.

    To create the JMS servers for SOA and UMS:

    1. Use the Oracle WebLogic Server Administration Console to create two new persistent stores named SOAJMSFileStore_n and PS6SOAJMSFileStore_auto_N for the new SOAJMSServer_n and PS6SOAJMSServer_auto_n JMS servers (which will be created in a later step). Specify the path recommended in Chapter 4, "Configuring Storage for an Exalogic Enterprise Deployment," as the directory for the JMS persistent stores:

      ASERVER_HOME/jms
      
    2. Create two new JMS servers for SOA, named SOAJMSServer_n and PS6SOAJMSServer_auto_n. Use SOAJMSFileStore_n and PS6SOAJMSFileStore_auto_N for these JMS servers. Target the JMS servers to the recently created managed server (WLS_SOAn).

    3. Create a new persistent store for the new UMS JMS server (which will be created in a later step) and name it, for example, UMSJMSFileStore_N. Specify the path recommended in Chapter 4, "Configuring Storage for an Exalogic Enterprise Deployment," as the directory for the JMS persistent stores:

      ASERVER_HOME/jms
      

      Note:

      It is also possible to assign SOAJMSFileStore_N as the store for the new UMS JMS servers. For the purpose of clarity and isolation, individual persistent stores are used in the following steps.

    4. Create a new JMS Server for UMS: for example, UMSJMSServer_N. Use the UMSJMSFileStore_N for this JMS server. Target the UMSJMSServer_N server to the recently created managed server (WLS_SOAn).

    5. For BPM Systems only: Create two new persistent stores named BPMJMSFileStore_n and AGJMSFileStore_auto_n for the new BPMJMSServer_N and AGJMSServer_auto_n JMS servers (which will be created in a later step). Specify the path for the store as recommended in Chapter 4, "Configuring Storage for an Exalogic Enterprise Deployment" as the directory for the JMS persistent stores:

      ASERVER_HOME/jms
      

      Note:

      This directory must exist before the managed server is started, or the start operation fails.

      You can also assign SOAJMSFileStore_N as store for the new BPM JMS Servers. For the purpose of clarity and isolation, individual persistent stores are used in the following steps.

    6. For BPM systems only: Create two new JMS Servers for BPM named BPMJMSServer_N and AGJMSServer_auto_n. Use the BPMJMSFileStore_N and AGJMSFileStore_auto_n for these JMS Servers. Target these servers to the recently created managed server (WLS_SOAn).

    7. Target the UMSJMSSystemResource to the SOA_Cluster, as it may have changed during extend operations. To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click UMSJMSSystemResource and open the Targets tab. Make sure all of the servers in the SOA_Cluster appear selected (including the recently cloned WLS_SOAn).

    8. Update the SubDeployment Targets for SOA, UMS, and BPM JMS Modules (if applicable) to include the recently created JMS servers.

      To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click the JMS module (for SOA: SOAJMSModule, for BPM: BPMJMSModule, and for UMS: UMSJMSSystemResource), represented as a hyperlink in the Names column of the table. The Settings page for the module appears. Open the SubDeployments tab. The subdeployment for the deployment module appears.

      Note:

      This subdeployment module name is a random name in the form of SOAJMSServerXXXXXX, or UMSJMSServerXXXXXX, or BPMJMSServerXXXXXX, resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).

      Click on it. Add the new JMS Server (for UMS add UMSJMSServer_N, for SOA add SOAJMSServer_N, for BPM add BPMJMSServer_N ).
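
    The stores and JMS servers in this step can also be created with WLST. A minimal sketch for one store/server pair, assuming the new managed server is WLS_SOA3 (the store, server, and directory values are examples):

      edit()
      startEdit()
      fileStore = cmo.createFileStore('SOAJMSFileStore_3')
      fileStore.setDirectory('ASERVER_HOME/jms')           # shared storage path
      fileStore.addTarget(getMBean('/Servers/WLS_SOA3'))
      jmsServer = cmo.createJMSServer('SOAJMSServer_3')
      jmsServer.setPersistentStore(fileStore)
      jmsServer.addTarget(getMBean('/Servers/WLS_SOA3'))
      save()
      activate()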

  5. Configure Oracle Coherence for deploying composites for the new server, as described in Section 9.4, "Configuring Oracle Coherence for Deploying Composites."

    Note:

    Only the localhost field must be changed for the server. Replace the localhost with the listen address of the new server added:

    -Dtangosol.coherence.localhost=SOAHOST1-PRIV-Vn
    
  6. Update the cluster address to include the new server:

    1. In the Administration Console, select Environment, and then Cluster.

    2. Click the SOA_Cluster server.

    3. Click Lock & Edit.

    4. Add the new server's address and port to the Cluster address field. For example:

      SOAHOST1-PRIV-V1:8001,SOAHOST2-PRIV-V1:8001,SOAHOST1-PRIV-Vn:8001

    5. Save and activate the changes.
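
    The same update can be scripted with WLST; a sketch (the host names and ports are the examples used above):

      edit()
      startEdit()
      cd('/Clusters/SOA_Cluster')
      cmo.setClusterAddress('SOAHOST1-PRIV-V1:8001,SOAHOST2-PRIV-V1:8001,SOAHOST1-PRIV-Vn:8001')
      save()
      activate()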

  7. Disable host name verification for the new managed server.

    Before starting and verifying the WLS_SOAn managed server, you must disable host name verification. You can re-enable it after you have configured server certificates for the communication between the Oracle WebLogic Administration Server and the Node Manager in SOAHOSTn.

    If the source server from which the new one was cloned already had host name verification disabled, these steps are not required (the host name verification setting is propagated to the cloned server). To disable host name verification:

    1. In the Oracle Fusion Middleware Enterprise Manager Console, select Oracle WebLogic Server Administration Console.

    2. In the Domain Structure window, expand the Environment node and then click Servers.

      The Summary of Servers page appears.

    3. Click the name of the server (represented as a hyperlink) in the Name column of the table for which you want to configure migration.

      The settings page for the selected server appears.

    4. Click the SSL tab, and click Advanced.

    5. Set Hostname Verification to None.

    6. Click Save.

      Note:

      Add the new virtual IP addresses to the key stores and change the server identity (private key alias) when you are using host name verification.
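
    If you prefer to script this setting, the server's SSL MBean exposes it directly. A sketch, assuming the new server is named WLS_SOA3 (an example name):

      edit()
      startEdit()
      cd('/Servers/WLS_SOA3/SSL/WLS_SOA3')
      cmo.setHostnameVerificationIgnored(true)   # same effect as 'None' in the console
      save()
      activate()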

  8. Create the appropriate HTTP, T3 and Replication channels for the new server. See sections Section 9.6, "Configuring Network Channels for HTTP and T3 Clients Through EoIB," and Section 9.11, "Enabling Cluster-Level Session Replication Enhancements."

    After cloning, the listen addresses listed in the new managed server's channels will be incorrect. Update them with the appropriate listen addresses for the new server.

  9. Add the new server's listen address to the origin-server-pool-1 in Oracle Traffic Director. See section Section 7.7, "Defining Oracle Traffic Director Virtual Servers for an Exalogic Enterprise Deployment" for details.

  10. Reconfigure the JMS Adapter with the new server using the FactoryProperties field in the Administration Console. Click on the corresponding cell under the Property value and enter the following:

    java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory;java.naming.provider.url=t3://SOAHOST1-PRIV-V1:8001,SOAHOST2-PRIV-V1:8001,SOAHOST1-PRIV-Vn:8001;java.naming.security.principal=weblogic;java.naming.security.credentials=weblogic
    

    Click Save and Activate.

    For more information about changing the FactoryProperties value, see section Section 9.12.2, "Enabling High Availability for Oracle JMS Adapters."

  11. Configure server migration for the new managed server. To configure server migration using the Oracle WebLogic Server Administration Console:

    Note:

    Because this is a scale-up operation, the node should already contain a Node Manager and environment configured for server migration that includes netmask, interface, wlsifconfig script superuser privileges, and so on. The floating IP for the new SOA managed server should also be already present.

    1. In the Domain Structure window, expand the Environment node and then click Servers. The Summary of Servers page appears.

    2. Click the name of the server (represented as a hyperlink) in Name column of the table for which you want to configure migration.

      The settings page for the selected server appears.

    3. Click the Migration subtab.

    4. In the Migration Configuration section, select the servers that participate in migration in the Available window by clicking the right arrow. Select the same migration targets as for the servers that already exist on the node.

      For example, for new managed servers on SOAHOST1, which is already running WLS_SOA1, select SOAHOST2. For new managed servers on SOAHOST2, which is already running WLS_SOA2, select SOAHOST1.

      Note:

      The appropriate resources must be available to run the managed servers concurrently during migration.

    5. Choose the Automatic Server Migration Enabled option and click Save.

      This enables the Node Manager to start a failed server on the target node automatically.

    6. If the new server's listen address does not fall into the range defined in nodemanager.properties, update it accordingly:

      bond0=SOAHOST1-PRIV-V1-IP-SOAHOST1-PRIV-Vn-IP,NetMask=255.255.248.0
      

      For more information see Section 12.2.2, "Editing the Node Manager Property File."

    7. Restart the Administration Server, managed servers, and Node Manager.

      To restart the Administration Server, use the procedure in Section 8.5.3, "Starting the Administration Server on SOAHOST1."
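
    The migration settings above can also be applied with WLST. A sketch for a new server on SOAHOST1 whose migration target is SOAHOST2 (the server and machine names are examples):

      import jarray
      from weblogic.management.configuration import MachineMBean

      edit()
      startEdit()
      cd('/Servers/WLS_SOA3')
      cmo.setAutoMigrationEnabled(true)
      # Use the same migration targets as the existing servers on the node
      cmo.setCandidateMachines(jarray.array([getMBean('/Machines/SOAHOST2')], MachineMBean))
      save()
      activate()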

  12. Test server migration for this new server. To test migration, perform the following from the node where you added the new server:

    1. Stop the WLS_SOAn managed server using the following command:

      kill -9 pid
      

      You can identify the PID of the WLS_SOAn managed server using the following command:

      ps -ef | grep WLS_SOAn
      
    2. Monitor the Node Manager console for a message indicating that WLS_SOAn's floating IP has been disabled.

    3. Wait for the Node Manager to attempt a second restart of WLS_SOAn.

      Node Manager waits for a fence period of 30 seconds before trying this restart.

    4. Once Node Manager restarts the server, stop it again.

      Node Manager logs a message indicating that the server will not be restarted again locally.

14.5.3 Scale-up Procedure for Oracle Service Bus

You can scale up the Oracle Service Bus servers by adding new managed servers to nodes that are already running one or more managed servers.

Prerequisites

Before scaling up your Oracle Service Bus servers, review the following prerequisites:

  • You already have a cluster that runs managed servers configured with Oracle Service Bus components.

  • The nodes contain a Middleware home, an Oracle home (SOA and Oracle Service Bus), and a domain directory for existing managed servers.

  • The source managed server you clone already exists on the node where you want to run the new managed server.

You can use the existing installations (the Middleware home and the domain directories) to create the new WLS_OSB servers. You do not need to install the SOA or Oracle Service Bus binaries in a new location, or run pack and unpack.

To scale up the Oracle Service Bus servers:

  1. Configure a TX persistent store for the new server in a location visible from the other nodes, according to the shared storage recommendations provided in this guide.

    1. From the Administration Console, select Server_name and then the Services tab.

    2. Under Default Store, in Directory, enter the path to the directory where the data files are stored:

      ASERVER_HOME/tlogs
      
  2. Using the Administration Console, clone an existing Oracle Service Bus managed server into a new managed server:

    1. Select Environment and then Servers.

    2. Select the managed server that you want to clone (for example, WLS_OSB1).

    3. Select Clone.

      Name the new managed server WLS_OSBn, where n is a number to identify the new managed server.

      For these steps you are adding a new server to SOAHOST1, which is already running WLS_OSB1.

  3. For the server's listen address, assign a new floating IPoIB host name to use for this new managed server. This is the default listen address for the new Oracle Service Bus server and is an Exalogic-rack internal floating address. This virtual host name should be different from the one used by the managed server that is already running.

    To set the managed server listen address:

    1. Log into the Oracle WebLogic Server Administration Console.

    2. In the Change Center, click Lock & Edit.

    3. In the Domain Structure window expand the Environment node.

    4. Click Servers.

      The Summary of Servers page appears.

    5. In the Names column of the table, select the managed server with the listen address you want to update.

      The Settings page for that managed server appears.

    6. Set the Listen Address to SOAHOST1-PRIV-Vn and click Save.

      Restart the managed server for the change to take effect.
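
    A WLST sketch of the same change (WLS_OSB3 is an example name for the new server; the listen address is the floating host name assigned above):

      edit()
      startEdit()
      cd('/Servers/WLS_OSB3')
      cmo.setListenAddress('SOAHOST1-PRIV-Vn')
      save()
      activate()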

  4. Update the cluster address to include the new server:

    1. In the Administration console, select Environment, and then Cluster.

    2. Click the OSB_Cluster server.

      The Settings Screen for the OSB_Cluster appears.

    3. In the Change Center, click Lock & Edit.

    4. Add the new server's address and port to the Cluster Address field. For example:

      SOAHOST1-PRIV-V2:8011,SOAHOST2-PRIV-V2:8011,SOAHOST1-PRIV-VN:8011
      
  5. Create the appropriate HTTP and T3 channels for the new server.

    After cloning, the listen addresses listed in the new managed server's channels will be incorrect. Update them with the appropriate listen addresses for the new server.

    For more information, see Section 9.6, "Configuring Network Channels for HTTP and T3 Clients Through EoIB."

  6. Add the new server's listen address to the osb-pool origin-server pool in Oracle Traffic Director.

    For more information, see Section 7.7, "Defining Oracle Traffic Director Virtual Servers for an Exalogic Enterprise Deployment."

  7. If your Oracle Service Bus configuration includes one or more business services that use JMS request/response functionality, perform the following procedure using the Oracle Service Bus Console after adding the new managed server to the cluster:

    1. In the Change Center, click Create to create a session.

    2. Using the Project Explorer, locate and select a business service that uses JMS request/response.

      Business services of this type display Messaging Service as their Service Type.

    3. At the bottom of the View Details page, click Edit.

    4. If there is a cluster address in the endpoint URI, add the new server to the cluster address.

    5. In the Edit a Business Service - Summary page, click Save.

    6. Repeat the previous steps for each remaining business service that uses JMS request/response.

    7. In the Change Center, click Activate.

    8. Restart the managed server.

    9. Restart the Administration Server.

      The business services are now configured for operation in the extended domain.

      Note:

      For business services that use a JMS MessageID correlation scheme, edit the connection factory settings to add an entry to the table mapping managed servers to queues. For information about configuring queues and topic destinations, see "JMS Server Targeting" in Oracle Fusion Middleware Configuring and Managing JMS for Oracle WebLogic Server.

  8. If your Oracle Service Bus configuration includes one or more proxy services that use JMS endpoints with cluster addresses, perform the following procedure using the Oracle Service Bus Console after adding the new managed server to the cluster:

    1. In the Change Center, click Create to create a session.

    2. Using the Project Explorer, locate and select a proxy service that uses JMS endpoints with cluster addresses.

    3. At the bottom of the View Details page, click Edit.

    4. If there is a cluster address in the endpoint URI, add the new server to the cluster address.

    5. On the Edit a Proxy Service - Summary page, click Save.

    6. Repeat the previous steps for each remaining proxy service that uses JMS endpoints with cluster addresses.

    7. In the Change Center, click Activate.

    8. Restart the managed server.

      The proxy services are now configured for operation in the extended domain.

  9. Update the Oracle Service Bus result cache Coherence configuration for the new server:

    1. Log into Oracle WebLogic Server Administration Console.

    2. In the Change Center, click Lock & Edit.

    3. In the Domain Structure window, expand the Environment node.

    4. Click Servers.

      The Summary of Servers page appears.

    5. Click the name of the server (a hyperlink) in the Name column of the table.

      The settings page for the selected server appears.

    6. Click the Server Start tab.

      Enter the following for WLS_OSBn (on a single line, without carriage returns):

      -DOSB.coherence.localhost=SOAHOST1-PRIV-Vn -DOSB.coherence.localport=7890
      -DOSB.coherence.wka1=SOAHOST1-PRIV-V2 -DOSB.coherence.wka1.port=7890
      -DOSB.coherence.wka2=SOAHOST2-PRIV-V2 -DOSB.coherence.wka2.port=7890
      

      Note:

      For this configuration, servers WLS_OSB1 and WLS_OSB2 must be running (listening on the addresses used in the rest of this guide, SOAHOST1-PRIV-V2 and SOAHOST2-PRIV-V2) when WLS_OSBn is started. This allows WLS_OSBn to join the Coherence cluster started by either WLS_OSB1 or WLS_OSB2 using the WKA addresses specified. In addition, make sure WLS_OSB1 and WLS_OSB2 are started before WLS_OSBn when all three servers are restarted. This ensures WLS_OSBn joins the cluster started by one of WLS_OSB1 or WLS_OSB2. If the order in which the servers start is not important, add the host and port for WLS_OSBn as a WKA for WLS_OSB1 and WLS_OSB2, and also add WLS_OSBn as a WKA for itself.

    7. Save and activate the changes.

      Restart the Oracle Service Bus servers for the changes to take effect.
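
    Because these are server start arguments, they can also be set on the ServerStart MBean with WLST; they take effect only when the server is started by Node Manager. A sketch, assuming the new server is named WLS_OSB3 (an example name; the host names and ports are the examples used above):

      edit()
      startEdit()
      cd('/Servers/WLS_OSB3/ServerStart/WLS_OSB3')
      # Adjacent string literals are concatenated into a single argument line
      cmo.setArguments('-DOSB.coherence.localhost=SOAHOST1-PRIV-Vn '
                       '-DOSB.coherence.localport=7890 '
                       '-DOSB.coherence.wka1=SOAHOST1-PRIV-V2 -DOSB.coherence.wka1.port=7890 '
                       '-DOSB.coherence.wka2=SOAHOST2-PRIV-V2 -DOSB.coherence.wka2.port=7890')
      save()
      activate()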

  10. Reconfigure the JMS Adapter with the new server using the FactoryProperties field in the Administration Console. Click on the corresponding cell under the Property value and enter the following:

    java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory;java.naming.provider.url=t3://SOAHOST1-PRIV-V2:8011,SOAHOST2-PRIV-V2:8011,SOAHOST1-PRIV-Vn:8011;java.naming.security.principal=weblogic;java.naming.security.credentials=weblogic1
    

    Click Save and Activate.

  11. Create JMS Servers and persistent stores for Oracle Service Bus reporting/internal destinations on the new managed server.

    1. Use the Oracle WebLogic Server Administration Console to create a new persistent store for the new Oracle Service Bus reporting JMS server (which will be created in a later step) and name it, for example, OSB_rep_JMSFileStore_N. Specify the path for the store. This should be a directory on shared storage, as recommended in Chapter 4, "Configuring Storage for an Exalogic Enterprise Deployment." For example:

      ASERVER_HOME/jms/
      

      Target the store to the newly cloned server (WLS_OSBn).

    2. Create a new JMS Server for Oracle Service Bus, for example, OSB_rep_JMSServer_N. Use the OSB_rep_JMSFileStore_N for this JMSServer. Target the OSB_rep_JMSServer_N Server to the recently created Managed Server (WLS_OSBn).

    3. Update the SubDeployment targets for the "jmsResources" Oracle Service Bus JMS Module to include the recently created OSB JMS Server:

      Expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears.

      Click jmsResources (a hyperlink in the Names column of the table). The Settings page for jmsResources appears.

      Click the SubDeployments tab. The subdeployment module for jmsresources appears.

      Note:

      This subdeployment module name for destinations is a random name in the form of wlsbJMSServerXXXXXX resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_OSB1 and WLS_OSB2).

      Click the wlsbJMSServerXXXXXX subdeployment and update the targets to include the new OSB_rep_JMSServer_n server.

  12. Create JMS Servers, persistent stores and destinations for OSB JAX-RPC on the new managed server.

    Note:

    WebLogic Advanced Web Services for JAX-RPC Extension uses regular (non-distributed) destinations to ensure that a locally processed request on a service gets enqueued only to a local member.

    1. Use the Oracle WebLogic Server Administration Console to create a new persistent store for the new WseeJMSServer and name it, for example, Wsee_rpc_JMSFileStore_N. Specify the path for the store. This should be a directory on shared storage, as recommended in Chapter 4, "Configuring Storage for an Exalogic Enterprise Deployment"

    2. Create a new JMS Server for OSB JAX-RPC, for example, OSB_rpc_JMSServer_N. Use the Wsee_rpc_JMSFileStore_N for this JMSServer. Target the OSB_rpc_JMSServer_N Server to the recently created Managed Server (WLS_OSBn).

    3. Update the WseeJmsModule OSB JMS module with destinations and the recently created OSB JMS server. To do so, expand the Services node and then expand the Messaging node, and choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click WseeJmsModule (a hyperlink in the Names column of the table). The Settings page for WseeJmsModule appears. Follow steps 4 through 10 to complete this step.

    4. In the Change Center, click Lock & Edit and click New.

    5. Select Queue and click Save.

    6. Click Create a New Subdeployment.

    7. Accept the default name and click OK.

    8. Select OSB_rpc_JMSServer_n as the target and click Finish.

    9. Update the local JNDI name for the destination:

      In the Change Center, click Lock & Edit.

      In the Settings for the WseeJmsModule page, click the DefaultCallbackQueue-WseeJmsServer_auto_n destination.

      In the general Configuration tab, click Advanced.

      Update the local JNDI name to weblogic.wsee.DefaultCallbackQueue.

    10. Repeat steps 4 through 9 for the DefaultQueue-WseeJmsServer_auto_n queue, using weblogic.wsee.DefaultQueue-WseeJmsServer_auto_n as the JNDI name and weblogic.wsee.DefaultQueue as the local JNDI name.

  13. Create a new SAF agent and target it to the newly added managed server:

    1. In the Oracle WebLogic Server Administration Console, expand Services, Messaging, and then Store-and-Forward Agents.

    2. Add a new SAF agent ReliableWseeSAFAgent_auto_N.

    3. Select persistent store Wsee_rpc_JMSFileStore_N (persistent store created for OSB JAX-RPC).

    4. Target the SAF Agent to the new managed server and activate changes.
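
    A WLST sketch of this SAF agent creation (the agent and store names are the examples used above; WLS_OSB3 is an example name for the new server):

      edit()
      startEdit()
      safAgent = cmo.createSAFAgent('ReliableWseeSAFAgent_auto_N')
      safAgent.setStore(getMBean('/FileStores/Wsee_rpc_JMSFileStore_N'))
      safAgent.addTarget(getMBean('/Servers/WLS_OSB3'))
      save()
      activate()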

  14. Disable host name verification for the new managed server. Before starting and verifying the WLS_OSBn managed server, disable host name verification. You can re-enable it after you have configured server certificates for the communication between the Oracle WebLogic Administration Server and the Node Manager in SOAHOSTn. You can skip these steps if you already disabled host name verification for the source server from which the new server was cloned (the host name verification setting is propagated to the cloned server).

    To disable host name verification:

    1. In the Oracle Enterprise Manager Console, select Oracle WebLogic Server Administration Console.

    2. Expand the Environment node in the Domain Structure window and click Servers.

      The Summary of Servers page appears.

    3. Select WLS_OSBn in the Names column of the table.

      The Settings page for the server appears.

    4. Click the SSL tab and click Advanced.

    5. Set Hostname Verification to None and click Save.

  15. If it is not already started, start the Node Manager on the node. To start the Node Manager, use the installation in shared storage from the existing nodes as follows:

    SOAHOSTN> WL_HOME/server/bin/startNodeManager.sh
    
  16. Start and test the new managed server from the Administration Console.

    1. Shut down the existing managed servers in the cluster.

    2. Ensure that the newly created managed server, WLS_OSBn, is up.

    3. Access the application on the newly created managed server using the following URL:

      http://vip:port/sbinspection.wsil
      
  17. Configure Server Migration for the new managed server.

    Note:

    Since this is a scale-up operation, the node should already contain a Node Manager and environment configured for server migration. The floating IP for the new Oracle Service Bus managed server should already be present.

    To configure server migration:

    1. Log into the Administration Console.

    2. In the left pane, expand Environment and select Servers.

    3. Select the name of the new managed server for which you want to configure migration.

    4. Click the Migration tab.

    5. In the Available field, in the Migration Configuration section, select the machines to which migration is allowed and click the right arrow.

    6. Select the same migration targets used for the servers that already exist on the node.

      For example, for new managed servers on SOAHOST1, which is already running WLS_OSB1, select SOAHOST2. For new managed servers on SOAHOST2, which is already running WLS_OSB2, select SOAHOST1.

      Make sure the appropriate resources are available to run the managed servers concurrently during migration.

    7. Select the Automatic Server Migration Enabled option and click Save.

      This enables the Node Manager to start a failed server on the target node automatically.

    8. If the new server's listen address does not fall into the range defined in nodemanager.properties, update it accordingly:

      bond0=SOAHOST1-PRIV-V1-IP-SOAHOST1-PRIV-Vn-IP,NetMask=255.255.248.0
      

      For more information, see Section 13.4, "Editing Node Manager's Properties File."

    9. Restart the Administration Server, managed servers, and Node Manager.

  18. Test server migration for this new server from the node where you added the new server:

    1. Stop the WLS_OSBn managed server by running the following command on the PID (process ID) of the managed server:

      kill -9 pid
      

      You can identify the PID of the WLS_OSBn managed server using the following command:

      ps -ef | grep WLS_OSBn
      

      Note:

      For Windows, you can terminate the Managed Server using the taskkill command. For example:

      taskkill /f /pid pid
      

      Where pid is the process ID of the Managed Server.

      To determine the process ID of the WLS_OSBn Managed Server, run the following command:

      MW_HOME\jrockit_160_20_D1.0.1-2124\bin\jps -l -v
      
    2. In the Node Manager console, a message appears indicating that WLS_OSBn's floating IP has been disabled.

    3. Wait for the Node Manager to try a second restart of WLS_OSBn.

      Node Manager waits for a fence period of 30 seconds before trying this restart.

    4. Once Node Manager restarts the server, stop it again.

      Node Manager logs a message indicating that the server will not be restarted again locally.

      Note:

      After a server is migrated, to fail it back to its original node, stop the managed server from the Oracle WebLogic Administration Console and then start it again. The appropriate Node Manager starts the managed server on the machine to which it was originally assigned.

14.6 Scaling Out the Topology (Adding Managed Servers to New Nodes)

When you scale out the topology, you add new managed servers configured with SOA, Oracle Service Bus, or WSM-PM to new nodes.

This section contains the following topics:

  • Section 14.6.1, "Prerequisites for Scaling Out the Topology"

  • Section 14.6.2, "Scale-out Procedure for Oracle SOA"

14.6.1 Prerequisites for Scaling Out the Topology

Before performing the steps in this section, check that you meet these requirements:

  • There must be existing nodes running managed servers configured with SOA and WSM-PM within the topology.

  • The new node can access the existing home directories for WebLogic Server and SOA. (Use the existing installations in shared storage for creating a new WLS_SOA or WLS_WSM managed server. You do not need to install WebLogic Server or SOA binaries in a new location but you do need to run pack and unpack to bootstrap the domain configuration in the new node.)

  • When an ORACLE_HOME or WL_HOME is shared by multiple servers in different nodes, keep the Oracle Inventory and Middleware home list in those nodes updated for consistency in the installations and application of patches. To update the oraInventory in a node and "attach" an installation in a shared storage to it, use the attachHome.sh script in the following location:

    ORACLE_HOME/oui/bin/
    

    To update the Middleware home list to add or remove a WL_HOME, edit the beahomelist file located in the following directory:

    MW_HOME/bea
    

14.6.2 Scale-out Procedure for Oracle SOA

To scale out the topology:

  1. Configure a TX persistent store for the new server in a location visible from the other nodes, according to the shared storage recommendations provided in this guide.

    1. From the Administration Console, select Server_name and then the Services tab.

    2. Under Default Store, in Directory, enter the path to the directory where the data files are stored:

      ASERVER_HOME/tlogs
      
  2. On the new node, mount the existing Fusion Middleware Home, and the rest of the private and shared mounts indicated in Chapter 4, "Configuring Storage for an Exalogic Enterprise Deployment."

  3. To attach ORACLE_HOME in shared storage to the local Oracle Inventory, execute the following command:

    SOAHOSTn> cd ORACLE_COMMON_HOME/oui/bin
    SOAHOSTn> ./attachHome.sh -jreLoc MSERVER_HOME/jrockit_160_<version>
    

    To update the Middleware home list, create (or edit, if another WebLogic installation exists in the node) the beahomelist file and add MW_HOME to it. The beahomelist file is located in the following directory:

    MW_HOME/bea
    
  4. Log in to the Oracle WebLogic Administration Console.

  5. Create a new machine for the new node that will be used, and add the machine to the domain.

  6. Update the machine's Node Manager address to map to the private IPoIB address of the node that is being added.

  7. Use the Oracle WebLogic Server Administration Console to clone WLS_SOA1/WLS_WSM1 into a new managed server. Name it WLS_SOAn/WLS_WSMn, where n is a number. Assign it to the new machine created above.

    Note:

    These steps assume that you are adding a new server to node n, where no managed server was running previously.

  8. Assign the host name or IP address to use for the new managed server's listen address.

    If you are planning to use server migration for this server (which Oracle recommends), this should be the virtual IP (also called a floating IP) of the server. This virtual IP should be different from the one used by the existing managed server, for example, SOAHOSTn-PRIV-V1.

  9. For WLS_WSM servers, run the Java Object Cache configuration utility again to include the new server in the JOC distributed cache as described in Section 8.17, "Configuring the Java Object Cache for Oracle WSM."

  10. Create JMS Servers for SOA and UMS on the new managed server.

    Note:

    You do not have to create JMS servers for SOA and UMS on the new managed server if you are scaling out the WLS_WSM managed server or the BAM Web Applications system. This procedure is required only if you are scaling out the WLS_SOA managed servers.

    Create the JMS servers for SOA and UMS as follows:

    1. Use the Oracle WebLogic Server Administration Console to create two new persistent stores named SOAJMSFileStore_n and PS6SOAJMSFileStore_auto_N for the new SOAJMSServer_n and PS6SOAJMSServer_auto_n JMS servers (which will be created in a later step). Specify the path recommended in Chapter 4, "Configuring Storage for an Exalogic Enterprise Deployment," as the directory for the JMS persistent stores:

      ASERVER_HOME/jms/
      
    2. Create two new JMS servers for SOA, named SOAJMSServer_n and PS6SOAJMSServer_auto_n. Use SOAJMSFileStore_n and PS6SOAJMSFileStore_auto_N for these JMS servers. Target the JMS servers to the recently created managed server (WLS_SOAn).

    3. Create a new persistent store for the new UMSJMSServer and name it, for example, UMSJMSFileStore_N. As the directory for the persistent store, specify the path recommended in Chapter 4, "Configuring Storage for an Exalogic Enterprise Deployment," for the JMS persistent stores:

      ASERVER_HOME/jms
      

      Note:

      It is also possible to assign SOAJMSFileStore_N as the store for the new UMS JMS servers. For the purpose of clarity and isolation, individual persistent stores are used in the following steps.

    4. Create a new JMS server for UMS: for example, UMSJMSServer_N. Use the UMSJMSFileStore_N for this JMS server. Target the UMSJMSServer_N Server to the recently created managed server (WLS_SOAn).

    5. For BPM Systems only: Create two new persistent stores named BPMJMSFileStore_n and AGJMSFileStore_auto_n for the new BPMJMSServer_N and AGJMSServer_auto_n JMS servers (which will be created in a later step). Specify the path recommended in Chapter 4, "Configuring Storage for an Exalogic Enterprise Deployment," as the directory for the JMS persistent stores:

      ASERVER_HOME/jms

    6. For BPM systems only: Create two new JMS Servers for BPM named BPMJMSServer_N and AGJMSServer_auto_n. Use the BPMJMSFileStore_N and AGJMSFileStore_auto_n for these JMS Servers. Target these servers to the recently created managed server (WLS_SOAn).

    7. Update the SubDeployment targets for the SOA, UMS, and BPM JMS modules (if applicable) to include the recently created JMS servers. To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click the JMS module (for SOA: SOAJMSModule, for BPM: BPMJMSModule, and for UMS: UMSJMSSystemResource), represented as a hyperlink in the Names column of the table. The Settings page for the module appears. Open the SubDeployments tab. The subdeployment for the deployment module appears.

      Note:

      This subdeployment module name is a random name in the form of SOAJMSServerXXXXXX, UMSJMSServerXXXXXX, or BPMJMSServerXXXXXX, resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).

      Click on it. Add the new JMS Server (for UMS add UMSJMSServer_N, for SOA add SOAJMSServer_N, for BPM add BPMJMSServer_N). Click Save and Activate.

    8. Target the UMSJMSSystemResource to the SOA_Cluster, as it may have changed during extend operations. To do this, expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears. Click UMSJMSSystemResource and open the Targets tab. Make sure all of the servers in the SOA_Cluster appear selected (including the recently cloned WLS_SOAn).


  11. Configure Node Manager directory and properties for the new node as indicated in Section 8.5.2, "Configuring and Starting Node Manager on SOAHOST1 and SOAHOST2."

  12. Run the pack command on SOAHOST1 to create a template pack as follows:

    cd ORACLE_COMMON_HOME/common/bin
    
    ./pack.sh -managed=true -domain=ASERVER_HOME
    -template=soadomaintemplateScale.jar -template_name=soa_domain_templateScale
    

    Run the unpack command on SOAHOSTn to unpack the template in the managed server domain directory as follows:

    SOAHOSTN> cd ORACLE_COMMON_HOME/common/bin
    
    SOAHOSTN> ./unpack.sh -domain=MSERVER_HOME
    -template=soadomaintemplateScale.jar
    -app_dir=APP_DIR
    

    Note:

    The configuration steps provided in this enterprise deployment topology are documented with the assumption that a private (per node) domain directory is used for each managed server.

  13. Configure Oracle Coherence for deploying composites for the new server as described in Section 9.4, "Configuring Oracle Coherence for Deploying Composites."

    Note:

    Only the localhost field needs to be changed for the server. Replace the localhost with the listen address of the new server added:

    -Dtangosol.coherence.localhost=SOAHOSTn-PRIV-V1
    
  14. Reconfigure the JMS Adapter with the new server using the FactoryProperties field in the Administration Console. Click on the corresponding cell under the Property value and enter the following:

    java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory;java.naming.provider.url=t3://SOAHOST1-PRIV-V1:8001,SOAHOST2-PRIV-V1:8001,SOAHOSTn-PRIV-V1:8001;java.naming.security.principal=weblogic;java.naming.security.credentials=weblogic1
    

    Click Save and Activate.

  15. Configure the persistent store for the new server. This should be a location visible from other nodes as recommended in Chapter 4, "Configuring Storage for an Exalogic Enterprise Deployment."

    From the Administration Console, select the Server_name, and then the Services tab. Under Default Store, in Directory, enter the path to the folder where you want the default persistent store to store its data files.

  16. Update the cluster address to include the new server:

    1. In the Administration Console, select Environment, and then Cluster.

    2. Click the SOA_Cluster server.

    3. Click Lock & Edit.

    4. Add the new server's address and port to the Cluster address field. For example:

      SOAHOST1-PRIV-V1:8001,SOAHOST2-PRIV-V1:8001,SOAHOSTn-PRIV-V1:8001
      
    5. Save and Activate the changes.

  17. Disable host name verification for the new managed server. Before starting and verifying the WLS_SOAn managed server, you must disable host name verification. You can re-enable it after you have configured server certificates for the communication between the Oracle WebLogic Administration Server and the Node Manager in SOAHOSTn.

    If the source server from which the new one was cloned already had host name verification disabled, these steps are not required (the host name verification setting is propagated to the cloned server). To disable host name verification:

    1. In the Oracle Fusion Middleware Enterprise Manager Console, select Oracle WebLogic Server Administration Console.

    2. Expand the Environment node in the Domain Structure window.

    3. Click Servers.

      The Summary of Servers page appears.

    4. Select WLS_SOAn in the Names column of the table.

      The Settings page for the server appears.

    5. Click the SSL tab.

    6. Click Advanced.

    7. Set Hostname Verification to None.

    8. Click Save.

  18. Create the appropriate HTTP, T3 and Replication channels for the new server. For details, see Section 9.6, "Configuring Network Channels for HTTP and T3 Clients Through EoIB," and Section 9.11, "Enabling Cluster-Level Session Replication Enhancements."

  19. Add the new server's listen address to the origin-server-pool-1 in Oracle Traffic Director. For details, see Section 7.7, "Defining Oracle Traffic Director Virtual Servers for an Exalogic Enterprise Deployment."

  20. Verify that the JMS Adapter FactoryProperties value that you entered in step 14 lists every server in the cluster, including the new one. For example:

    java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory;java.naming.provider.url=t3://SOAHOST1-PRIV-V1:8001,SOAHOST2-PRIV-V1:8001,SOAHOSTn-PRIV-V1:8001;java.naming.security.principal=weblogic;java.naming.security.credentials=weblogic
    

    Click Save and Activate if you made any changes.

  21. Start Node Manager on the new node. To start Node Manager, use startNodeManager.sh located in the following directory:

    SOAHOSTn>  /u02/private/oracle/config/nodemanager/startNodeManager.sh 
    
  22. Start and test the new managed server from the Oracle WebLogic Server Administration Console.

    1. Ensure that the newly created managed server, WLS_SOAn, is running.

    2. Access the application from within the Exalogic rack using the following URL:

      http://SOAHOSTn-PRIV-V1:8001/soa-infra/
      

      The application should be functional.

  23. Configure server migration for the new managed server.

    Log into the Oracle WebLogic Server Administration Console and configure server migration.

    To configure server migration:

    1. Expand the Environment node in the Domain Structure window and then choose Servers. The Summary of Servers page appears.

    2. Select the server (represented as a hyperlink) for which you want to configure migration from the Names column of the table. The Settings page for that server appears.

    3. Click the Migration tab.

    4. In the Available field of the Migration Configuration section, click the right arrow to select the machines to which to allow migration.

      Note:

      Specify the least-loaded machine as the migration target for the new server. The required capacity planning must be completed so that this node has enough available resources to sustain an additional managed server.

    5. Select Automatic Server Migration Enabled. This enables Node Manager to start a failed server on the target node automatically.

    6. Click Save.

    7. Restart the Administration Server, managed servers, and the Node Manager.

      To restart the Administration Server, use the procedure in Section 8.5.3, "Starting the Administration Server on SOAHOST1."

  24. Update the cluster address to include the new server:

    1. In the Administration Console, select Environment, and then Cluster.

    2. Click SOA_Cluster.

      The Settings screen for the SOA_Cluster appears.

    3. Click Lock & Edit.

    4. Add the new server's address and port to the Cluster address field. For example:

      SOAHOST1-PRIV-V1:8001,SOAHOST2-PRIV-V1:8001,SOAHOSTn-PRIV-V1:8001
      
    5. Save and activate the changes.

  25. Test server migration for this new server from the node where you added the new server:

    1. Abruptly stop the WLS_SOAn managed server by running the following command:

      kill -9 pid
      

      You can identify the PID (process ID) of the node using the following command:

      ps -ef | grep WLS_SOAn
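
      If you prefer a single command, the following sketch combines both operations (it assumes exactly one WLS_SOAn process is running on the node):

      kill -9 $(ps -ef | grep WLS_SOAn | grep -v grep | awk '{print $2}')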
      
    2. In the Node Manager Console you should see a message indicating that WLS_SOAn's floating IP has been disabled.

    3. Wait for the Node Manager to try a second restart of WLS_SOAn. Node Manager waits for a fence period of 30 seconds before trying this restart.

    4. Once Node Manager restarts the server, stop it again. Now Node Manager should log a message indicating that the server will not be restarted again locally.

14.6.3 Scale-out Procedure for Oracle Service Bus

When you scale out the topology, you add new managed servers configured with Oracle Service Bus to the new nodes.

Prerequisites

Before scaling out the Oracle Service Bus topology, make sure you meet these prerequisites:

  • There must be existing nodes running managed servers configured with Oracle Service Bus within the topology.

  • Optionally, the new node can access the existing home directories for the WebLogic Server and Oracle Service Bus installations. Use the existing installations in shared storage for creating the new WLS_OSB managed server. In this case, you do not need to install the WebLogic Server or Oracle Service Bus binaries in a new location, but you do need to run the pack and unpack commands to bootstrap the domain configuration in the new node, unless you are scaling the Oracle Service Bus server to machines that already contain other servers of the same domain (the SOA servers).

  • When multiple servers in different nodes share an ORACLE_HOME or WL_HOME, keep the Oracle Inventory and Middleware home list in those nodes updated for consistency in the installations and application of patches. To update the oraInventory in a node and attach an installation in a shared storage to it, use the attachHome.sh file located in the following directory:

    ORACLE_HOME/oui/bin/
    

    To update the Middleware home list to add or remove a WL_HOME, edit the beahomelist file located in the following directory:

    MW_HOME/bea
    

To scale out the topology:

  1. Configure a TX persistent store for the new server in a location visible from the other nodes, according to the shared storage recommendations provided in this guide.

    1. From the Administration Console, select Server_name and then the Services tab.

    2. Under Default Store, in Directory, enter the path to the directory where the data files are stored:

      ASERVER_HOME/tlogs
      
  2. On the new node, mount the existing Fusion Middleware Home, and the rest of the private and shared mounts as described in Chapter 4, "Configuring Storage for an Exalogic Enterprise Deployment."

  3. Attach ORACLE_HOME in shared storage to the private Oracle Inventory using the following command:

    SOAHOSTn>cd MW_HOME/soa/
    SOAHOSTn>./attachHome.sh -jreLoc MW_HOME/jrockit_160_<version>
    

    To update the Middleware home list, create (or edit, if another WebLogic installation exists in the node) the beahomelist file located in the following directory:

    MW_HOME/bea/
    

    Add MW_HOME to the list.
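
    For example, assuming the MW_HOME environment variable points to the shared Middleware home (a sketch; adjust the path to your installation):

    echo "$MW_HOME" >> "$MW_HOME/bea/beahomelist"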

  4. Log in to the Oracle WebLogic Administration Console.

  5. Create a new machine for the new node that will be used, and add the machine to the domain.

  6. Update the Node Manager address of the new machine so that it maps to the private IPoIB address of the node that is being used for scale-out.

  7. Use the Oracle WebLogic Server Administration Console to clone WLS_OSB1 into a new managed server. Name it WLS_OSBn, where n is a number, and assign it to the new machine.

    Note:

    For these steps, you are adding a new server to node n, where no managed server was running previously.

  8. For the listen address, assign the virtual host name to use for this new managed server. If you plan to use server migration for this server (as recommended), this virtual host name allows the server to move to another node. The virtual host name must be different from those used by other managed servers (whether in the same or a different domain) that run on the nodes used by the OSB/SOA domain.

    1. Log into the Oracle WebLogic Server Administration Console.

    2. In the Change Center, click Lock & Edit.

    3. Expand the Environment node in the Domain Structure window.

    4. Click Servers.

      The Summary of Servers page appears.

    5. In the Names column of the table, select the managed server whose listen address you want to update.

      The Settings page for that managed server appears.

    6. Set the Listen Address to SOAHOSTn-PRIV-V1 and click Save.

    7. Save and activate the changes.

    8. Restart the managed server.

  9. Update the cluster address to include the new server:

    1. Select Environment, and then Cluster from the Administration Console.

    2. Click OSB_Cluster.

      The Settings Screen for the OSB_Cluster appears.

    3. In the Change Center, click Lock & Edit.

    4. Add the new server's address and port to the Cluster Address field. For example:

      SOAHOST1-PRIV-1:8011,SOAHOST2-PRIV-1:8011,SOAHOSTn-PRIV-1:8011
      
  10. Create the appropriate HTTP and T3 channels for the new server.

    For more information, see Section 9.6, "Configuring Network Channels for HTTP and T3 Clients Through EoIB."

  11. Add the new server's listen address to the Oracle Traffic Director osb-pool.

    For more information, see Section 7.7, "Defining Oracle Traffic Director Virtual Servers for an Exalogic Enterprise Deployment."

  12. Create JMS servers and persistent stores for Oracle Service Bus reporting/internal destinations on the new managed server.

    1. Use the Oracle WebLogic Server Administration Console to create a new persistent store for the new Oracle Service Bus JMS server and name it, for example, OSB_rep_JMSFileStore_N. Specify the path for the store. This should be a directory on shared storage, as recommended in Chapter 4, "Configuring Storage for an Exalogic Enterprise Deployment."

      Note:

      This directory must exist before the managed server is started or the start operation fails.

      ASERVER_HOME/jms/OSB_rep_JMSFileStore_N
      
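      For example, the following sketch creates the store directory ahead of time (replace ASERVER_HOME and N with the actual values for your environment):

      mkdir -p ASERVER_HOME/jms/OSB_rep_JMSFileStore_N
      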
    2. Create a new JMS Server for Oracle Service Bus, for example, OSB_rep_JMSServer_N. Use the OSB_rep_JMSFileStore_N for this JMSServer. Target the OSB_rep_JMSServer_N Server to the recently created managed server (WLS_OSBn).

    3. Update the SubDeployment targets for the jmsresources Oracle Service Bus JMS Module to include the recently created Oracle Service Bus JMS Server:

      Expand the Services node and then expand the Messaging node.

      Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears.

      Click jmsresources (a hyperlink in the Names column of the table). The Settings page for jmsResources appears.

      Open the SubDeployments tab. The subdeployment module for jmsresources appears.

      Note:

      This subdeployment module name is a random name in the form of wlsbJMSServerXXXXXX resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_OSB1 and WLS_OSB2).

      Click the wlsbJMSServerXXXXXX subdeployment and update the targets to include the new OSB_rep_JMSServer_N server.

  13. Create JMS Servers, persistent stores and destinations for OSB JAX-RPC on the new managed server.

    Note:

    WebLogic Advanced Web Services for JAX-RPC Extension uses regular (non-distributed) destinations to ensure that a locally processed request on a service gets enqueued only to a local member.

    1. Use the Oracle WebLogic Server Administration Console to create a new persistent store for the new WseeJMSServer and name it, for example, Wsee_rpc_JMSFileStore_N. Specify the path for the store. This should be a directory on shared storage as recommended in Chapter 4, "Configuring Storage for an Exalogic Enterprise Deployment."

      Note:

      This directory must exist before the managed server is started or the start operation fails.

      ASERVER_HOME/jms/Wsee_rpc_JMSFileStore_N
      
    2. Create a new JMS Server for Oracle Service Bus JAX-RPC, for example, OSB_rpc_JMSServer_N. Use the Wsee_rpc_JMSFileStore_N for this JMSServer. Target the OSB_rpc_JMSServer_N Server to the recently created Managed Server (WLS_OSBn).

    3. Update the WseeJMSModule Oracle Service Bus JMS Module with destinations and the recently created Oracle Service Bus JMS Server:

      Expand the Services node and then expand the Messaging node. Choose JMS Modules from the Domain Structure window of the Oracle WebLogic Server Administration Console. The JMS Modules page appears.

      Click WseeJmsModule (a hyperlink in the Names column of the table). The Settings page for WseeJmsModule appears.

      Follow the remaining substeps to complete this step.

    4. In the Change Center, click Lock & Edit and click New.

    5. Select Queue and click Next.

    6. Enter DefaultCallbackQueue-WseeJmsServer_auto_n as name for the queue.

    7. Enter weblogic.wsee.DefaultCallbackQueue-WseeJmsServer_auto_n as the JNDI name and click Next.

    8. Click Create a New Subdeployment.

    9. Accept the default name and click OK.

    10. Select OSB_rpc_JMSServer_n as the target and click Finish.

    11. Activate the changes.

    12. Update the local JNDI name for the destination:

      In the Change Center, click Lock & Edit.

      In the Settings page for WseeJmsModule, click the DefaultCallbackQueue-WseeJmsServer_auto_n destination.

      In the general Configuration tab, click Advanced.

      Update the local JNDI name to weblogic.wsee.DefaultCallbackQueue.

  14. Create a new SAF agent and target it to the newly added managed server:

    In the Oracle WebLogic Server Administration Console, expand Services, Messaging and then Store-and-Forward Agents, and add a new SAF agent, ReliableWseeSAFAgent_auto_N.

    Select persistent store Wsee_rpc_JMSFileStore_N (persistent store created for Oracle Service Bus JAX-RPC). Target the SAF Agent to the new managed server and activate changes.

  15. Create the appropriate HTTP and T3 channels for the new server.

    For more information, see Section 9.6, "Configuring Network Channels for HTTP and T3 Clients Through EoIB."

  16. Add the new server's listen address to the osb-pool in Oracle Traffic Director.

    For more information, see Section 7.7, "Defining Oracle Traffic Director Virtual Servers for an Exalogic Enterprise Deployment."

  17. If your Oracle Service Bus configuration includes one or more business services that use JMS request/response functionality, follow this procedure using the Oracle Service Bus Console after adding the new managed server to the cluster:

    1. In the Change Center, click Create to create a session.

    2. Using the Project Explorer, locate and select a business service that uses JMS request/response.

      Business services of this type display Messaging Service as their Service Type.

    3. At the bottom of the View Details page, click Edit.

    4. If there is a cluster address in the endpoint URI, add the new server to the cluster address.

    5. On the Edit a Business Service - Summary page, click Save.

    6. Repeat the previous steps for each remaining business service that uses JMS request/response.

    7. In the Change Center, click Activate.

    8. Restart the managed server.

    9. Restart the Administration Server.

      The business services are now configured for operation in the extended domain.

      Note:

      For business services that use a JMS MessageID correlation scheme, edit the connection factory settings to add an entry to the table mapping managed servers to queues. For information on how to configure queues and topic destinations, see "JMS Server Targeting" in Oracle Fusion Middleware Configuring and Managing JMS for Oracle WebLogic Server.

  18. If your Oracle Service Bus configuration includes one or more proxy services that use JMS endpoints with cluster addresses, perform the following procedure using the Oracle Service Bus Console after adding the new managed server to the cluster:

    1. In the Change Center, click Create to create a session.

    2. Using the Project Explorer, locate and select a proxy service that uses JMS endpoints with cluster addresses.

    3. At the bottom of the View Details page, click Edit.

    4. If there is a cluster address in the endpoint URI, add the new server to the cluster address.

    5. On the Edit a Proxy Service - Summary page, click Save.

    6. Repeat the previous steps for each remaining proxy service that uses JMS endpoints with cluster addresses.

    7. In the Change Center, click Activate.

    8. Restart the managed server.

    The proxy services are now configured for operation in the extended domain.

  19. Update the Oracle Service Bus result cache Coherence configuration for the new server:

    1. Log into Oracle WebLogic Server Administration Console. In the Change Center, click Lock & Edit.

    2. In the Domain Structure window, expand the Environment node.

    3. Click Servers.

      The Summary of Servers page appears.

    4. Click the name of the server (a hyperlink) in the Name column of the table.

      The settings page for the selected server appears.

    5. Click the Server Start tab.

    6. Click Advanced.

    7. Enter the following for WLS_OSBn (on a single line, without carriage returns):

      -DOSB.coherence.localhost=SOAHOSTn-PRIV-1 -DOSB.coherence.localport=7890 
      -DOSB.coherence.wka1=SOAHOST1-PRIV-1 -DOSB.coherence.wka1.port=7890 
      -DOSB.coherence.wka2=SOAHOST2-PRIV-1 -DOSB.coherence.wka2.port=7890
      

      Note:

      For the previous configuration, servers WLS_OSB1 and WLS_OSB2 must be running when WLS_OSBn starts. This allows WLS_OSBn to join the Coherence cluster started by either WLS_OSB1 or WLS_OSB2 using the WKA addresses specified. In addition, when starting all three servers, make sure WLS_OSB1 and WLS_OSB2 are started before WLS_OSBn, so that WLS_OSBn joins the cluster started by one of them. For a configuration where the start order does not matter, add the host and port of WLS_OSBn as an additional WKA for WLS_OSB1, WLS_OSB2, and for WLS_OSBn itself, as shown in the sketch after this note.
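
      The following sketch shows the order-independent variant described above: each Oracle Service Bus server lists all three members as WKAs (host names and ports as configured in this step; this is an illustration, not a replacement for the arguments shown above):

      -DOSB.coherence.wka1=SOAHOST1-PRIV-1 -DOSB.coherence.wka1.port=7890
      -DOSB.coherence.wka2=SOAHOST2-PRIV-1 -DOSB.coherence.wka2.port=7890
      -DOSB.coherence.wka3=SOAHOSTn-PRIV-1 -DOSB.coherence.wka3.port=7890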

    8. Save and activate the changes.

      Restart the Oracle Service Bus servers.

  20. Reconfigure the JMS Adapter with the new server using the FactoryProperties field in the Administration Console. Click on the corresponding cell under the Property value and enter the following:

    java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory;java.naming.provider.url=t3://SOAHOST1-PRIV-1:8001,SOAHOST2-PRIV-1:8001;java.naming.security.principal=weblogic;java.naming.security.credentials=weblogic1
    

    Click Save and Activate.

  21. Configure the Node Manager directory and properties for the new node as indicated in Section 8.5.2, "Configuring and Starting Node Manager on SOAHOST1 and SOAHOST2."

  22. Run the pack command on SOAHOST1 to create a template pack as follows:

    cd ORACLE_COMMON_HOME/common/bin
    
    ./pack.sh -managed=true -domain=MW_HOME/user_projects/domains/soadomain/
    -template=soadomaintemplateScale.jar -template_name=soa_domain_templateScale
    

    Run the unpack command on SOAHOSTn to unpack the template in the managed server domain directory as follows:

    cd MW_HOME/soa/common/bin
     
    ./unpack.sh -domain=MSERVER_HOME/ -template=soadomaintemplateScale.jar
    
  23. Configure a TX persistent store for the new server in a location visible from other nodes, as indicated in the recommendations about shared storage.

    1. From the Administration Console, select the server name, and then the Services tab.

    2. Under Default Store, in Directory, enter the path to the folder where you want the default persistent store to store its data files.

  24. Disable host name verification for the new managed server.

    Before starting and verifying the WLS_OSBn managed server, disable host name verification. You can re-enable it after you have configured server certificates for the communication between the Oracle WebLogic Administration Server and the Node Manager in SOAHOSTn. If you have already disabled host name verification for the source server from which the new server has been cloned, you can skip this procedure (the hostname verification setting is propagated to the cloned server).

    To disable host name verification:

    1. In the Oracle Enterprise Manager Console, select Oracle WebLogic Server Administration Console.

    2. Expand the Environment node in the Domain Structure window.

    3. Click Servers.

      The Summary of Servers page appears.

    4. Select WLS_OSBn in the Names column of the table.

      The Settings page for the server appears.

    5. Click the SSL tab.

    6. Click Advanced.

    7. Set Hostname Verification to None and click Save.

  25. Start the Node Manager on the new node using the installation in shared storage from the existing nodes. Pass the host name of the new node as a parameter:

    SOAHOSTn> WL_HOME/server/bin/startNodeManager.sh new_node_ip
    
  26. Start and test the new managed server from the Oracle WebLogic Server Administration Console:

    1. Shut down all the existing managed servers in the cluster.

    2. Ensure that the newly created managed server, WLS_OSBn, is running. Access the application on the newly created managed server:

      http://vip:port/sbinspection.wsil
      

      The application should be functional.

  27. Configure server migration for the new managed server.

    Note:

    In the previous steps you already created a private directory for Node Manager on this node. Update the Node Manager properties as indicated in Chapter 13, "Configuring Server Migration for an Exalogic Enterprise Deployment," taking into account the new server's listen address and channels.

    To configure server migration:

    1. Log into the Administration Console.

    2. In the left pane, expand Environment and select Servers.

    3. Select the server (represented as hyperlink) for which you want to configure migration from the Names column of the table.

      The Settings page for that server appears.

    4. Click the Migration tab.

    5. In the Available field of the Migration Configuration section, select the machines to which the server can be migrated and click the right arrow.

      For example, for new managed servers on SOAHOST1, which is already running WLS_OSB1, select SOAHOST2. For new managed servers on SOAHOST2, which is already running WLS_OSB2, select SOAHOST1.

      Note:

      Specify the least-loaded machine as the migration target for the new server. Complete the required capacity planning so that this node has enough available resources to sustain an additional managed server.

    6. Select the Automatic Server Migration Enabled option and click Save.

      This enables the Node Manager to start a failed server on the target node automatically.

    7. Restart the Administration Server, managed servers, and Node Manager.

  28. Test server migration for this new server from the node where you added the new server:

    1. Abruptly stop the WLS_OSBn managed server by running the following command on the PID (process ID) of the managed server:

      kill -9 pid
      

      You can identify the PID of the node using the following command:

      ps -ef | grep WLS_OSBn
      

      Note:

      For Windows, you can terminate the managed server using the taskkill command. For example:

      taskkill /f /pid pid
      

      where pid is the process ID of the managed server.

      You can determine the process ID of the WLS_OSBn managed server using the following command:

      MW_HOME\jrockit_160_20_D1.0.1-2124\bin\jps -l -v
      
    2. In the Node Manager Console you can view a message indicating that WLS_OSBn's floating IP has been disabled.

    3. Wait for the Node Manager to try a second restart of WLS_OSBn. Node Manager waits for a fence period of 30 seconds before trying this restart.

    4. Once Node Manager restarts the server, stop it again.

      Now Node Manager logs a message indicating that the server will not be restarted again locally.

      Note:

      After a server is migrated, to fail it back to its original node/machine, stop the managed server from the Oracle WebLogic Administration Console and then start it again. The appropriate Node Manager will start the managed server on the machine to which it was originally assigned.

14.7 Verifying Manual Failover of the Administration Server

In case a node fails, you can fail over the Administration Server to another node. The following sections provide the steps to verify failover of the Administration Server from SOAHOST1 to SOAHOST2, and failback to SOAHOST1.

Assumptions:

  • The Administration Server is configured to listen on ADMINVHN, and not on ANY address. See step 14 in Section 8.4, "Running the Configuration Wizard on SOAHOST1 to Create a Domain."

  • These procedures assume that the two nodes use two individual domain directories, and that the directories reside in private storage or in shared storage in different volumes.

  • The Administration Server is failed over from SOAHOST1 to SOAHOST2, and the two nodes have these IPs:

    • SOAHOST1: 100.200.140.165

    • SOAHOST2: 100.200.140.205

    • ADMINVHN: 100.200.140.206. This is the Virtual IP where the Administration Server is running, assigned to ethX:Y, available in SOAHOST1 and SOAHOST2.

  • The domain directory where the Administration Server is running in SOAHOST1 is on a shared storage and is mounted also from SOAHOST2.

  • Oracle WebLogic Server and Oracle Fusion Middleware components have been installed in SOAHOST2 as described in Section 8.2, "Installing Oracle Fusion Middleware." (that is, the same paths for ORACLE_HOME and MW_HOME that exist on SOAHOST1 are also available on SOAHOST2).

This section contains the following topics:

  • Section 14.7.1, "Failing Over the Administration Server to a Different Node"

  • Section 14.7.2, "Validating Access to SOAHOST2"

  • Section 14.7.3, "Failing the Administration Server Back to SOAHOST1"

14.7.1 Failing Over the Administration Server to a Different Node

The following procedure shows how to fail over the Administration Server to a different node (SOAHOST2), but the Administration Server will still use the same WebLogic Server machine (which is a logical machine, not a physical machine).

To fail over the Administration Server to a different node:

  1. Stop the Administration Server.

  2. Migrate IP to the second node.

    1. Run the following command as root on SOAHOST1 (where bond1:Y is the current interface used by ADMINVHN):

      /sbin/ifconfig bond1:Y down
      
    2. Run the following command on SOAHOST2:

      /sbin/ifconfig <interface:index> IP_Address netmask <netmask>
      

      For example:

      /sbin/ifconfig bond1:1 10.0.0.1 netmask 255.255.255.0
      

      Note:

      Ensure that the netmask and interface to be used match the available network configuration in SOAHOST2.

  3. Update routing tables through arping, for example:

    /sbin/arping -q -U -c 3 -I bond1 10.0.0.1
    
  4. Start the Administration Server on SOAHOST2 using the procedure in Section 8.5.3, "Starting the Administration Server on SOAHOST1."

  5. Test that you can access the Administration Server on SOAHOST2 as follows:

    1. Ensure that you can access the Oracle WebLogic Server Administration Console using the following URL:

      http://ADMINVHN:7001/console
      
    2. Check that you can access and verify the status of components in the Oracle Enterprise Manager using the following URL:

      http://ADMINVHN:7001/em
      

      Note:

      The Administration Server does not use Node Manager for failing over. After a manual failover, the machine name that appears in the Current Machine field in the Administration Console for the server is SOAHOST1, and not the failover machine, SOAHOST2. Since Node Manager does not monitor the Administration Server, the machine name that appears in the Current Machine field is not relevant, and you can ignore it.

14.7.2 Validating Access to SOAHOST2

Perform the same steps as in Section 8.11, "Validating the Administration Server Configuration," to check that you can access the Administration Server when it is running on SOAHOST2.

14.7.3 Failing the Administration Server Back to SOAHOST1

This procedure checks that you can fail back the Administration Server, that is, stop it on SOAHOST2 and run it on SOAHOST1 by migrating ADMINVHN back to the SOAHOST1 node.

To migrate ADMINVHN back to SOAHOST1:

  1. Make sure the Administration Server is not running.

  2. Run the following command on SOAHOST2:

    /sbin/ifconfig bond1:N down
    
  3. Run the following command on SOAHOST1:

    /sbin/ifconfig bond1:Y 100.200.140.206 netmask 255.255.255.0
    

    Note:

    Ensure that the netmask and interface to be used match the available network configuration in SOAHOST1.

  4. Update routing tables through arping. Run the following command from SOAHOST1.

    /sbin/arping -q -U -c 3 -I bond1 100.200.140.206
    
  5. Start the Administration Server again on SOAHOST1 using the procedure in Section 8.5.3, "Starting the Administration Server on SOAHOST1."

    cd ASERVER_HOME/bin
    ./startWebLogic.sh
     
    
  6. Test that you can access the Oracle WebLogic Server Administration Console using the following URL:

    http://ADMINVHN:7001/console
    
  7. Check that you can access and verify the status of components in the Oracle Enterprise Manager using the following URL:

    http://ADMINVHN:7001/em
    

14.8 Backing Up the Oracle SOA Enterprise Deployment

Back up the topology before and after any configuration changes.

14.8.1 Backing Up the Database

Perform a full database backup (either hot or cold), using Oracle Recovery Manager (recommended) or operating system tools such as tar (for cold backups only).
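
For example, a minimal hot backup sketch using Oracle Recovery Manager (this assumes the database runs in ARCHIVELOG mode and that RMAN is already configured; adapt the connect string to your environment):

rman target / <<EOF
BACKUP DATABASE PLUS ARCHIVELOG;
EOF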

14.8.2 Backing Up the Administration Server Domain Directory

Back up the Administration Server domain directory to save your domain configuration. The configuration files are located in the following directory:

ASERVER_HOME

To back up the Administration Server, run the following command on SOAHOST1:

tar -cvpf edgdomainback.tar ASERVER_HOME

14.8.3 Backing Up the Web Tier

Back up the Web tier. The configuration files are located in the following directory:

WEB_ORACLE_ADMININSTANCE

To back up the Oracle Traffic Director Administration Server, run the following command on WEBHOST1:

tar -cvpf webasback.tar WEB_ORACLE_ADMININSTANCE

14.8.4 Backing Up the Middleware Home

If a new installation has modified the MW_HOME, back it up using the following command:

tar -cvpf mw_home.tar MW_HOME

14.9 Preventing Timeouts for SQLNet Connections

Most enterprise deployment topologies involve firewalls. Because database connections are made across firewalls, Oracle recommends that the firewall be configured so that database connections are not timed out. For Oracle Real Application Clusters (Oracle RAC), the database connections are made on the Oracle RAC virtual IPs and the database listener port. You must configure the firewall so that it does not time out such connections. If such a configuration is not possible, set the SQLNET.EXPIRE_TIME=n parameter in the sqlnet.ora file, located in the following directory:

ORACLE_HOME/network/admin

where n is the time in minutes. Set this value to less than the known timeout of the network device (that is, the firewall). For Oracle RAC, set this parameter in all of the Oracle home directories.
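
For example, to have the database probe idle connections every 10 minutes (a sketch; 10 is an assumed value that must be lower than your firewall's idle timeout), add the following line to sqlnet.ora:

SQLNET.EXPIRE_TIME=10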

14.10 Recovering Failed BPEL and Mediator Instances

This section describes how to check and recover failed instances in BPEL, Mediator and other service engines.

Note:

For the steps that require you to run SQL statements, you connect to the database as the soainfra schema.

  • To check for recoverable instances, run the following SQL statements in the database:

    -- Find recoverable activities
    SQL> select * from work_item where state = 1 and execution_type != 1;
    
    -- Find recoverable invoke messages
    SQL> select * from dlv_message where dlv_type = 1 and state = 0;
    
    -- Find recoverable callback messages
    SQL> select * from dlv_message where dlv_type = 2 and (state = 0 or state = 1);
    
  • To recover failed BPEL instances:

    In Enterprise Manager, select Farm_<domain_name>, expand SOA, right-click soa-infra (server_soa), then select Service Engine, then BPEL, and then Recovery.

  • To recover a failed Mediator composite:

    In Enterprise Manager, select Farm_<domain_name>, then expand SOA, then right-click on soa-infra (server_soa), then Service Engine, then select Mediator, and then Fault.

  • To check for rejected messages:

    SQL> select * from rejected_message;
    
  • To check data in the instance tracking table, run the following SQL query:

    SQL> select ID, STATE from COMPOSITE_INSTANCE where CREATED_TIME > datetime
    

    where datetime specifies the date and time to narrow the query. For example:

    '04-NOV-09 03.20.52.902000000 PM'

    The adapter enters data into the COMPOSITE_INSTANCE table before anywhere else: when the adapter publishes data to the Adapter Binding Component (BC), the BC inserts an entry into the COMPOSITE_INSTANCE table with a STATE of 0. After the message has been processed, the STATE becomes 1. In case of errors, the STATE is greater than or equal to 2.
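
    For example, combining the query with the sample timestamp above (a sketch; adjust the format mask to match your NLS settings):

    SQL> select ID, STATE from COMPOSITE_INSTANCE
         where CREATED_TIME > to_timestamp('04-NOV-09 03.20.52.902000000 PM', 'DD-MON-RR HH.MI.SS.FF PM');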

14.11 Configuring Web Services to Prevent Denial of Service and Recursive Node Attacks

Configure SCABindingProperties.xml and oracle-webservices.xml to protect Web services against denial of service and recursive node attacks.

Configuring SCABindingProperties.xml

To prevent denial of service attacks and recursive node attacks, set the envelope size and nesting limits in SCABindingProperties.xml as illustrated in Example 14-1.

Example 14-1 Configuring Envelope Size and Nesting Limits in SCABindingProperties.xml

<bindingType type="ws">
    <serviceBinding>
        <bindingProperty>
            <name>request-envelope-max-kilobytes</name>
            <type>xs:integer</type>
            <defaultValue>-1</defaultValue>
        </bindingProperty>
        <bindingProperty>
            <name>request-envelope-nest-level</name>
            <type>xs:integer</type>
            <defaultValue>-1</defaultValue>
        </bindingProperty>
    </serviceBinding>
</bindingType>

Configuring oracle-webservices.xml

For standalone Web services, configure the envelope size and nesting limits in oracle-webservices.xml. For example:

<request-envelope-limits kilobytes="4" nest-level="6" />

Note:

Setting the envelope and nesting limits to extremely high values, or not setting them at all, can leave Web services vulnerable to denial of service attacks.

14.12 Using Shared Storage for Deployment Plans and SOA Infrastructure Applications Updates

When redeploying a SOA infrastructure application or resource adapter within the SOA cluster, the deployment plan, along with the application bits, should be accessible to all servers in the cluster. SOA applications and resource adapters are installed using the nostage deployment mode. Because the Administration Server does not copy the archive files from their source location when the nostage deployment mode is selected, each server must be able to access the same deployment plan. Use the following location for the deployment plan and applications:

/u01/oracle/config/dp/soaedg_domain

This directory must be accessible from all nodes in the Enterprise Deployment topology, as recommended in Chapter 4, "Configuring Storage for an Exalogic Enterprise Deployment."
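
For example, create the shared location once from any node (a sketch; the path matches the recommendation above):

mkdir -p /u01/oracle/config/dp/soaedg_domain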

14.13 Using External BPEL Caches for Improved HA and Performance Isolation

This section describes how to use external BPEL caches to improve high availability (HA) and performance isolation.

This section contains the following topics:

  • Section 14.13.1, "Setting the Server's bpel.cache.localStorage Property"

  • Section 14.13.2, "Creating Cache Configuration Files and Start Scripts"

  • Section 14.13.3, "Starting BPEL Cache Instances"

14.13.1 Setting the Server's bpel.cache.localStorage Property

Set the server's bpel.cache.localStorage property to false.

To set the bpel.cache.localStorage property:

  1. Log into the Oracle WebLogic Server Administration Console using the following URL:

    http://ADMINVHN:7001/console
    
  2. In the Domain Structure window, expand the Environment node.

  3. Click Servers.

    The Summary of Servers page appears.

  4. Click the name of the server (WLS_SOA1 or WLS_SOA2, which are represented as hyperlinks) in the Name column of the table.

    The settings page for the selected server appears.

  5. Click Lock & Edit.

  6. Click the Server Start tab.

  7. In the Arguments field for WLS_SOA1 and WLS_SOA2, replace the following:

    -Dbpel.cache.localStorage=true
    

    With the following:

    -Dbpel.cache.localStorage=false
    
  8. Click Save and Activate Changes.

14.13.2 Creating Cache Configuration Files and Start Scripts

Create the appropriate cache configuration files and start scripts.

To create cache configuration files and start scripts:

  1. Create a caches directory on each node (SOAHOST1 and SOAHOST2).

    mkdir /u02/private/oracle/config/soaexa_domain/caches
    
  2. Create a bpelCacheEnv.sh file on each node in /u02/private/oracle/config/soaexa_domain/caches/. See the example in the notes section of the /u01/oracle/products/fmw/soa/bin/start-bpel-cache.sh script.

    The following example is for a cell that runs SOA server only. The servers can be customized with different resources if, for example, other servers or components are competing in the same box for memory.

    BPEL_UNICAST_WKA="SOAHOST1-PRIV-V1:8089;SOAHOST2-PRIV-V1:8089"
    MEMORY="3gb"
    INSTANCE_CACHE_SIZE="1024"
    INVOKE_MESSAGE_CACHE_SIZE="512"
    DELIVERY_MESSAGE_CACHE_SIZE="512" 
    DELIVERY_SUBSCRIPTION_CACHE_SIZE="256" 
    JAVA_HOME=/u01/oracle/products/fmw/jrockit_160_29_D1.2.0-10
    MW_HOME=/u01/oracle/products/fmw
    COHERENCE_LIB=$MW_HOME/oracle_common/modules/oracle.coherence/coherence.jar
    MW_ORA_HOME=$MW_HOME/soa
    
  3. Make a copy of start-bpel-cache.sh from MW_HOME/soa/bin to the caches directory on both nodes:

    cp MW_HOME/soa/bin/start-bpel-cache.sh  MSERVER_HOME/caches/
    

14.13.3 Starting BPEL Cache Instances

Start two cache instances by running the start-bpel-cache.sh script twice in the background.

To start the cache instances:

MSERVER_HOME/caches/start-bpel-cache.sh &
MSERVER_HOME/caches/start-bpel-cache.sh &

Note the PIDs reported so that you can identify the cache JVMs as running processes in the operating system.
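
For example, the following sketch starts both instances and records their PIDs (the log file names are illustrative):

cd MSERVER_HOME/caches
./start-bpel-cache.sh > cache1.out 2>&1 & echo "cache instance 1 PID: $!"
./start-bpel-cache.sh > cache2.out 2>&1 & echo "cache instance 2 PID: $!"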

14.14 Troubleshooting the Topology in an Enterprise Deployment

This section describes possible issues with the SOA enterprise deployment and suggested solutions.

This section covers the following topics:

  • Section 14.14.1, "Page Not Found When Accessing soa-infra Application Through Load Balancer"

  • Section 14.14.2, "Soa-infra Application Fails to Start Due to Deployment Framework Issues (Coherence)"

  • Section 14.14.3, "SOA, OSB, or WSM Servers Fail to Start Due to Maximum Number of Processes Available in Database"

  • Section 14.14.4, "Administration Server Fails to Start After a Manual Failover"

  • Section 14.14.5, "Error While Activating Changes in Administration Console"

  • Section 14.14.6, "SOA/OSB Server Not Failed Over After Server Migration"

  • Section 14.14.7, "SOA/OSB Server Not Reachable From Browser After Server Migration"

  • Section 14.14.8, "SOA Server Stops Responding after Being Active and Stressed for a Period of Time"

  • Section 14.14.9, "Configured JOC Port Already in Use"

  • Section 14.14.10, "SOA or OSB Server Fails to Start"

  • Section 14.14.11, "SOA Coherence Cluster Conflicts when Multiple Clusters Reside in the Same Node"

  • Section 14.14.12, "Sudo Error Occurs During Server Migration"

  • Section 14.14.13, "Transaction Timeout Error"

  • Section 14.14.14, "Exceeded Maximum Size Error Messages"

14.14.1 Page Not Found When Accessing soa-infra Application Through Load Balancer

Problem: You receive a 404 "page not found" message in the Web browser when you try to access the soa-infra application using the load balancer address. The error is intermittent and SOA Servers appear as Running in the WLS Administration Console.

Solution: Even when the SOA managed servers are up and running, some of the applications deployed to them may be in Admin, Prepared, or other states different from Active. The soa-infra application may be unavailable while the SOA server is running. Check the Deployments page in the Administration Console to verify the status of the soa-infra application; it should be in the Active state. Check the SOA server's output log for errors pertaining to the soa-infra application and try to start it from the Deployments page in the Administration Console.

14.14.2 Soa-infra Application Fails to Start Due to Deployment Framework Issues (Coherence)

Problem: The soa-infra application fails to start after changes to the Coherence configuration for deployment have been applied. The SOA server output log reports the following:

Cluster communication initialization failed. If you are using multicast, Please make sure multicast is enabled on your network and that there is no interference on the address in use. Please see the documentation for more details.

Solutions:

  1. When using multicast instead of unicast for cluster deployments of SOA composites, a message similar to the above may appear if a multicast conflict arises when starting the soa-infra application (that is, starting the managed server on which SOA runs). These messages, which occur when Oracle Coherence throws a runtime exception, also include the details of the exception itself. If such a message appears, check the multicast configuration in your network. Verify that you can ping multicast addresses. In addition, check for other clusters that may have the same multicast address but have a different cluster name in your network, as this may cause a conflict that prevents soa-infra from starting. If multicast is not enabled in your network, you can change the deployment framework to use unicast as described in Oracle Coherence Developer's Guide for Oracle Coherence.

  2. When entering the well-known address list for unicast (in the server start parameters), make sure that the addresses entered for the localhost and the clustered servers are correct. Error messages like:

    oracle.integration.platform.blocks.deploy.CompositeDeploymentCoordinatorMessages errorUnableToStartCoherence
    

    are reported in the server's output log if any of the addresses is not resolved correctly.

14.14.3 SOA, OSB, or WSM Servers Fail to Start Due to Maximum Number of Processes Available in Database

Problem: SOA, WSM or OSB Server fails to start. The domain has been extended for new types of managed server (for example, SOA extended for OSB) or the system has been scaled up (added new servers of the same type). The SOA/OSB or WSM Server output log reports the following:

<Warning> <JDBC> <BEA-001129> <Received exception while creating connection for pool "SOADataSource-rac0": Listener refused the connection with the following error:

ORA-12516, TNS:listener could not find available handler with matching protocol stack >

Solution: Verify the number of processes in the database and adjust accordingly. As the SYS user, issue the SHOW PARAMETER command:

SQL> SHOW PARAMETER processes

Set the initialization parameter using the following command:

SQL> ALTER SYSTEM SET processes=300 SCOPE=SPFILE;

Restart the database.

Note:

The method that you use to change a parameter's value depends on whether the parameter is static or dynamic, and on whether your database uses a parameter file or a server parameter file. See the Oracle Database Administrator's Guide for details on parameter files, server parameter files, and how to change parameter values.

14.14.4 Administration Server Fails to Start After a Manual Failover

Problem: The Administration Server fails to start after a manual failover to another node. The Administration Server output log reports the following:

<Feb 19, 2009 3:43:05 AM PST> <Warning> <EmbeddedLDAP> <BEA-171520> <Could not obtain an exclusive lock for directory: ASERVER_HOME/servers/AdminServer/data/ldap/ldapfiles. Waiting for 10 seconds and then retrying in case existing WebLogic Server is still shutting down.>

Solution: Remove the EmbeddedLDAP.lok file from the following directory:

ASERVER_HOME/servers/AdminServer/data/ldap/ldapfiles/ 


14.14.5 Error While Activating Changes in Administration Console

Problem: Activation of changes in Administration Console fails after you have made changes to a server's start configuration. The Administration Console reports the following when clicking Activate Changes:

An error occurred during activation of changes, please see the log for details.
 [Management:141190]The commit phase of the configuration update failed with an exception:
In production mode, it's not allowed to set a clear text value to the property: PasswordEncrypted of ServerStartMBean

Solution: Either provide username/password information in the server start configuration in the Administration Console for the specific server whose configuration was being changed, or remove the <password-encrypted></password-encrypted> entry in the config.xml file (this requires a restart of the Administration Server).

14.14.6 SOA/OSB Server Not Failed Over After Server Migration

Problem: After the local Node Manager reaches the maximum restart attempts, the Node Manager in the failover node tries to restart the server, but the server does not come up. Node Manager's output reports the server as failed over, but the virtual IP used by the SOA server is not enabled in the failover node after Node Manager tries to migrate it (ifconfig in the failover node does not report the virtual IP on any interface). Executing the command "sudo ifconfig $INTERFACE $ADDRESS $NETMASK" does not enable the IP in the failover node.

Solution: The rights and configuration for sudo execution should not prompt for a password. Verify the configuration of sudo with your system administrator so that sudo works without a password prompt.

14.14.7 SOA/OSB Server Not Reachable From Browser After Server Migration

Problem: Server migration is working (SOA/OSB Server is restarted in the failed over node) but the <Virtual Hostname>:8001/soa-infra URL is not reachable in the Web browser. The server has been "killed" in its original host and Node Manager in the failover node reports that the virtual IP has been migrated and the server started. The virtual IP used by the SOA Server cannot be pinged from the client's node (that is, the node where the browser is being used).

Solution: Update the nodemanager.properties file to include the MACBroadcast setting, or execute a manual arping:

/sbin/arping -b -q -c 3 -A -I $INTERFACE $ADDRESS > $NullDevice 2>&1

Where $INTERFACE is the network interface where the Virtual IP is enabled and $ADDRESS is the virtual IP address.

14.14.8 SOA Server Stops Responding after Being Active and Stressed for a Period of Time

Problem: WLS_SOA starts properly and functions for a period of time, but becomes unresponsive after running an application that uses the Oracle File Adapter or Oracle FTP Adapter. The log file for the server reports the following:

<Error> <Server> <BEA-002606> <Unable to create
a server socket for listening on channel "Default". The address
X.X.X.X might be incorrect or another process is using port 8001:
@ java.net.SocketException: Too many open files.>

Solution: For composites with Oracle File and FTP Adapters, which are designed to consume a very large number of concurrent messages, set the open-files limit for your operating system to a greater value. For example, to set the limit to 8192 on Linux, use the ulimit -n 8192 command. Adjust the value based on the system's expected load.
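
For example, on Linux (a sketch; run as the user that starts the managed servers, and note that raising the limit may also require a matching hard limit in /etc/security/limits.conf):

# Check the current limit, then raise it for this shell before starting the server
ulimit -n
ulimit -n 8192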

14.14.9 Configured JOC Port Already in Use

Problem: Attempts to start a Managed Server that uses the Java Object Cache, such as OWSM or WebCenter Spaces Managed Servers, fail. The following errors appear in the logs:

J2EE JOC-058 distributed cache initialization failure
J2EE JOC-043 base exception:
J2EE JOC-803 unexpected EOF during read.

Solution: Another process is using the same port that JOC is attempting to obtain. Either stop that process, or reconfigure JOC for this cluster to use another port in the recommended port range.

14.14.10 SOA or OSB Server Fails to Start

The SOA or OSB server fails to start the first time it is started, reporting a parsing failure in config.xml.

Problem: A server that is being started for the first time using Node Manager fails to start. A message such as the following appears in the server's output log:

<Critical> <WebLogicServer> <eicfdcn35> <wls_server1> <main> <<WLS Kernel>> <> <> <1263329692528> <BEA-000386> <Server subsystem failed. Reason: weblogic.security.SecurityInitializationException: Authentication denied: Boot identity not valid; The user name and/or password from the boot identity file (boot.properties) is not valid. The boot identity may have been changed since the boot identity file was created. Please edit and update the boot identity file with the proper values of username and password. The first time the updated boot identity file is used to start the server, these new values are encrypted.

The Managed Server is trying to start for the first time, in MSI (managed server independence) mode, and has not been able to retrieve the appropriate configuration for the first start. The Managed Server must be able to communicate with the Administration Server on its first startup.

Solution: Make sure communication between the Administration Server's listen address and the Managed Server's listen address is possible (ping the Administration Server's listen address from the Managed Server's node, and telnet to the Administration Server's listen address and port). Once communication is enabled, pack and unpack the domain again to the new node, or (if other servers are already running correctly in the same domain directory) delete the following directory and restart the server:

ORACLE_BASE/admin/domain_name/mserver/domain_name/servers/server_name/data/nodemanager

14.14.11 SOA Coherence Cluster Conflicts when Multiple Clusters Reside in the Same Node

Problem: The soa-infra application fails to come up when multiple SOA clusters reside on the same nodes. Messages such as the following appear in the server's .out file:

<Error> <Coherence> <BEA-000000> <Oracle Coherence GE 3.6.0.4 <Error> (thread=Cluster, member=1): This senior Member(…) appears to have been disconnected from another senior Member…stopping cluster service.>

Solution: When a Coherence member restarts, it attempts to bind to the port configured in its localport setting. If this port is not available, it increments the port number (by two) and attempts to connect to that port. If multiple SOA clusters use similar port ranges for Coherence, it is possible for a member to join a cluster with a different WKA, causing conflicts and preventing the soa-infra application from starting. There are several ways to resolve this issue:

  • Set up a separate port range for each cluster instead of relying on the increment-by-two behavior. For example, use 8000-8090 for cluster 1 and 8091-8180 for cluster 2. This is implicit in the model recommended in this guide (see Table 3-4), where a different range is used for each Coherence cluster.

  • Disable port auto-adjustment to force the members to use their configured localport. This can be done with the system property tangosol.coherence.localport.adjust, for example, -Dtangosol.coherence.localport.adjust=false.

  • Configure a unique cluster name for each cluster. This can be done using the system property tangosol.coherence.cluster. For example:

    -Dtangosol.coherence.cluster=SOA_Cluster1
    

For more information on these options, refer to the Coherence cluster configuration documentation at the following URL:

http://download.oracle.com/docs/cd/E24290_01/coh.371/e22837/cluster_setup.htm

14.14.12 Sudo Error Occurs During Server Migration

Problem: When running wlsifconfig for server migration, the following warning displays:

sudo: sorry, you must have a tty to run sudo

Solution: The WebLogic user ('oracle') is not allowed to run sudo in the background. To solve this, add the following line into /etc/sudoers:

Defaults:oracle !requiretty

14.14.13 Transaction Timeout Error

Problem: The following transaction timeout error appears in the log:

Internal Exception: java.sql.SQLException: Unexpected exception while enlisting
 XAConnection java.sql.SQLException: XA error: XAResource.XAER_NOTA start()
failed on resource 'SOADataSource_soaedg_domain': XAER_NOTA : The XID
is not valid

Solution: Check your transaction timeout settings and ensure that the JTA transaction timeout is less than the DataSource XA Transaction Timeout, which in turn is less than the distributed_lock_timeout (at the database).

With the out-of-the-box configuration, the SOA data sources do not set an XA timeout (the Set XA Transaction Timeout configuration parameter is unchecked in the WebLogic Server Administration Console). In this case, the data sources use the domain-level JTA timeout, which is set to 30 seconds. Also, the default distributed_lock_timeout value for the database is 60 seconds. As a result, the SOA configuration works correctly for any system where transactions are expected to have a shorter life expectancy than these values. Adjust these values according to the transaction times your specific operations are expected to take.
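
To check the database side, query the current value as the SYS user (distributed_lock_timeout is a standard Oracle initialization parameter):

SQL> show parameter distributed_lock_timeout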

14.14.14 Exceeded Maximum Size Error Messages

Problem: When complex rules are edited and saved in a SOA cluster, error messages reporting that the maximum message size was exceeded in replication messages may show up in the server's out file. For example:

<rws3211539-v2.company.com> <WLS_SOA1> <ExecuteThread: '2' for queue:
'weblogic.socket.Muxer'> <<WLS Kernel>> <> <> <1326464549135> <BEA-000403>
<IOException occurred on socket:
Socket[addr=/10.10.10.10,port=48290,localport=8001]
weblogic.socket.MaxMessageSizeExceededException: Incoming message of size:
'10000080' bytes exceeds the configured maximum of: '10000000' bytes for
protocol: 't3'.
weblogic.socket.MaxMessageSizeExceededException: Incoming message of size:
'10000080' bytes exceeds the configured maximum of: '10000000' bytes for
protocol: 't3'

Solution: This error is due to the large size of the serialized rules placed in HTTP sessions and replicated across the cluster. Increase the maximum message size according to the size of the rules being used.

To increase the maximum message size:

  1. Log in to the WebLogic Administration Console.

  2. Select Servers, Server_name, Protocols, and then General.

  3. Modify the Maximum Message Size field as needed.