24 Scaling Procedures for an Enterprise Deployment

The scaling procedures for an enterprise deployment include scale out, scale in, scale up, and scale down. During a scale-out operation, you add managed servers to new nodes. You can remove these managed servers by performing a scale in operation. During a scale-up operation, you add managed servers to existing hosts. You can remove these servers by performing a scale-down operation.

This chapter describes the procedures to scale out/in and scale up/down static and dynamic clusters.

Scaling Out the Topology

When you scale out the topology, you add new managed servers to new nodes.

This section describes the procedures to scale out the SOA topology with static and dynamic clusters.

Scaling Out the Topology for Static Clusters

This section lists the prerequisites, explains the procedure to scale out the topology with static clusters, describes the steps to verify the scale-out process, and finally the steps to scale down (shrink).

Prerequisites for Scaling Out

Before you perform a scale out of the topology, you must ensure that you meet the following requirements:

  • The starting point is a cluster with managed servers already running.

  • The new node can access the existing home directories for WebLogic Server and SOA. Use the existing installations in shared storage. You do not need to install WebLogic Server or SOA binaries in a new location. However, you do need to run pack and unpack commands to bootstrap the domain configuration in the new node.

  • It is assumed that the cluster syntax is used for all internal RMI invocations, JMS adapter, and so on.

Scaling Out a Static Cluster
The steps provided in this procedure use the SOA EDG topology as a reference. Initially there are two application tier hosts (SOAHOST1 and SOAHOST2), each running one managed server of each cluster. A new host SOAHOST3 is added to scale out the clusters with a third managed server. WLS_XYZn is the generic name given to the new managed server that you add to the cluster. Depending on the cluster that is being extended and the number of existing nodes, the actual names are WLS_SOA3, WLS_OSB3, WLS_ESS3, and so on.

To scale out the cluster, complete the following steps:

  1. On the new node, mount the existing FMW Home, which should include the SOA installation and the domain directory. Ensure that the new node has access to this directory, similar to the rest of the nodes in the domain.
  2. Per Oracle’s recommendation, the inventory is located in the shared directory (for example, /u01/oracle/products/oraInventory), so you do not need to attach any Oracle home. However, you may want to execute the script /u01/oracle/products/oraInventory/createCentralInventory.sh.

    This command creates and updates the local file /etc/oraInst.loc in the new node to point it to the oraInventory location.

    If there are other inventory locations on the new host, you can still use them, but the /etc/oraInst.loc file must be updated accordingly in each case.

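    For reference, a minimal /etc/oraInst.loc that points to this shared inventory looks similar to the following sketch (the installation group shown is only an example; use the group of your own environment):

    inventory_loc=/u01/oracle/products/oraInventory
    inst_group=oinstall
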
  3. Update the /etc/hosts files to add the alias SOAHOSTn for the new node, as described in Verifying IP Addresses and Host Names in DNS or Hosts File.

    For example:

    10.229.188.204 host1-vip.example.com host1-vip ADMINVHN
    10.229.188.205 host1.example.com host1 SOAHOST1
    10.229.188.206 host2.example.com host2 SOAHOST2
    10.229.188.207 host3.example.com host3 WEBHOST1
    10.229.188.208 host4.example.com host4 WEBHOST2
    10.229.188.209 host5.example.com host5 SOAHOST3 
  4. Configure a per host node manager in the new node, as described in Creating a Per Host Node Manager Configuration.
  5. Log in to the Oracle WebLogic Administration Console to create a new machine:
    1. Go to Environment > Machines.
    2. Click New to create a new machine for the new node.
    3. Set Name to SOAHOSTn (or MFTHOSTn or BAMHOSTn).
    4. Set Machine OS to Linux.
    5. Click Next.
    6. Set Type to Plain.
    7. Set Listen Address to SOAHOSTn.
    8. Click Finish, and then click Activate Changes.
  6. Use the Oracle WebLogic Server Administration Console to clone the first managed server in the cluster into a new managed server.
    1. In the Change Center section, click Lock & Edit.
    2. Go to Environment > Servers.
    3. Select the first managed server in the cluster to scale out and click Clone.
    4. Use Table 24-1 to set the corresponding name, listen address, and listen port, depending on the cluster that you want to scale out.
    5. Click the new managed server, and then Configuration > General.
    6. Update the Machine from SOAHOST1 to SOAHOSTn.
    7. Click Save, and then click Activate Changes.

    Table 24-1 Details of the Cluster to be Scaled Out

    Cluster to Scale Out | Server to Clone | New Server Name | Server Listen Address | Server Listen Port
    WSM-PM_Cluster | WLS_WSM1 | WLS_WSM3 | SOAHOST3 | 7010
    SOA_Cluster | WLS_SOA1 | WLS_SOA3 | SOAHOST3 | 8001
    ESS_Cluster | WLS_ESS1 | WLS_ESS3 | SOAHOST3 | 8021
    OSB_Cluster | WLS_OSB1 | WLS_OSB3 | SOAHOST3 | 8011
    BAM_Cluster | WLS_BAM1 | WLS_BAM3 | SOAHOST3 | 9001
    MFT_Cluster | WLS_MFT1 | WLS_MFT3 | MFTHOST3 | 7500

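    If you prefer to script this step, the following WLST sketch creates and targets an equivalent managed server with the SOA_Cluster values from Table 24-1 (the administrator credentials and URL are placeholders; unlike Clone, it copies no other settings from WLS_SOA1, so review the remaining configuration in the console):

    connect('weblogic_admin', 'admin_password', 't3://ADMINVHN:7001')
    edit()
    startEdit()
    # Create the new managed server and set the values from Table 24-1
    srv = cmo.createServer('WLS_SOA3')
    srv.setListenAddress('SOAHOST3')
    srv.setListenPort(8001)
    # Assign the new server to its cluster and to the new machine
    srv.setCluster(getMBean('/Clusters/SOA_Cluster'))
    srv.setMachine(getMBean('/Machines/SOAHOST3'))
    save()
    activate()
    disconnect()
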
  7. Update the deployment Staging Directory Name of the new server, as described in Modifying the Upload and Stage Directories to an Absolute Path.
  8. Create a new key certificate and update the private key alias of the server, as described in Enabling SSL Communication Between the Middle Tier and the Hardware Load Balancer.
  9. By default, the cloned server uses the default (file) store for TLOGs. If the rest of the servers in the cluster that you are scaling out use TLOGs in a JDBC persistent store, update the TLOG persistent store of the new managed server:
    1. Go to Environment > Servers > WLS_XYZn > Configuration > Services.
    2. Expand Advanced.
    3. Change Transaction Log Store to JDBC.
    4. Change Data Source to WLSSchemaDatasource.
    5. Click Save, and then click Activate Changes.

    Use the following table to identify the clusters that use JDBC TLOGs by default:

    Table 24-2 The Name of Clusters that Use JDBC TLOGs by Default

    Cluster to Scale Out | New Server Name | TLOG Persistent Store
    WSM-PM_Cluster | WLS_WSM3 | Default (file)
    SOA_Cluster | WLS_SOA3 | JDBC
    ESS_Cluster | WLS_ESS3 | Default (file)
    OSB_Cluster | WLS_OSB3 | JDBC
    BAM_Cluster | WLS_BAM3 | JDBC
    MFT_Cluster | WLS_MFT3 | JDBC

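    As an alternative to the console steps above, a WLST sketch that switches the new server to a JDBC transaction log store could look like the following (the server and data source names are the example values used in this chapter, and the MBean path assumes the default server layout; verify the resulting configuration in the console):

    connect('weblogic_admin', 'admin_password', 't3://ADMINVHN:7001')
    edit()
    startEdit()
    # Enable the JDBC transaction log store for the new managed server
    cd('/Servers/WLS_SOA3/TransactionLogJDBCStore/WLS_SOA3')
    cmo.setEnabled(true)
    cmo.setDataSource(getMBean('/JDBCSystemResources/WLSSchemaDataSource'))
    save()
    activate()
    disconnect()
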
  10. If the cluster that you are scaling out is configured for automatic service migration, update the JTA Migration Policy to the required value.
    1. Go to Environment > Servers > WLS_XYZn > Configuration > Migration.
    2. Use Table 24-3 to set the recommended JTA Migration Policy depending on the cluster that you want to scale out.

      Table 24-3 The Recommended JTA Migration Policy for the Cluster to be Scaled Out

      Cluster to Scale Out | New Server Name | JTA Migration Policy
      WSM-PM_Cluster | WLS_WSM3 | Manual
      SOA_Cluster | WLS_SOA3 | Failure Recovery
      ESS_Cluster | WLS_ESS3 | Manual
      OSB_Cluster | WLS_OSB3 | Failure Recovery
      BAM_Cluster | WLS_BAM3 | Failure Recovery
      MFT_Cluster | WLS_MFT3 | Failure Recovery

    3. Click Save, and then click Activate Changes.
    4. For the rest of the servers already existing in the cluster, update the list of JTA candidate servers for JTA migration to include the new server.
      • Go to Environment > Servers > server > Configuration > Migration.

      • Go to JTA Candidate Servers and leave the list empty (an empty list means that all servers in the cluster are JTA candidate servers).

      • Click Save, and then click Activate Changes. Although you need to restart the servers for this change to be effective, you can do a unique restart later, after you complete all the required configuration changes.

  11. If the cluster you are scaling out is configured for automatic service migration, use the Oracle WebLogic Server Administration Console to update the automatically created WLS_XYZn (migratable) with the recommended migration policy, because by default it is set to Manual Service Migration Only.

    Use the following table for the list of migratable targets to update:

    Table 24-4 The Recommended Migratable Targets to Update

    Cluster to Scale Out | Migratable Target to Update | Migration Policy
    WSM-PM_Cluster | NA | NA
    SOA_Cluster | WLS_SOA3 (migratable) | Auto-Migrate Failure Recovery Services
    ESS_Cluster | NA | NA
    OSB_Cluster | WLS_OSB3 (migratable) | Auto-Migrate Failure Recovery Services
    BAM_Cluster | WLS_BAM3 (migratable) | Auto-Migrate Exactly-Once Services
    MFT_Cluster | WLS_MFT3 (migratable) | Auto-Migrate Failure Recovery Services

    1. Go to Environment > Cluster > Migratable Servers.
    2. Click Lock & Edit.
    3. Click WLS_XYZ3 (migratable).
    4. Go to the tab Configuration > Migration.
    5. Change the Service Migration Policy to the value listed in the table.
    6. Leave the Constrained Candidate Server list blank in case there are chosen servers. If no servers are selected, you can migrate this migratable target to any server in the cluster.
    7. Click Save, and then click Activate Changes.
  12. For components that use multiple migratable targets, in addition to step 11, you must create another migratable target. BAM is used here as an example: use the Oracle WebLogic Server Administration Console to clone WLS_BAM3 (migratable) into a new migratable target.
    1. Go to Environment > Cluster > Migratable Servers.
    2. Click Lock & Edit.
    3. Click WLS_BAM3 (migratable) and click Clone.
    4. Name the new target WLS_BAM3_bam-exactly-once (migratable).
    5. Click the new migratable server.
    6. Go to the tab Configuration > Migration.
    7. If not set, change the Service Migration Policy to Auto-Migrate Exactly-Once Services.
    8. Leave the Constrained Candidate Server list blank. If no servers are selected, you can migrate this migratable target to any server in the cluster.
    9. Click Save, and then click Activate Changes.
  13. Update the Constrained Candidate Server list in the existing migratable servers in the cluster that you are scaling out, because by default they are pre-populated with only the WLS_XYZ1 and WLS_XYZ2 servers.
    1. Go to each migratable server.
    2. Go to the tab Configuration > Migration > Constrained Candidate Server.

      You can leave the server list blank to make these migratable targets migrate to any server in this cluster, including the newly created managed server.

      Use the following table to identify the migratable servers that have to be updated:

      Table 24-5 The Existing Migratable Targets to Update

      Cluster to Scale Out | Existing Migratable Target to Update | Constrained Candidate Server
      WSM-PM_Cluster | NA | Leave empty
      SOA_Cluster | WLS_SOA1 (migratable), WLS_SOA2 (migratable) | Leave empty
      ESS_Cluster | NA | Leave empty
      OSB_Cluster | WLS_OSB1 (migratable), WLS_OSB2 (migratable) | Leave empty
      BAM_Cluster | WLS_BAM1 (migratable), WLS_BAM2 (migratable), WLS_BAM1_bam-exactly-once (migratable), WLS_BAM2_bam-exactly-once (migratable) | Leave empty
      MFT_Cluster | WLS_MFT1 (migratable), WLS_MFT2 (migratable) | Leave empty

    3. Click Save, and then click Activate Changes. Although you need to restart the servers for this change to be effective, you can do a unique restart later, after you complete all the required configuration changes.
  14. Create the required persistent stores for the JMS servers.
    1. Log in to WebLogic Console and go to Services > Persistent Stores.
    2. Click New and select Create JDBCStore.

    Use the following table to create the required persistent stores:

    Note:

    The numbers in the names and prefixes of the existing resources were assigned automatically by the Configuration Wizard during domain creation. For example:

    • UMSJMSJDBCStore_auto_1 — soa_1

    • UMSJMSJDBCStore_auto_2 — soa_2

    • BPMJMSJDBCStore_auto_1 — soa_3

    • BPMJMSJDBCStore_auto_2 — soa_4

    • SOAJMSJDBCStore_auto_1 — soa_5

    • SOAJMSJDBCStore_auto_2 — soa_6

    Review the existing prefixes and select a new, unique prefix and name for each new persistent store.

    To avoid naming conflicts and simplify the configuration, new resources are qualified with the scaled tag and are shown here as an example.

    Table 24-6 The New Resources Qualified with the Scaled Tag

    Cluster to Scale Out | Persistent Store | Prefix Name | Data Source | Target
    WSM-PM_Cluster | NA | NA | NA | NA
    SOA_Cluster | UMSJMSJDBCStore_soa_scaled_3 | soaums_scaled_3 | WLSSchemaDataSource | WLS_SOA3 (migratable)
    SOA_Cluster | SOAJMSJDBCStore_soa_scaled_3 | soajms_scaled_3 | WLSSchemaDataSource | WLS_SOA3 (migratable)
    SOA_Cluster | BPMJMSJDBCStore_soa_scaled_3 | soabpm_scaled_3 | WLSSchemaDataSource | WLS_SOA3 (migratable)
    SOA_Cluster | (only when you use Insight) ProcMonJMSJDBCStore_soa_scaled_3 | soaprocmon_scaled_3 | WLSSchemaDataSource | WLS_SOA3 (migratable)
    ESS_Cluster | NA | NA | NA | NA
    OSB_Cluster | UMSJMSJDBCStore_osb_scaled_3 | osbums_scaled_3 | WLSSchemaDataSource | WLS_OSB3 (migratable)
    OSB_Cluster | OSBJMSJDBCStore_osb_scaled_3 | osbjms_scaled_3 | WLSSchemaDataSource | WLS_OSB3 (migratable)
    OSB_Cluster | (only when you use Insight) ProcMonJMSJDBCStore_osb_scaled_3 | osbprocmon_scaled_3 | WLSSchemaDataSource | WLS_OSB3 (migratable)
    BAM_Cluster | UMSJMSJDBCStore_bam_scaled_3 | bamums_scaled_3 | WLSSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamPersistenceJmsJDBCStore_bam_scaled_3 | bamP_scaled_3 | WLSSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamReportCacheJmsJDBCStore_bam_scaled_3 | bamR_scaled_3 | WLSSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamAlertEngineJmsJDBCStore_bam_scaled_3 | bamA_scaled_3 | WLSSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamJmsJDBCStore_bam_scaled_3 | bamjms_scaled_3 | WLSSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamCQServiceJmsJDBCStore_bam_scaled_3 | bamC_scaled_3 | WLSSchemaDataSource | WLS_BAM3*
    MFT_Cluster | MFTJMSJDBCStore_mft_scaled_3 | mftjms_scaled_3 | WLSSchemaDataSource | WLS_MFT3 (migratable)

    Note:

    (*) BamCQServiceJmsServers host local queues for the BAM CQService (Continuous Query Engine) and are meant to be local. They are intentionally targeted to the WebLogic servers directly and not to the migratable targets.
  15. Create the required JMS Servers for the new managed server.
    1. Go to WebLogic Console > Services > Messaging > JMS Servers.
    2. Click Lock & Edit.
    3. Click New.

    Use the following table to create the required JMS Servers. Assign to each JMS Server the previously created persistent stores:

    Note:

    The numbers in the names of the existing resources were assigned automatically by the Configuration Wizard during domain creation.

    Review the existing JMS server names and select a new and unique name for each new JMS server.

    To avoid naming conflicts and simplify the configuration, new resources are qualified with the product_scaled_N tag and are shown here as an example.

    Cluster to Scale Out | JMS Server Name | Persistent Store | Target
    WSM-PM_Cluster | NA | NA | NA
    SOA_Cluster | UMSJMSServer_soa_scaled_3 | UMSJMSJDBCStore_soa_scaled_3 | WLS_SOA3 (migratable)
    SOA_Cluster | SOAJMSServer_soa_scaled_3 | SOAJMSJDBCStore_soa_scaled_3 | WLS_SOA3 (migratable)
    SOA_Cluster | BPMJMSServer_soa_scaled_3 | BPMJMSJDBCStore_soa_scaled_3 | WLS_SOA3 (migratable)
    SOA_Cluster | (only when you use Insight) ProcMonJMSServer_soa_scaled_3 | ProcMonJMSJDBCStore_soa_scaled_3 | WLS_SOA3 (migratable)
    ESS_Cluster | NA | NA | NA
    OSB_Cluster | UMSJMSServer_osb_scaled_3 | UMSJMSJDBCStore_osb_scaled_3 | WLS_OSB3 (migratable)
    OSB_Cluster | wlsbJMSServer_osb_scaled_3 | OSBJMSJDBCStore_osb_scaled_3 | WLS_OSB3 (migratable)
    OSB_Cluster | (only when you use Insight) ProcMonJMSServer_osb_scaled_3 | ProcMonJMSJDBCStore_osb_scaled_3 | WLS_OSB3 (migratable)
    BAM_Cluster | UMSJMSServer_bam_scaled_3 | UMSJMSJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamPersistenceJmsServer_bam_scaled_3 | BamPersistenceJmsJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamReportCacheJmsServer_bam_scaled_3 | BamReportCacheJmsJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamAlertEngineJmsServer_bam_scaled_3 | BamAlertEngineJmsJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BAMJMSServer_bam_scaled_3 | BamJmsJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamCQServiceJmsServer_bam_scaled_3 | BamCQServiceJmsJDBCStore_bam_scaled_3 | WLS_BAM3*
    MFT_Cluster | MFTJMSServer_mft_scaled_3 | MFTJMSJDBCStore_mft_scaled_3 | WLS_MFT3 (migratable)

    Note:

    (*) BamCQServiceJmsServers host local queues for the BAM CQService (Continuous Query Engine) and are meant to be local. They are intentionally targeted to the WebLogic servers directly and not to the migratable targets.

  16. Update the SubDeployment Targets for JMS Modules (if applicable) to include the recently created JMS servers.
    1. Expand the Services > Messaging > JMS Modules.
    2. Click the JMS module. For example: BPMJMSModule.

      Use the following table to identify the JMS modules to update, depending on the cluster that you are scaling out:

      Table 24-7 The JMS Modules to Update

      Cluster to Scale Out | JMS Module to Update | JMS Server to Add to the Subdeployment
      WSM-PM_Cluster | NA | NA
      SOA_Cluster | UMSJMSSystemResource * | UMSJMSServer_soa_scaled_3
      SOA_Cluster | SOAJMSModule | SOAJMSServer_soa_scaled_3
      SOA_Cluster | BPMJMSModule | BPMJMSServer_soa_scaled_3
      SOA_Cluster | (Only if you have configured Insight) ProcMonJMSModule * | ProcMonJMSServer_soa_scaled_3
      ESS_Cluster | NA | NA
      OSB_Cluster | UMSJMSSystemResource * | UMSJMSServer_osb_scaled_3
      OSB_Cluster | jmsResources (scope Global) | wlsbJMSServer_osb_scaled_3
      OSB_Cluster | (Only if you have configured Insight) ProcMonJMSModule * | ProcMonJMSServer_osb_scaled_3
      BAM_Cluster | BamPersistenceJmsSystemModule | BamPersistenceJmsServer_bam_scaled_3
      BAM_Cluster | BamReportCacheJmsSystemModule | BamReportCacheJmsServer_bam_scaled_3
      BAM_Cluster | BamAlertEngineJmsSystemModule | BamAlertEngineJmsServer_bam_scaled_3
      BAM_Cluster | BAMJMSSystemResource | BAMJMSServer_bam_scaled_3
      BAM_Cluster | BamCQServiceJmsSystemModule | N/A (no subdeployment)
      BAM_Cluster | UMSJMSSystemResource * | UMSJMSServer_bam_scaled_3
      MFT_Cluster | MFTJMSModule | MFTJMSServer_mft_scaled_3

      (*) Some modules (UMSJMSSystemResource, ProcMonJMSModule) may be targeted to more than one cluster. Ensure that you update the appropriate subdeployment in each case.
    3. Go to Configuration > Subdeployment.
    4. Add the corresponding JMS Server to the existing subdeployment.

      Note:

      The subdeployment module name is a random name in the form of SOAJMSServerXXXXXX, UMSJMSServerXXXXXX, or BPMJMSServerXXXXXX, resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).

    5. Click Save, and then click Activate Changes.
  17. If you are scaling out a BAM cluster, you must create local queues for the new server in the BamCQServiceJmsSystemModule module. Follow these steps to create them:
    1. Go to WebLogic Console > Services > Messaging > JMS Modules.
    2. Click Lock & Edit.
    3. Click BamCQServiceJmsSystemModule.
    4. Click Targets.
    5. Add WLS_BAM3 to the targets and click Save.
    6. Click New.
    7. Select Queue and click Next.
    8. Name it BamCQServiceAlertEngineQueue_auto_3 and click Next.
    9. Create a new Subdeployment with the target BamCQServiceJmsServer_bam_scaled_3 and select it for the queue.
    10. Click Finish.
    11. Click the newly created queue BamCQServiceAlertEngineQueue_auto_3.
    12. Go to Configuration > General > Advanced.
    13. Set Local JNDI Name to queue/oracle.beam.cqservice.mdbs.alertengine.
    14. Click Save.
    15. Repeat these steps to create the other queue BamCQServiceReportCacheQueue_auto_3 with the information in Table 24-8.
    16. After you finish, you have these new local queues.

      Table 24-8 Information to Create the Local Queues

      Name | Type | Local JNDI Name | Subdeployment
      BamCQServiceAlertEngineQueue_auto_3 | Queue | queue/oracle.beam.cqservice.mdbs.alertengine | BamCQServiceJmsServer_auto_3
      BamCQServiceReportCacheQueue_auto_3 | Queue | queue/oracle.beam.cqservice.mdbs.reportcache | BamCQServiceJmsServer_auto_3

    17. Click Activate Changes.
  18. Restart all servers (except the newly created server) for the previous changes to be effective. You can restart in a rolling manner to eliminate downtime.
  19. The configuration is finished. Now sign in to SOAHOST1 and run the pack command to create a template pack, as follows:
    cd ORACLE_COMMON_HOME/common/bin
    ./pack.sh -managed=true 
              -domain=ASERVER_HOME
              -template=/full_path/scaleout_domain.jar
              -template_name=scaleout_domain_template
              -log_priority=DEBUG -log=/tmp/pack.log  

    In this example:

    • Replace ASERVER_HOME with the actual path to the domain directory that you created on the shared storage device.

    • Replace full_path with the complete path to the location where you want to create the domain template jar file. You need to reference this location when you copy or unpack the domain template jar file. Oracle recommends that you choose a shared volume other than ORACLE_HOME, or write to /tmp/ and copy the files manually between servers.

      You must specify a full path for the template jar file as part of the -template argument to the pack command:
      SHARED_CONFIG_DIR/domains/template_filename.jar
    • scaleout_domain.jar is a sample name for the jar file that you are creating, which contains the domain configuration files.

    • scaleout_domain_template is the label that is assigned to the template data stored in the template file.

  20. Run the unpack command on SOAHOSTn to unpack the template in the managed server domain directory, as follows:
    cd ORACLE_COMMON_HOME/common/bin
    ./unpack.sh -domain=MSERVER_HOME
                -overwrite_domain=true
                -template=/full_path/scaleout_domain.jar
                -log_priority=DEBUG
                -log=/tmp/unpack.log
                -app_dir=APPLICATION_HOME

    In this example:

    • Replace MSERVER_HOME with the complete path to the domain home to be created on the local storage disk. This is the location where the copy of the domain is unpacked.

    • Replace /full_path/scaleout_domain.jar with the complete path and file name of the domain template jar file that you created when you ran the pack command to pack up the domain on the shared storage device

    • Replace APPLICATION_HOME with the complete path to the Application directory for the domain on shared storage. See File System and Directory Variables Used in This Guide.

  21. When scaling out the SOA_Cluster:
    1. If BPM Web Forms are used, update the startWebLogic.sh customizations for BPM to include the new node, as explained in Updating SOA BPM Servers for Web Forms.
    2. Update the setDomain.sh to include appTrustKeyStore.jks, as explained in Adding the Updated Trust Store to the Oracle WebLogic Server Start Scripts.
  22. When scaling out OSB_Cluster:
    1. Restart the Admin Server to see the new server in the Service Bus Dashboard.
  23. When scaling out MFT_Cluster:
    1. Default SFTP/FTP ports are used in the new server. If you are not using the defaults, follow the steps described in Configuring the SFTP Ports to configure the ports in the SFTP server.
  24. Start Node Manager on the new host.
    cd $NM_HOME
    nohup ./startNodeManager.sh > ./nodemanager.out 2>&1 &
  25. Start the new managed server.
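    For example, with the Node Manager for the new machine running, you can start the server from WLST (the credentials and administration URL are placeholders):

    connect('weblogic_admin', 'admin_password', 't3://ADMINVHN:7001')
    start('WLS_SOA3', 'Server')
    disconnect()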
  26. Update the web tier configuration to include the new server:
    1. If you are using OTD, log in to Enterprise Manager and update the corresponding origin pool, as explained in Creating the Required Origin Server Pools to add the new server to the pool.
    2. If you are using OHS, there is no need to add the new server to OHS. By default, the Dynamic Server List is used, which means that the list of servers in the cluster is automatically updated when a new node becomes part of the cluster. So, adding it to the list is not mandatory. The WebLogicCluster directive needs only a sufficient number of redundant server:port combinations to guarantee the initial contact in case of a partial outage.

      If there are expected scenarios where the Oracle HTTP Server is restarted and only the new server is up, update the WebLogicCluster directive to include the new server.

      For example:

      <Location /osb>
       WLSRequest ON
       WebLogicCluster SOAHOST1:8011,SOAHOST2:8011,SOAHOST3:8011
       WLProxySSL ON
       WLProxySSLPassThrough ON
      </Location>
Verifying the Scale Out of Static Clusters
After scaling out and starting the server, proceed with the following verifications:
  1. Verify the correct routing to web applications.

    For example:

    1. Access the application on the load balancer:
      soa.example.com/soa-infra
    2. Check that there is activity in the new server also:
      Go to Cluster > Deployments > soa-infra > Monitoring > Workload.
    3. You can also verify that the web sessions are created in the new server:
      • Go to Cluster > Deployments.

      • Expand soa-infra, click soa-infra Web application.

      • Go to Monitoring to check the web sessions in each server.

      You can use the sample URLs and the corresponding web applications that are identified in the following table, to check if the sessions are created in the new server for the cluster that you are scaling out:

      Cluster to Verify | Sample URL to Test | Web Application Module
      WSM-PM_Cluster | http://soainternal.example.com/wsm-pm | wsm-pm > wsm-pm
      SOA_Cluster | https://soa.example.com/soa-infra | soa-infra > soa-infra
      ESS_Cluster | https://soa.example.com/ESSHealthCheck | ESSHealthCheck
      OSB_Cluster | https://osb.example.com/sbinspection.wsil | Service Bus WSIL
      MFT_Cluster | https://mft.example.com/mftconsole | mftconsole
      BAM_Cluster | https://soa.example.com/bam/composer | BamComposer > /bam/composer
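
      For a quick scripted check of these URLs, a curl loop similar to the following can be used (the host names are the sample virtual server names from the table; the -k option skips certificate validation and is only intended for this test):

      for url in \
        "https://soa.example.com/soa-infra" \
        "https://soa.example.com/ESSHealthCheck" \
        "https://osb.example.com/sbinspection.wsil" \
        "https://mft.example.com/mftconsole" ; do
        curl -k -s -o /dev/null -w "%{http_code} ${url}\n" "${url}"
      done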

  2. Verify that JMS messages are being produced to and consumed from the destinations in the three servers.
    1. Go to JMS Servers.
    2. Click JMS Server > Monitoring.
  3. Verify the service migration, as described in Validating Automatic Service Migration in Static Clusters.

Scaling Out the Topology for Dynamic Clusters

This section lists the prerequisites, explains the procedure to scale out the topology with dynamic clusters, describes the steps to verify the scale-out process, and finally the steps to scale down (shrink).

Prerequisites for Scaling Out

Before you perform a scale out of the topology, you must ensure that you meet the following requirements:

  • The starting point is a cluster with managed servers already running.

  • The new node can access the existing home directories for WebLogic Server and SOA. Use the existing installations in shared storage. You do not need to install WebLogic Server or SOA binaries in a new location. However, you do need to run pack and unpack commands to bootstrap the domain configuration in the new node.

  • It is assumed that the cluster syntax is used for all internal RMI invocations, JMS adapter, and so on.

Scaling Out a Dynamic Cluster
The steps provided in this procedure use the SOA EDG topology as a reference. Initially there are two application tier hosts (SOAHOST1 and SOAHOST2), each running one managed server of each cluster. A new host SOAHOST3 is added to scale out the clusters with a third managed server. WLS_XYZn is the generic name given to the new managed server that you add to the cluster. Depending on the cluster that is being extended and the number of existing nodes, the actual names are WLS_SOA3, WLS_OSB3, WLS_ESS3, and so on.

To scale out the topology in a dynamic cluster, complete the following steps:

  1. On the new node, mount the existing shared volumes for FMW Home (NFS Volume1), shared config (NFS Volume 3), and runtime (NFS Volume 4), as described in Table 7-4.
  2. Per Oracle’s recommendation, the inventory is located in the shared directory (for example, /u01/oracle/products/oraInventory), so you do not need to attach any Oracle home. However, you may want to execute the script /u01/oracle/products/oraInventory/createCentralInventory.sh.

    This command creates and updates the local file /etc/oraInst.loc in the new node to point it to the oraInventory location.

    If there are other inventory locations on the new host, you can still use them, but the /etc/oraInst.loc file must be updated accordingly in each case.

  3. Update the /etc/hosts files to add the alias SOAHOSTn for the new node, as described in Verifying IP Addresses and Host Names in DNS or Hosts File.

    For example:

    10.229.188.204 host1-vip.example.com host1-vip ADMINVHN
    10.229.188.205 host1.example.com host1 SOAHOST1
    10.229.188.206 host2.example.com host2 SOAHOST2
    10.229.188.207 host3.example.com host3 WEBHOST1
    10.229.188.208 host4.example.com host4 WEBHOST2
    10.229.188.209 host5.example.com host5 SOAHOST3 
  4. Configure a per host Node Manager in the new node, as described in Creating a Per Host Node Manager Configuration.
  5. Log in to the Oracle WebLogic Administration Console to create a new machine for the new node.
  6. Update the machine's Node Manager address to map the IP of the node that is being used for scale out.
  7. Use the Oracle WebLogic Server Administration Console to increase the dynamic cluster to include a new managed server:
    1. Click Lock & Edit.
    2. Go to Domain > Environment > Clusters.
    3. Select the cluster that you want to scale out.
    4. Go to Configuration > Servers.
    5. Set Dynamic Cluster Size to 3. By default, the cluster size is 2.

      Note:

      If you scale out to more than three servers, you must also update the Number of servers in cluster Address value, which is 3 by default. Although Oracle recommends that you use the cluster syntax for t3 calls, the cluster address is used when calls are made from external elements through t3, for EJB stubs, and so on.

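    If you prefer to script this change, a minimal WLST sketch could look like the following (the cluster name, credentials, and administration URL are placeholders for this example):

    connect('weblogic_admin', 'admin_password', 't3://ADMINVHN:7001')
    edit()
    startEdit()
    # Grow the dynamic cluster from two to three servers
    cd('/Clusters/SOA_Cluster/DynamicServers/SOA_Cluster')
    cmo.setDynamicClusterSize(3)
    save()
    activate()
    disconnect()
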
  8. Sign in to SOAHOST1 and run the pack command to create a template pack as follows:
    cd ORACLE_COMMON_HOME/common/bin
    ./pack.sh -managed=true
              -domain=ASERVER_HOME
              -template=/full_path/scaleout_domain.jar
              -template_name=scaleout_domain_template
              -log_priority=DEBUG -log=/tmp/pack.log  

    In this example:

    • Replace ASERVER_HOME with the actual path to the domain directory that you created on the shared storage device.

    • Replace full_path with the complete path to the location where you want to create the domain template jar file. You need to reference this location when you copy or unpack the domain template jar file. Oracle recommends that you choose a shared volume other than ORACLE_HOME, or write to /tmp/ and copy the files manually between servers.

      You must specify a full path for the template jar file as part of the -template argument to the pack command:
      SHARED_CONFIG_DIR/domains/template_filename.jar
    • scaleout_domain.jar is a sample name for the jar file that you are creating, which contains the domain configuration files.

    • scaleout_domain_template is the label that is assigned to the template data stored in the template file.

  9. Run the unpack command on SOAHOSTn to unpack the template in the managed server domain directory, as follows:
    cd ORACLE_COMMON_HOME/common/bin
    ./unpack.sh -domain=MSERVER_HOME
                -overwrite_domain=true
                -template=/full_path/scaleout_domain.jar
                -log_priority=DEBUG
                -log=/tmp/unpack.log
                -app_dir=APPLICATION_HOME

    In this example:

    • Replace MSERVER_HOME with the complete path to the domain home to be created on the local storage disk. This is the location where the copy of the domain is unpacked.

    • Replace /full_path/scaleout_domain.jar with the complete path and file name of the domain template jar file that you created when you ran the pack command to pack up the domain on the shared storage device

    • Replace APPLICATION_HOME with the complete path to the Application directory for the domain on shared storage. See File System and Directory Variables Used in This Guide.

  10. When scaling out the SOA_Cluster:
    1. If BPM Web Forms are used, update the startWebLogic.sh customizations for BPM to include the new node, as explained in Updating SOA BPM Servers for Web Forms.
    2. Update the setDomain.sh to include appTrustKeyStore.jks, as explained in Adding the Updated Trust Store to the Oracle WebLogic Server Start Scripts.
  11. When scaling out OSB_Cluster:
    1. Restart the Admin Server to see the new server in the Service Bus Dashboard.
  12. When scaling out MFT_Cluster:
    1. Default SFTP/FTP ports will be used in the new server. If you are not using the defaults, follow the steps described in Configuring the SFTP Ports to configure the ports in the SFTP server.
  13. Start Node Manager on the new host.
    cd $NM_HOME
    nohup ./startNodeManager.sh > ./nodemanager.out 2>&1 &
  14. Start the new managed server.
  15. Update the web tier configuration to include this new server:
    1. If using OTD, log in to Enterprise Manager and update the corresponding origin pool, as explained in Creating the Required Origin Server Pools to add the new server to the pool.
    2. If using OHS, there is no need to add the new server to OHS.

      By default, the Dynamic Server list is used, which means that the list of servers in the cluster is automatically updated when a new node becomes part of the cluster. So adding the new node to the list is not mandatory. The WebLogicCluster directive needs only a sufficient number of redundant server:port combinations to guarantee the initial contact in the case of a partial outage.

      If there are expected scenarios where the Oracle HTTP Server is restarted and only the new server is up, update the WebLogicCluster directive to include the new server.

      For example:

      <Location /osb>
       WLSRequest ON
       WebLogicCluster SOAHOST1:8011,SOAHOST2:8012,SOAHOST3:8013
       WLProxySSL ON
       WLProxySSLPassThrough ON
      </Location>
Verifying the Scale Out of Dynamic Clusters
After scaling out and starting the server, proceed with the following verifications:
  1. Verify the correct routing to web applications.

    For example:

    1. Access the application on the load balancer:
      soa.example.com/soa-infra
    2. Check that there is activity in the new server also:
      Go to Cluster > Deployments > soa-infra > Monitoring > Workload.
    3. You can also verify that the web sessions are created in the new server:
      • Go to Cluster > Deployments.

      • Expand soa-infra, click soa-infra Web application.

      • Go to Monitoring to check the web sessions in each server.

      You can use the sample URLs and the corresponding web applications that are identified in the following table, to check if the sessions are created in the new server for the cluster that you are scaling out:

      Cluster to Verify | Sample URL to Test | Web Application Module
      WSM-PM_Cluster | http://soainternal.example.com/wsm-pm | wsm-pm > wsm-pm
      SOA_Cluster | https://soa.example.com/soa-infra | soa-infra > soa-infra
      ESS_Cluster | https://soa.example.com/ESSHealthCheck | ESSHealthCheck
      OSB_Cluster | https://osb.example.com/sbinspection.wsil | Service Bus WSIL
      MFT_Cluster | https://mft.example.com/mftconsole | mftconsole
      BAM_Cluster | https://soa.example.com/bam/composer | BamComposer > /bam/composer

  2. Verify that JMS messages are being produced to and consumed from the destinations in the three servers.
    1. Go to JMS Servers.
    2. Click JMS Server > Monitoring.
  3. Verify the service migration, as described in Validating Automatic Service Migration in Dynamic Clusters.

Scaling in the Topology

When you scale in the topology, you remove managed servers that were added to new hosts.

Scaling in the Topology for Static Clusters

To scale in the topology for a static cluster:
  1. To scale in the cluster without any JMS data loss, perform the steps described in Managing the JMS Messages in a SOA Server:

    After you complete the steps, continue with the scale-in procedure.

  2. Check for pending JTA transactions. Before you shut down the server, review whether there are any active JTA transactions in the server that you want to delete. Navigate to the WebLogic Console and click Environment > Servers > <server name> > Monitoring > JTA > Transactions.

    Note:

    If you have used the Shutdown Recovery policy for JTA, the transactions are recovered in another server after you shut down the server.

  3. Shut down the server by using the When work completes option.

    Note:

    This operation can take a long time if there are active HTTP sessions or long transactions in the server. For more information about graceful shutdown, see Using Server Life Cycle Commands in Administering Server Startup and Shutdown for Oracle WebLogic Server.

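    A graceful shutdown can also be issued from WLST; by default it waits for work to complete (the server name, credentials, and administration URL are placeholders):

    connect('weblogic_admin', 'admin_password', 't3://ADMINVHN:7001')
    shutdown('WLS_SOA3', 'Server', ignoreSessions='false', block='true')
    disconnect()
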
  4. Use the Oracle WebLogic Server Administration Console to delete the migratable target that is used by the server that you want to delete.
    1. Click Lock & Edit.
    2. Go to Domain > Environment > Cluster > Migratable Target.
    3. Select the migratable target that you want to delete.
    4. Click Delete.
    5. Click Yes.
    6. Click Activate Changes.
  5. Use the Oracle WebLogic Server Administration Console to delete the new server:
    1. Click Lock & Edit.
    2. Go to Domain > Environment > Servers.
    3. Select the server that you want to delete.
    4. Click Delete.
    5. Click Yes.
    6. Click Activate Changes.

    Note:

    If the migratable target was not deleted in the previous step, you get the following error message:

    The following failures occurred: --MigratableTargetMBean WLS_SOA3_soa-failure-recovery (migratable) does not have a preferred server set.
    Errors must be corrected before proceeding.
  6. Use the Oracle WebLogic Server Administration Console to update the subdeployment of each JMS Module that is used by the cluster that you are shrinking.

    Use the following table to identify the module for each cluster and perform this action for each module:

    Cluster to Scale In | JMS Module to Update | JMS Server to Delete from the Subdeployment
    WSM-PM_Cluster | Not applicable | Not applicable
    SOA_Cluster | UMSJMSSystemResource | UMSJMSServer_soa_scaled_3
    SOA_Cluster | SOAJMSModule | SOAJMSServer_soa_scaled_3
    SOA_Cluster | BPMJMSModule | BPMJMSServer_soa_scaled_3
    ESS_Cluster | Not applicable | Not applicable
    OSB_Cluster | UMSJMSSystemResource | UMSJMSServer_osb_scaled_3
    OSB_Cluster | jmsResources (scope Global) | wlsbJMSServer_osb_scaled_3
    BAM_Cluster | BamPersistenceJmsSystemModule | BamPersistenceJmsServer_bam_scaled_3
    BAM_Cluster | BamReportCacheJmsSystemModule | BamReportCacheJmsServer_bam_scaled_3
    BAM_Cluster | BamAlertEngineJmsSystemModule | BamAlertEngineJmsServer_bam_scaled_3
    BAM_Cluster | BAMJMSSystemResource | BAMJMSServer_bam_scaled_3
    BAM_Cluster | BamCQServiceJmsSystemModule | Not applicable (no subdeployment)
    MFT_Cluster | MFTJMSModule | MFTJMSServer_mft_scaled_3

    1. Click Lock & Edit.
    2. Go to Domain > Services > Messaging > JMS Modules.
    3. Click the JMS module.
    4. Click subdeployment.
    5. Unselect the JMS server that was created for the deleted server.
    6. Click Save.
    7. Click Activate Changes.
  7. If you are scaling in a BAM cluster, use the Oracle WebLogic Server Administration Console to delete the local queues that were created for the new server:
    1. Click Lock & Edit.
    2. Go to WebLogic Console > Services > Messaging > JMS Modules.
    3. Click BamCQServiceJmsSystemModule.
    4. Delete the local queues that were created for the new server:
      • BamCQServiceAlertEngineQueue_auto_3

      • BamCQServiceReportCacheQueue_auto_3

    5. Click Activate Changes.
  8. Use the Oracle WebLogic Server Administration Console to delete the JMS servers:
    1. Click Lock & Edit.
    2. Go to Domain > Services > Messaging > JMS Servers.
    3. Select the JMS Servers that you created for the new server.
    4. Click Delete.
    5. Click Yes.
    6. Click Activate Changes.
  9. Use the Oracle WebLogic Server Administration Console to delete the JMS persistent stores:
    1. Click Lock & Edit.
    2. Go to Domain > Services > Persistent Stores.
    3. Select the Persistent Stores that you created for the new server.
    4. Click Delete.
    5. Click Yes.
    6. Click Activate Changes.
  10. Update the web tier configuration to remove references to the new server.

Scaling in the Topology for Dynamic Clusters

To scale in the topology for a dynamic cluster:
  1. To scale in the cluster without any JMS data loss, perform the steps described in Managing the JMS Messages in a SOA Server:

    After you complete the steps, continue with the scale-in procedure.

  2. Check for pending JTA transactions. Before you shut down the server, review whether there are any active JTA transactions in the server that you want to delete. Navigate to the WebLogic Console and click Environment > Servers > <server name> > Monitoring > JTA > Transactions.

    Note:

    If you have used the Shutdown Recovery policy for JTA, the transactions are recovered in another server after you shut down the server.

  3. Shut down the server by using the When work completes option.

    Note:

    • This operation can take a long time if there are active HTTP sessions or long transactions in the server. For more information about graceful shutdown, see Using Server Life Cycle Commands in Administering Server Startup and Shutdown for Oracle WebLogic Server.

    • In dynamic clusters, the JMS servers that run in the server that you want to delete, and that use Always as the migration policy, are migrated to another member of the cluster at this point (because their server was just shut down). The next time you restart the member that hosts them, these JMS servers do not start because their preferred server is no longer present in the cluster. However, you must check whether they receive any new messages during this interim period, because those messages could be lost. To preserve the messages, pause production and export the messages from these JMS servers before you restart any server in the cluster.

  4. Use the Oracle WebLogic Server Administration Console to reduce the dynamic cluster:
    1. Click Lock & Edit.
    2. Go to Domain > Environment > Clusters.
    3. Select the cluster that you want to scale in.
    4. Go to Configuration > Servers.
    5. Set the Dynamic Cluster size to 2.
  5. If you are using OSB, restart the Admin Server.

Scaling Up the Topology

When you scale up the topology, you add new managed servers to the existing hosts.

This section describes the procedures to scale up the topology with static and dynamic clusters.

Scaling Up the Topology for Static Clusters

This section lists the prerequisites, explains the procedure to scale up the topology with static clusters, describes the steps to verify the scale-up process, and finally the steps to scale down (shrink).

You already have a node that runs a managed server that is configured with Fusion Middleware components. The node contains a WebLogic Server home and an Oracle Fusion Middleware SOA home in shared storage. Use these existing installations and domain directories to create the new managed servers. You do not need to install the WebLogic Server or SOA binaries, or to run pack and unpack, because the new server runs in an existing node.

Prerequisites for Scaling Up

Before you perform a scale up of the topology, you must ensure that you meet the following requirements:

  • The starting point is a cluster with managed servers already running.

  • It is assumed that the cluster syntax is used for all internal RMI invocations, JMS adapter, and so on.

Scaling Up a Static Cluster

Use the SOA EDG topology as a reference, with two application tier hosts (SOAHOST1 and SOAHOST2), each running one managed server of each cluster. The example explains how to add a third managed server to the cluster that runs in SOAHOST1. WLS_XYZn is the generic name given to the new managed server that you add to the cluster. Depending on the cluster that is being extended and the number of existing nodes, the actual names are WLS_SOA3, WLS_OSB3, WLS_ESS3, and so on.

To scale up the cluster, complete the following steps:

  1. Use the Oracle WebLogic Server Administration Console to clone the first managed server in the cluster into a new managed server.
    1. In the Change Center section, click Lock & Edit.
    2. Go to Environment > Servers.
    3. Select the first managed server in the cluster to scale up and click Clone.
    4. Use Table 24-9 to set the corresponding name, listen address, and listen port, depending on the cluster that you want to scale up. Note that the default listen port is incremented by 1 to avoid binding conflicts with the managed server that is already created and running on the same host.
    5. Click the new managed server, and then select Configuration > General.
    6. Click Save, and then click Activate Changes.

    Table 24-9 List of Clusters that You Want to Scale Up

    Cluster to Scale Up | Server to Clone | New Server Name | Server Listen Address | Server Listen Port
    WSM-PM_Cluster | WLS_WSM1 | WLS_WSM3 | SOAHOST1 | 7011
    SOA_Cluster | WLS_SOA1 | WLS_SOA3 | SOAHOST1 | 8002
    ESS_Cluster | WLS_ESS1 | WLS_ESS3 | SOAHOST1 | 8022
    OSB_Cluster | WLS_OSB1 | WLS_OSB3 | SOAHOST1 | 8012
    BAM_Cluster | WLS_BAM1 | WLS_BAM3 | SOAHOST1 | 9002
    MFT_Cluster | WLS_MFT1 | WLS_MFT3 | MFTHOST1 | 7501

  2. Update the deployment Staging Directory Name of the new server, as described in Modifying the Upload and Stage Directories to an Absolute Path.
  3. Create a new key certificate and update the private key alias of the server, as described in Enabling SSL Communication Between the Middle Tier and the Hardware Load Balancer.
  4. By default, the cloned server uses the default (file) store for TLOGs. If the rest of the servers in the cluster that you are scaling up use TLOGs in a JDBC persistent store, update the TLOG persistent store of the new managed server:

    Use the following table to identify the clusters that use JDBC TLOGs by default:

    Table 24-10 The Name of Clusters that Use JDBC TLOGs by Default

    Cluster to Scale Up | New Server Name | TLOG Persistent Store
    WSM-PM_Cluster | WLS_WSM3 | Default (file)
    SOA_Cluster | WLS_SOA3 | JDBC
    ESS_Cluster | WLS_ESS3 | Default (file)
    OSB_Cluster | WLS_OSB3 | JDBC
    BAM_Cluster | WLS_BAM3 | JDBC
    MFT_Cluster | WLS_MFT3 | JDBC

    Complete the following steps:
    1. Go to Environment > Servers > WLS_XYZn > Configuration > Services.
    2. Expand Advanced.
    3. Change Transaction Log Store to JDBC.
    4. Change Data Source to WLSSchemaDatasource.
    5. Click Save, and then click Activate Changes.
  5. If the cluster you are scaling up is configured for automatic service migration, update the JTA Migration Policy to the required value.

    Use the following table to identify the clusters for which you have to update the JTA Migration Policy:

    Table 24-11 The Recommended JTA Migration Policy for the Cluster to be Scaled Up

    Cluster to Scale Up | New Server Name | JTA Migration Policy
    WSM-PM_Cluster | WLS_WSM3 | Manual
    SOA_Cluster | WLS_SOA3 | Failure Recovery
    ESS_Cluster | WLS_ESS3 | Manual
    OSB_Cluster | WLS_OSB3 | Failure Recovery
    BAM_Cluster | WLS_BAM3 | Failure Recovery
    MFT_Cluster | WLS_MFT3 | Failure Recovery

    Complete the following steps:

    1. Go to Environment > Servers > WLS_XYZn > Configuration > Migration.
    2. Use Table 24-11 to set the recommended JTA Migration Policy depending on the cluster that you want to scale up.
    3. Click Save, and then click Activate Changes.
    4. For the rest of the servers already existing in the cluster, update the list of JTA candidate servers for JTA migration to include the new server.
      • Go to Environment > Servers > server > Configuration > Migration.

      • Go to JTA Candidate Servers and leave the list empty (an empty list means that all servers in the cluster are JTA candidate servers).

      • Click Save, and then click Activate Changes. Although you need to restart the servers for this change to be effective, you can do a unique restart later, after you complete all the required configuration changes.

  6. If the cluster you are scaling up is configured for automatic service migration, use the Oracle WebLogic Server Administration Console to update the automatically created WLS_XYZn (migratable) with the recommended migration policy, because by default it is set to Manual Service Migration Only.

    Use the following table for the list of migratable targets to update:

    Table 24-12 The Recommended Migratable Targets to Update

    Cluster to Scale Up | Migratable Target to Update | Migration Policy
    WSM-PM_Cluster | Not applicable | Not applicable
    SOA_Cluster | WLS_SOA3 (migratable) | Auto-Migrate Failure Recovery Services
    ESS_Cluster | Not applicable | Not applicable
    OSB_Cluster | WLS_OSB3 (migratable) | Auto-Migrate Failure Recovery Services
    BAM_Cluster | WLS_BAM3 (migratable) | Auto-Migrate Exactly-Once Services
    MFT_Cluster | WLS_MFT3 (migratable) | Auto-Migrate Failure Recovery Services

    1. Go to Environment > Cluster > Migratable Servers.
    2. Click Lock and Edit.
    3. Click WLS_XYZ3 (migratable).
    4. Go to the Configuration tab and then Migration.
    5. Change the Service Migration Policy to the value listed in the table.
    6. Leave the Constrained Candidate Server list blank in case there are chosen servers. If no servers are selected, you can migrate this migratable target to any server in the cluster.
    7. Click Save, and then click Activate Changes.
  7. For components that use multiple migratable targets, such as BAM, in addition to step 6, create another migratable target. BAM is used here as an example: use the Oracle WebLogic Server Administration Console to clone WLS_BAM3 (migratable) into a new migratable target.
    1. Go to Environment > Cluster > Migratable Servers.
    2. Click Lock and Edit.
    3. Click WLS_BAM3 (migratable) and click Clone.
    4. Name the new target as WLS_BAM3_bam-exactly-once (migratable).
    5. Click the new migratable server.
    6. Go to the Configuration tab and select Migration.
    7. If not set, change the Service Migration Policy to Auto-Migrate Exactly-Once Services.
    8. Leave the Constrained Candidate Server list blank. If no servers are selected, you can migrate this migratable target to any server in the cluster.
    9. Click Save, and then click Activate Changes.
  8. Update the Constrained Candidate Server list in the existing migratable servers in the cluster that you are scaling up, because by default they are pre-populated with only the WLS_XYZ1 and WLS_XYZ2 servers.

    Use the following table to identify the migratable servers that have to be updated:

    Table 24-13 The Existing Migratable Targets to Update

    Cluster to Scale Up | Existing Migratable Target to Update | Constrained Candidate Server
    WSM-PM_Cluster | Not applicable | Leave empty
    SOA_Cluster | WLS_SOA1 (migratable), WLS_SOA2 (migratable) | Leave empty
    ESS_Cluster | Not applicable | Leave empty
    OSB_Cluster | WLS_OSB1 (migratable), WLS_OSB2 (migratable) | Leave empty
    BAM_Cluster | WLS_BAM1 (migratable), WLS_BAM2 (migratable), WLS_BAM1_bam-exactly-once (migratable), WLS_BAM2_bam-exactly-once (migratable) | Leave empty
    MFT_Cluster | WLS_MFT1 (migratable), WLS_MFT2 (migratable) | Leave empty

    1. Go to each migratable server.
    2. Go to the tab Configuration > Migration > Constrained Candidate Server.

      You can leave the server list blank to make these migratable targets migrate to any server in this cluster, including the newly created managed server.

    3. Click Save and Activate Changes. Although you need to restart the servers for this change to be effective, you can do a unique restart later, after you complete all the required configuration changes.
  9. Create the required persistent stores for the JMS servers.
    1. Sign in to WebLogic Console and go to Services > Persistent Stores.
    2. Click New and select Create JDBCStore.

    Use the following table to create the required persistent stores:

    Note:

    The numbers in the names and prefixes of the existing resources were assigned automatically by the Configuration Wizard during domain creation.

    For example:
    UMSJMSJDBCStore_auto_1 — soa_1
    UMSJMSJDBCStore_auto_2 — soa_2
    BPMJMSJDBCStore_auto_1 — soa_3
    BPMJMSJDBCStore_auto_2 — soa_4
    SOAJMSJDBCStore_auto_1 — soa_5
    SOAJMSJDBCStore_auto_2 — soa_6

    Review the existing prefixes and select a new and unique prefix and name for each new persistent store.

    To avoid naming conflicts and simplify the configuration, new resources are qualified with the scaled tag and are shown here as an example.

    Table 24-14 The New Resources Qualified with the Scaled Tag

    Cluster to Scale Up | Persistent Store | Prefix Name | Data Source | Target
    WSM-PM_Cluster | Not applicable | Not applicable | Not applicable | Not applicable
    SOA_Cluster | UMSJMSJDBCStore_soa_scaled_3 | soaums_scaled_3 | WLSSchemaDataSource | WLS_SOA3 (migratable)
    SOA_Cluster | SOAJMSJDBCStore_soa_scaled_3 | soajms_scaled_3 | WLSSchemaDataSource | WLS_SOA3 (migratable)
    SOA_Cluster | BPMJMSJDBCStore_soa_scaled_3 | soabpm_scaled_3 | WLSSchemaDataSource | WLS_SOA3 (migratable)
    SOA_Cluster | (only when you use Insight) ProcMonJMSJDBCStore_soa_scaled_3 | soaprocmon_scaled_3 | WLSSchemaDataSource | WLS_SOA3 (migratable)
    ESS_Cluster | Not applicable | Not applicable | Not applicable | Not applicable
    OSB_Cluster | UMSJMSJDBCStore_osb_scaled_3 | osbums_scaled_3 | WLSSchemaDataSource | WLS_OSB3 (migratable)
    OSB_Cluster | OSBJMSJDBCStore_osb_scaled_3 | osbjms_scaled_3 | WLSSchemaDataSource | WLS_OSB3 (migratable)
    OSB_Cluster | (only when you use Insight) ProcMonJMSJDBCStore_osb_scaled_3 | osbprocmon_scaled_3 | WLSSchemaDataSource | WLS_OSB3 (migratable)
    BAM_Cluster | UMSJMSJDBCStore_bam_scaled_3 | bamums_scaled_3 | WLSSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamPersistenceJmsJDBCStore_bam_scaled_3 | bamP_scaled_3 | WLSSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamReportCacheJmsJDBCStore_bam_scaled_3 | bamR_scaled_3 | WLSSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamAlertEngineJmsJDBCStore_bam_scaled_3 | bamA_scaled_3 | WLSSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamJmsJDBCStore_bam_scaled_3 | bamjms_scaled_3 | WLSSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamCQServiceJmsJDBCStore_bam_scaled_3 | bamC_scaled_3 | WLSSchemaDataSource | WLS_BAM3*
    MFT_Cluster | MFTJMSJDBCStore_mft_scaled_3 | mftjms_scaled_3 | WLSSchemaDataSource | WLS_MFT3 (migratable)

    Note:

    (*) BamCQServiceJmsServers host local queues for the BAM CQService (Continuous Query Engine) and are meant to be local. They are intentionally targeted to the WebLogic servers directly and not to the migratable targets.
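    Alternatively, you can create the persistent stores with WLST instead of the console. The following sketch creates one of the SOA stores from the previous table; the admin URL, credentials, and MBean paths are assumptions to adapt to your environment, and you repeat the same pattern for each store listed above.

      # WLST sketch (illustrative): create one JDBC persistent store for the new server.
      connect('weblogic_admin_user', '<password>', 't3://ADMINVHN:7001')  # assumed admin URL and credentials
      edit()
      startEdit()

      cd('/')
      store = cmo.createJDBCStore('UMSJMSJDBCStore_soa_scaled_3')
      # Back the store with the WLSSchemaDataSource and a unique prefix
      store.setDataSource(getMBean('/JDBCSystemResources/WLSSchemaDataSource'))
      store.setPrefixName('soaums_scaled_3')
      # Target the store to the migratable target of the new managed server
      store.addTarget(getMBean('/MigratableTargets/WLS_SOA3 (migratable)'))

      save()
      activate(block='true')
      disconnect()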
  10. Create the required JMS Servers for the new managed server.
    1. Go to WebLogic Console > Services > Messaging > JMS Servers.
    2. Click Lock & Edit.
    3. Click New.

    Use the following table to create the required JMS Servers. Assign the previously created persistent stores to each JMS server:

    Note:

    The numbers in the names of the existing resources were assigned automatically by the Configuration Wizard during domain creation. Review the existing JMS server names and select a new and unique name for each new JMS server. To avoid naming conflicts and simplify the configuration, new resources are qualified with the product_scaled_N tag and are shown here as an example.

    Cluster to Scale Up | JMS Server Name | Persistent Store | Target
    WSM-PM_Cluster | Not applicable | Not applicable | Not applicable
    SOA_Cluster | UMSJMSServer_soa_scaled_3 | UMSJMSJDBCStore_soa_scaled_3 | WLS_SOA3 (migratable)
    SOA_Cluster | SOAJMSServer_soa_scaled_3 | SOAJMSJDBCStore_soa_scaled_3 | WLS_SOA3 (migratable)
    SOA_Cluster | BPMJMSServer_soa_scaled_3 | BPMJMSJDBCStore_soa_scaled_3 | WLS_SOA3 (migratable)
    SOA_Cluster | (only when you use Insight) ProcMonJMSServer_soa_scaled_3 | ProcMonJMSJDBCStore_soa_scaled_3 | WLS_SOA3 (migratable)
    ESS_Cluster | Not applicable | Not applicable | Not applicable
    OSB_Cluster | UMSJMSServer_osb_scaled_3 | UMSJMSJDBCStore_osb_scaled_3 | WLS_OSB3 (migratable)
    OSB_Cluster | wlsbJMSServer_osb_scaled_3 | OSBJMSJDBCStore_osb_scaled_3 | WLS_OSB3 (migratable)
    OSB_Cluster | (only when you use Insight) ProcMonJMSServer_osb_scaled_3 | ProcMonJMSJDBCStore_osb_scaled_3 | WLS_OSB3 (migratable)
    BAM_Cluster | UMSJMSServer_bam_scaled_3 | UMSJMSJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamPersistenceJmsServer_bam_scaled_3 | BamPersistenceJmsJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamReportCacheJmsServer_bam_scaled_3 | BamReportCacheJmsJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamAlertEngineJmsServer_bam_scaled_3 | BamAlertEngineJmsJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BAMJMSServer_bam_scaled_3 | BamJmsJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
    BAM_Cluster | BamCQServiceJmsServer_bam_scaled_3 | BamCQServiceJmsJDBCStore_bam_scaled_3 | WLS_BAM3*
    MFT_Cluster | MFTJMSServer_mft_scaled_3 | MFTJMSJDBCStore_mft_scaled_3 | WLS_MFT3 (migratable)

    Note:

    (*) BamCQServiceJmsServers host local queues for the BAM CQService (Continuous Query Engine) and are meant to be local. They are intentionally targeted to the WebLogic servers directly and not to the migratable targets.
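    As with the persistent stores, you can script the JMS server creation with WLST. The sketch below creates one JMS server from the table above and assigns it the store created in the previous step; the admin URL, credentials, and paths are assumptions to adapt, and you repeat the pattern for each JMS server.

      # WLST sketch (illustrative): create a JMS server for the new managed server
      # and assign it the JDBC persistent store created in the previous step.
      connect('weblogic_admin_user', '<password>', 't3://ADMINVHN:7001')  # assumed admin URL and credentials
      edit()
      startEdit()

      cd('/')
      jmsServer = cmo.createJMSServer('UMSJMSServer_soa_scaled_3')
      jmsServer.setPersistentStore(getMBean('/JDBCStores/UMSJMSJDBCStore_soa_scaled_3'))
      # Target the JMS server to the same migratable target as its store
      jmsServer.addTarget(getMBean('/MigratableTargets/WLS_SOA3 (migratable)'))

      save()
      activate(block='true')
      disconnect()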

  11. Update the SubDeployment Targets for JMS Modules (if applicable) to include the recently created JMS servers.
    1. Expand Services > Messaging > JMS Modules.
    2. Click the JMS module. For example: BPMJMSModule.

      Use the following table to identify the JMS modules to update depending on the cluster that you are scaling up:

      Cluster to Scale Up | JMS Module to Update | JMS Server to Add to the Subdeployment
      WSM-PM_Cluster | Not applicable | Not applicable
      SOA_Cluster | UMSJMSSystemResource * | UMSJMSServer_soa_scaled_3
      SOA_Cluster | SOAJMSModule | SOAJMSServer_soa_scaled_3
      SOA_Cluster | BPMJMSModule | BPMJMSServer_soa_scaled_3
      SOA_Cluster | (Only if you have configured Insight) ProcMonJMSModule * | ProcMonJMSServer_soa_scaled_3
      ESS_Cluster | Not applicable | Not applicable
      OSB_Cluster | UMSJMSSystemResource * | UMSJMSServer_osb_scaled_3
      OSB_Cluster | jmsResources (scope Global) | wlsbJMSServer_osb_scaled_3
      OSB_Cluster | (Only if you have configured Insight) ProcMonJMSModule * | ProcMonJMSServer_osb_scaled_3
      BAM_Cluster | BamPersistenceJmsSystemModule | BamPersistenceJmsServer_bam_scaled_3
      BAM_Cluster | BamReportCacheJmsSystemModule | BamReportCacheJmsServer_bam_scaled_3
      BAM_Cluster | BamAlertEngineJmsSystemModule | BamAlertEngineJmsServer_bam_scaled_3
      BAM_Cluster | BAMJMSSystemResource | BAMJMSServer_bam_scaled_3
      BAM_Cluster | BamCQServiceJmsSystemModule | Not applicable (no subdeployment)
      BAM_Cluster | UMSJMSSystemResource * | UMSJMSServer_bam_scaled_3 *
      MFT_Cluster | MFTJMSModule | MFTJMSServer_mft_scaled_3

      (*) Some modules (UMSJMSSystemResource, ProcMonJMSModule) may be targeted to more than one cluster. Ensure that you update the appropriate subdeployment in each case.
    3. Go to Configuration > Subdeployment.
    4. Add the corresponding JMS Server to the existing subdeployment.

      Note:

      The Subdeployment module name is a random name in the form of SOAJMSServerXXXXXX, UMSJMSServerXXXXXX, or BPMJMSServerXXXXXX, resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).

    5. Click Save, and then click Activate Changes.
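    The subdeployment update can also be scripted. The following WLST sketch appends the new JMS server to an existing subdeployment without removing its current targets. The subdeployment name shown (SOAJMSServer123456) is a hypothetical placeholder for the randomly generated name mentioned in the note above; look up the real name in your domain and verify the calls against your WebLogic version.

      # WLST sketch (illustrative): add the new JMS server to the existing
      # subdeployment of a JMS module, keeping the current targets in place.
      connect('weblogic_admin_user', '<password>', 't3://ADMINVHN:7001')  # assumed admin URL and credentials
      edit()
      startEdit()

      # 'SOAJMSServer123456' is a placeholder for the generated subdeployment name
      cd('/JMSSystemResources/SOAJMSModule/SubDeployments/SOAJMSServer123456')
      cmo.addTarget(getMBean('/JMSServers/SOAJMSServer_soa_scaled_3'))

      save()
      activate(block='true')
      disconnect()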
  12. If you are scaling up a BAM cluster, you must create local queues for the new server in the BamCQServiceJmsSystemModule module. Follow these steps to create them:
    1. Go to WebLogic Console > Services > Messaging > JMS Modules.
    2. Click Lock & Edit.
    3. Click BamCQServiceJmsSystemModule.
    4. Click Targets.
    5. Add WLS_BAM3 to the targets and click Save.
    6. Click New.
    7. Select Queue and click Next.
    8. Name it BamCQServiceAlertEngineQueue_auto_3, and click Next.
    9. Create a new Subdeployment with the target BamCQServiceJmsServer_bam_scaled_3 and select it for the queue.
    10. Click Finish.
    11. Click the newly created queue BamCQServiceAlertEngineQueue_auto_3.
    12. Go to Configuration>General>Advanced.
    13. Set Local JNDI Name to queue/oracle.beam.cqservice.mdbs.alertengine.
    14. Click Save.
    15. Repeat these steps to create the other queue, BamCQServiceReportCacheQueue_auto_3, with the information in Table 24-15.
    16. After you finish, you have the two new local queues for the new server, as listed in Table 24-15.

      Table 24-15 Information to Create the Local Queues

      Name | Type | Local JNDI Name | Subdeployment
      BamCQServiceAlertEngineQueue_auto_3 | Queue | queue/oracle.beam.cqservice.mdbs.alertengine | BamCQServiceJmsServer_auto_3
      BamCQServiceReportCacheQueue_auto_3 | Queue | queue/oracle.beam.cqservice.mdbs.reportcache | BamCQServiceJmsServer_auto_3

    17. Click Activate Changes.
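    If you prefer WLST for this step, the following sketch targets the module to the new server and creates one of the local queues from Table 24-15. The subdeployment name used here is assumed to be the one you created in sub-step 9; the admin URL and credentials are also assumptions, so verify both in your domain before running the script.

      # WLST sketch (illustrative): target BamCQServiceJmsSystemModule to WLS_BAM3
      # and create one of the required local queues.
      connect('weblogic_admin_user', '<password>', 't3://ADMINVHN:7001')  # assumed admin URL and credentials
      edit()
      startEdit()

      # Add the new server as a target of the module
      cd('/JMSSystemResources/BamCQServiceJmsSystemModule')
      cmo.addTarget(getMBean('/Servers/WLS_BAM3'))

      # Create the local queue inside the module's JMS resource
      cd('/JMSSystemResources/BamCQServiceJmsSystemModule/JMSResource/BamCQServiceJmsSystemModule')
      queue = cmo.createQueue('BamCQServiceAlertEngineQueue_auto_3')
      queue.setSubDeploymentName('BamCQServiceJmsServer_bam_scaled_3')  # assumed subdeployment name
      queue.setLocalJNDIName('queue/oracle.beam.cqservice.mdbs.alertengine')

      save()
      activate(block='true')
      disconnect()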
  13. Restart all servers (except the newly created server) for all the previous changes to be effective. You can restart in a rolling manner to eliminate downtime.
  14. Start the new managed server.
  15. When scaling up the MFT_Cluster:

    Default SFTP/FTP ports are used in the new server. If you are not using the defaults, follow the steps described in Configuring the SFTP Ports to configure the ports in the SFTP server. When scaling up, use different SFTP/FTP ports for the new server that do not conflict with the existing server on the same machine.

  16. Update the web tier configuration to include this new server:
    • If you are using OTD, log in to Enterprise Manager and update the corresponding origin pool, as explained in Creating the Required Origin Server Pools, to add the new server to the pool.

    • If you are using OHS, there is no need to add the new server to OHS. By default, the Dynamic Server List is used, which means that the list of servers in the cluster is automatically updated when a new node becomes part of the cluster, so adding it to the list is not mandatory. The WebLogicCluster directive needs only a sufficient number of redundant server:port combinations to guarantee initial contact in case of a partial outage.

      If there are expected scenarios where the Oracle HTTP Server is restarted and only the new server would be up, update the WebLogicCluster directive to include the new server. For example:

      <Location /osb>
        WLSRequest ON
        WebLogicCluster SOAHOST1:8011,SOAHOST2:8012,SOAHOST3:8013
        WLProxySSL ON
        WLProxySSLPassThrough ON
      </Location>
      
Verifying the Scale Up of Static Clusters
After scaling up and starting the new server, proceed with the following verifications:
  1. Verify the correct routing to web applications.

    For example:

    1. Access the application on the load balancer:
      soa.example.com/soa-infra
    2. Check that there is also activity in the new server:
      Go to Cluster > Deployments > soa-infra > Monitoring > Workload.
    3. You can also verify that the web sessions are created in the new server:
      • Go to Cluster > Deployments.

      • Expand soa-infra, click soa-infra Web application.

      • Go to Monitoring to check the web sessions in each server.

      You can use the sample URLs and the corresponding web applications that are identified in the following table to check whether sessions are created in the new server for the cluster that you are scaling up:

      Cluster to Verify | Sample URL to Test | Web Application Module
      WSM-PM_Cluster | http://soainternal.example.com/wsm-pm | wsm-pm > wsm-pm
      SOA_Cluster | https://soa.example.com/soa-infra | soa-infra > soa-infra
      ESS_Cluster | https://soa.example.com/ESSHealthCheck | ESSHealthCheck
      OSB_Cluster | https://osb.example.com/sbinspection.wsil | Service Bus WSIL
      MFT_Cluster | https://mft.example.com/mftconsole | mftconsole
      BAM_Cluster | https://soa.example.com/bam/composer | BamComposer > /bam/composer

  2. Verify that JMS messages are being produced to and consumed from the destinations in all three servers.
    1. Go to JMS Servers.
    2. Click JMS Server > Monitoring.
  3. Verify the service migration, as described in Validating Automatic Service Migration in Static Clusters.

Scaling Up the Topology for Dynamic Clusters

This section lists the prerequisites, explains the procedure to scale up the topology with dynamic clusters, describes the steps to verify the scale-up process, and finally the steps to scale down (shrink).

You already have a node that runs a managed server that is configured with Fusion Middleware components. The node contains a WebLogic Server home and an Oracle Fusion Middleware SOA home in shared storage. Use these existing installations and domain directories to create the new managed servers. You do not need to install the WLS or SOA binaries or to run the pack and unpack commands, because the new server runs on an existing node.

Prerequisites for Scaling Up

Before performing a scale up of the topology, you must ensure that you meet the following prerequisites:

  • The starting point is a cluster with managed servers already running.

  • It is assumed that the cluster syntax is used for all internal RMI invocations, JMS adapter, and so on.

Scaling Up a Dynamic Cluster

Use the SOA EDG topology as a reference, with two application tier hosts (SOAHOST1 and SOAHOST2), each running one managed server of each cluster. The example explains how to add a third managed server to the cluster that runs in SOAHOST1. WLS_XYZn is the generic name given to the new managed server that you add to the cluster. Depending on the cluster that is being extended and the number of existing nodes, the actual names are WLS_SOA3, WLS_OSB3, WLS_ESS3, and so on.

To scale up the cluster, complete the following steps:

  1. In a scale-up, there is no need to add a new machine to the domain because the new server is added to an existing machine.

    If the CalculatedMachineNames attribute is set to true, then the MachineNameMatchExpression attribute is used to select the set of machines used for the dynamic servers. Assignments are made by using a round-robin algorithm.

    The following table lists examples of machine assignments in a dynamic cluster.

    Table 24-16 Examples of machine assignments in a dynamic cluster

    Machines in Domain | MachineNameMatchExpression Configuration | Dynamic Server Machine Assignments
    SOAHOST1, SOAHOST2 | SOAHOST* | dyn-server-1: SOAHOST1; dyn-server-2: SOAHOST2; dyn-server-3: SOAHOST1; dyn-server-4: SOAHOST2; ...
    SOAHOST1, SOAHOST2, SOAHOST3 | SOAHOST* | dyn-server-1: SOAHOST1; dyn-server-2: SOAHOST2; dyn-server-3: SOAHOST3; dyn-server-4: SOAHOST1; ...

    See https://docs.oracle.com/middleware/1212/wls/CLUST/dynamic_clusters.htm#CLUST678.

  2. If you are using SOAHOST${id} as the listen address in the server template, update the /etc/hosts files to add the alias SOAHOSTn for the new node, as described in Verifying IP Addresses and Host Names in DNS or Hosts File.

    The new server WLS_XYZn listens on SOAHOSTn. This alias must resolve to the IP address of the host where the new managed server runs. See Table 24-16.

    Example:
    10.229.188.204 host1-vip.example.com host1-vip ADMINVHN 
    10.229.188.205 host1.example.com host1 SOAHOST1 SOAHOST3
    10.229.188.206 host2.example.com host2 SOAHOST2	
    10.229.188.207 host3.example.com host3 WEBHOST1 
    10.229.188.208 host4.example.com host4 WEBHOST2
    

    If you are using the machine name macro ${machineName} in the listen address of the template, the new server WLS_XYZn listens on the address of the SOAHOSTn machine. In this case, you do not need to add aliases to the /etc/hosts file when you scale up the dynamic cluster. See Configuring Listen Addresses in Dynamic Cluster Server Templates.

  3. Use the Oracle WebLogic Server Administration Console to increase the dynamic cluster to include a new managed server:
    1. Click Lock & Edit.
    2. Go to Domain > Environment > Clusters.
    3. Select the cluster that you want to scale up.
    4. Go to Configuration > Servers.
    5. Set Dynamic Cluster Size to 3. By default, the cluster size is 2.
    6. Click Save, and then click Activate Changes.

      Note:

      If you scale up to more than three servers, you must also update the Number of Servers in Cluster Address value, which is 3 by default. Although Oracle recommends that you use the cluster syntax for t3 calls, the cluster address is used for calls from external elements through t3, for EJB stubs, and so on.
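    The same change can be made with WLST instead of the console. This sketch is only an illustration: the cluster name (SOA_Cluster), the name of its DynamicServers child bean, the admin URL, and the credentials are assumptions that you must adapt to the cluster that you are scaling up.

      # WLST sketch (illustrative): grow the dynamic cluster to three servers.
      connect('weblogic_admin_user', '<password>', 't3://ADMINVHN:7001')  # assumed admin URL and credentials
      edit()
      startEdit()

      # The DynamicServers child bean is typically named after the cluster itself
      cd('/Clusters/SOA_Cluster/DynamicServers/SOA_Cluster')
      cmo.setDynamicClusterSize(3)

      # Only needed when you scale beyond the default of 3 servers in the cluster address
      cd('/Clusters/SOA_Cluster')
      cmo.setNumberOfServersInClusterAddress(3)

      save()
      activate(block='true')
      disconnect()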

  4. When scaling up the SOA_Cluster:

    If BPM Web Forms are used, update the startWebLogic.sh customizations in MSERVER_HOME for BPM to include the new server, as described in Updating SOA BPM Servers for Web Forms.

  5. When scaling up the OSB_Cluster:
    Restart the Admin Server to view the new server in the Service Bus Dashboard.
  6. When scaling up the MFT_Cluster:
    Default SFTP/FTP ports are used in the new server. If you are not using the default values, follow the steps described in Configuring the SFTP Ports to configure the ports in the SFTP server.

    When scaling up, use different SFTP/FTP ports for the new server that do not conflict with the existing server on the same machine.

  7. Update the web tier configuration to include this new server:
    • If you are using OTD, log in to Enterprise Manager and update the corresponding origin pool, as explained in Creating the Required Origin Server Pools, to add the new server to the pool.

    • If you are using OHS, there is no need to add the new server to OHS. By default, the Dynamic Server List is used, which means that the list of servers in the cluster is automatically updated when a new node becomes part of the cluster, so adding it to the list is not mandatory. The WebLogicCluster directive needs only a sufficient number of redundant server:port combinations to guarantee initial contact in case of a partial outage.

      If there are expected scenarios where the Oracle HTTP Server is restarted and only the new server would be up, update the WebLogicCluster directive to include the new server.

      For example:

      <Location /osb>
        WLSRequest ON
        WebLogicCluster SOAHOST1:8011,SOAHOST2:8012,SOAHOST3:8013
        WLProxySSL ON
        WLProxySSLPassThrough ON
      </Location>
      
  8. Start the new managed server from the Oracle WebLogic Server Administration Console.
  9. Verify that the newly created managed server is running.
Verifying the Scale Up of Dynamic Clusters
After you scale up and start the new server, proceed with the following verifications:
  1. Verify the correct routing to web applications.

    For example:

    1. Access the application on the load balancer:
      soa.example.com/soa-infra
    2. Check that there is also activity in the new server:
      Go to Cluster > Deployments > soa-infra > Monitoring > Workload.
    3. You can also verify that the web sessions are created in the new server:
      • Go to Cluster > Deployments.

      • Expand soa-infra, click soa-infra Web application.

      • Go to Monitoring to check the web sessions in each server.

      You can use the sample URLs and the corresponding web applications that are identified in the following table to check whether sessions are created in the new server for the cluster that you are scaling up:

      Cluster to Verify | Sample URL to Test | Web Application Module
      WSM-PM_Cluster | http://soainternal.example.com/wsm-pm | wsm-pm > wsm-pm
      SOA_Cluster | https://soa.example.com/soa-infra | soa-infra > soa-infra
      ESS_Cluster | https://soa.example.com/ESSHealthCheck | ESSHealthCheck
      OSB_Cluster | https://osb.example.com/sbinspection.wsil | Service Bus WSIL
      MFT_Cluster | https://mft.example.com/mftconsole | mftconsole
      BAM_Cluster | https://soa.example.com/bam/composer | BamComposer > /bam/composer

  2. Verify that JMS messages are being produced to and consumed from the destinations in all three servers.
    1. Go to JMS Servers.
    2. Click JMS Server > Monitoring.
  3. Verify the service migration, as described in Configuring Automatic Service Migration for Dynamic Clusters.

Scaling Down the Topology

When you scale down the topology, you remove the managed servers that were added to the existing hosts.

Scaling Down the Topology for Static Clusters

To scale down the topology for static clusters:
  1. To scale down the cluster without any JMS data loss, perform the steps described in Managing the JMS Messages in a SOA Server.

    After you complete the steps, continue with the scale-down procedure.

  2. Check the pending JTA transactions. Before you shut down the server, check whether there are any active JTA transactions in the server that you want to delete. Navigate to the WebLogic Console and click Environment > Servers > <server name> > Monitoring > JTA > Transactions.

    Note:

    If you have used the Shutdown Recovery policy for JTA, the transactions are recovered in another server after you shut down the server.

  3. Shut down the server by using the When work completes option.

    Note:

    This operation can take a long time if there are active HTTP sessions or long transactions in the server. For more information about graceful shutdown, see Using Server Life Cycle Commands in Administering Server Startup and Shutdown for Oracle WebLogic Server.
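    If you script the shutdown, the following WLST sketch performs the equivalent graceful shutdown, waiting for work to complete before stopping the server. The server name, admin URL, and credentials are assumptions to adapt to your environment.

      # WLST sketch (illustrative): graceful shutdown of the server to be removed,
      # equivalent to the console option to stop the server when work completes.
      connect('weblogic_admin_user', '<password>', 't3://ADMINVHN:7001')  # assumed admin URL and credentials

      # ignoreSessions='false' waits for in-flight sessions; force='false' keeps the shutdown graceful
      shutdown('WLS_SOA3', 'Server', ignoreSessions='false', force='false', block='true')

      disconnect()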

  4. Use the Oracle WebLogic Server Administration Console to delete the migratable target that is used by the server that you want to delete.
    1. Click Lock & Edit.
    2. Go to Domain > Environment > Cluster > Migratable Target.
    3. Select the migratable target that you want to delete.
    4. Click Delete.
    5. Click Yes.
    6. Click Activate Changes.
  5. Use the Oracle WebLogic Server Administration Console to delete the new server:
    1. Click Lock & Edit.
    2. Go to Domain > Environment > Servers.
    3. Select the server that you want to delete.
    4. Click Delete.
    5. Click Yes.
    6. Click Activate Changes.

    Note:

    If the migratable target was not deleted in the previous step, you get the following error message:

    The following failures occurred: --MigratableTargetMBean WLS_SOA3_soa-failure-recovery (migratable) does not have a preferred server set.
    Errors must be corrected before proceeding.
  6. Use the Oracle WebLogic Server Administration Console to update the subdeployment of each JMS Module that is used by the cluster you are shrinking.

    Use the following table to identify the module for each cluster and perform this action for each module:

    Cluster to Scale Down | JMS Module to Update | JMS Server to Delete from the Subdeployment
    WSM-PM_Cluster | Not applicable | Not applicable
    SOA_Cluster | UMSJMSSystemResource | UMSJMSServer_soa_scaled_3
    SOA_Cluster | SOAJMSModule | SOAJMSServer_soa_scaled_3
    SOA_Cluster | BPMJMSModule | BPMJMSServer_soa_scaled_3
    ESS_Cluster | Not applicable | Not applicable
    OSB_Cluster | UMSJMSSystemResource | UMSJMSServer_osb_scaled_3
    OSB_Cluster | jmsResources (scope Global) | wlsbJMSServer_osb_scaled_3
    BAM_Cluster | BamPersistenceJmsSystemModule | BamPersistenceJmsServer_bam_scaled_3
    BAM_Cluster | BamReportCacheJmsSystemModule | BamReportCacheJmsServer_bam_scaled_3
    BAM_Cluster | BamAlertEngineJmsSystemModule | BamAlertEngineJmsServer_bam_scaled_3
    BAM_Cluster | BAMJMSSystemResource | BAMJMSServer_bam_scaled_3
    BAM_Cluster | BamCQServiceJmsSystemModule | Not applicable (no subdeployment)
    MFT_Cluster | MFTJMSModule | MFTJMSServer_mft_scaled_3

    1. Click Lock & Edit.
    2. Go to Domain > Services > Messaging > JMS Modules.
    3. Click the JMS module.
    4. Click the subdeployment.
    5. Unselect the JMS server that was created for the deleted server.
    6. Click Save.
    7. Click Activate Changes.
  7. If you want to scale down a BAM cluster, use the Oracle WebLogic Server Administration Console to delete the local queues that were created for the new server:
    1. Click Lock & Edit.
    2. Go to WebLogic Console > Services > Messaging > JMS Modules.
    3. Click BamCQServiceJmsSystemModule.
    4. Delete the local queues that were created for the new server:
      • BamCQServiceAlertEngineQueue_auto_3

      • BamCQServiceReportCacheQueue_auto_3

    5. Click Activate Changes.
  8. Use the Oracle WebLogic Server Administration Console to delete the JMS servers:
    1. Click Lock & Edit.
    2. Go to Domain > Services > Messaging > JMS Servers.
    3. Select the JMS Servers that you created for the new server.
    4. Click Delete.
    5. Click Yes.
    6. Click Activate Changes.
  9. Use the Oracle WebLogic Server Administration Console to delete the JMS persistent stores:
    1. Click Lock & Edit.
    2. Go to Domain > Services > Persistent Stores.
    3. Select the Persistent Stores that you created for the new server.
    4. Click Delete.
    5. Click Yes.
    6. Click Activate Changes.
  10. Update the Web tier configuration to remove references to the new server.

Scaling Down the Topology in a Dynamic Cluster

To scale down the topology in a dynamic cluster:
  1. To scale down the cluster without any JMS data loss, perform the steps described in Managing the JMS Messages in a SOA Server.

    After you complete the steps, continue with the scale-down procedure.

  2. Check the pending JTA transactions. Before you shut down the server, check whether there are any active JTA transactions in the server that you want to delete. Navigate to the WebLogic Console and click Environment > Servers > <server name> > Monitoring > JTA > Transactions.

    Note:

    If you have used the Shutdown Recovery policy for JTA, the transactions are recovered in another server after you shut down the server.

  3. Shut down the server by using the When work completes option.

    Note:

    • This operation can take a long time if there are active HTTP sessions or long transactions in the server. For more information about graceful shutdown, see Using Server Life Cycle Commands in Administering Server Startup and Shutdown for Oracle WebLogic Server.

    • In dynamic clusters, the JMS servers that run in the server that you want to delete, and that use “Always” as the migration policy, are migrated to another member of the cluster at this point (their server has just been shut down). The next time you restart the member that hosts them, these JMS servers do not start because their preferred server is no longer present in the cluster. Also check whether they receive any new messages during this interim period, because those messages could be lost. To preserve the messages, pause production and export the messages from these JMS servers before you restart any server in the cluster.

  4. Use the Oracle WebLogic Server Administration Console to reduce the dynamic cluster:
    1. Click Lock & Edit.
    2. Go to Domain > Environment > Clusters.
    3. Select the cluster that you want to scale down.
    4. Go to Configuration > Servers.
    5. Set the Dynamic Cluster Size back to 2.