23 Scaling Procedures for an Enterprise Deployment

The scaling procedures for an enterprise deployment include scale out, scale in, scale up, and scale down. During a scale-out operation, you add managed servers to new nodes. You can remove these managed servers by performing a scale in operation. During a scale-up operation, you add managed servers to existing hosts. You can remove these servers by performing a scale-down operation.

This chapter describes the procedures to scale out/in and scale up/down static and dynamic clusters.

Scaling Out the Topology

When you scale out the topology, you add new managed servers to new nodes.

This section describes the procedures to scale out the Identity Management topology with static and dynamic clusters.

Note:

Dynamic clusters are applicable only to the Governance domain components, such as Oracle SOA Suite and Oracle Identity Governance.

Scaling Out the Topology for Static Clusters

This section lists the prerequisites, explains the procedure to scale out the topology with static clusters, describes the steps to verify the scale-out process, and finally describes the steps to scale in (shrink) the topology.

Prerequisites for Scaling Out

Before you perform a scale out of the topology, you must ensure that you meet the following requirements:

  • The starting point is a cluster with managed servers already running.

  • The new node can access the existing home directories for WebLogic Server and Governance. Use the existing installations in shared storage. You do not need to install WebLogic Server or IDM binaries in a new location. However, you do need to run pack and unpack commands to bootstrap the domain configuration in the new node.

  • It is assumed that the cluster syntax is used for all internal RMI invocations, JMS adapter, and so on.
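For reference, the cluster syntax mentioned above is a comma-separated, cluster-aware provider URL rather than a single-server URL. A hypothetical example, using the sample host names and the SOA listen port that appear later in this chapter:

```
t3://oimhost1.example.com:8001,oimhost2.example.com:8001
```

With this form, new members added during a scale-out are reachable without rewriting every caller's URL.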

Scaling Out a Static Cluster
The steps provided in this procedure use the IDM EDG topology as a reference. Initially, there are two application tier hosts (OAMHOST1 and OAMHOST2, or OIMHOST1 and OIMHOST2), each running one managed server of each cluster. A new host (OAMHOST3 or OIMHOST3) is added to scale out the clusters with a third managed server. WLS_XYZn is the generic name given to the new managed server that you add to the cluster. Depending on the cluster that is being extended and the number of existing nodes, the actual names are WLS_OAM3, WLS_AMA3, WLS_SOA3, and so on.

To scale out the cluster, complete the following steps:

  1. On the new node, mount the existing FMW Home, which should include the IDM installation and the domain directory. Ensure that the new node has access to this directory, similar to the rest of the nodes in the domain. Also, ensure that the FMW Home you mount is the one associated with the domain you are extending.
  2. Per Oracle's recommendation, the inventory is located in the shared directory (for example, /u01/oracle/products/oraInventory), so you do not need to attach any home. However, you may want to execute the script: /u01/oracle/products/oraInventory/createCentralInventory.sh.

    This command creates and updates the local file /etc/oraInst.loc in the new node to point it to the oraInventory location.

    If there are other inventory locations in the new host, you can use them, but the /etc/oraInst.loc file must be updated accordingly in each case.
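    The effect of createCentralInventory.sh on the pointer file can be sketched as follows; the inventory path is the sample value above, and the oinstall group is a typical default, not something this guide specifies:

```shell
# Sample central inventory location from this guide (adjust to your environment).
INVENTORY_LOC=/u01/oracle/products/oraInventory

# On the new node you would run, as root:
#   ${INVENTORY_LOC}/createCentralInventory.sh
# which creates or updates /etc/oraInst.loc with content equivalent to:
ORAINST_CONTENT="inventory_loc=${INVENTORY_LOC}
inst_group=oinstall"   # oinstall is a typical install group; verify yours

printf '%s\n' "$ORAINST_CONTENT"
```

    Verify the resulting /etc/oraInst.loc against the inventory location actually used by your installation before proceeding.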

  3. Update the /etc/hosts files to add the name of the new host (unless you are using DNS), as described in Verifying IP Addresses and Host Names in DNS or Hosts File. If you are using host aliases such as OIMHOST, then ensure that you add an entry for that host.

    For example:

    10.229.188.204 host1-vip.example.com host1-vip ADMINVHN
    10.229.188.205 host1.example.com host1 OIMHOST1
    10.229.188.206 host2.example.com host2 OIMHOST2
    10.229.188.207 host3.example.com host3 WEBHOST1
    10.229.188.208 host4.example.com host4 WEBHOST2
    10.229.188.209 host5.example.com host5 OIMHOST3 
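    A minimal sketch of an idempotent hosts-file update for the new entry above; the scratch-file default is an assumption so that the sketch can be tried safely, and you would target /etc/hosts as root in practice:

```shell
# Sketch: add the new host entry to a hosts file only if it is missing.
# HOSTS_FILE defaults to a scratch file for safe experimentation;
# point it at /etc/hosts (as root) on a real system.
HOSTS_FILE=${HOSTS_FILE:-/tmp/hosts.scaleout}
ENTRY='10.229.188.209 host5.example.com host5 OIMHOST3'

touch "$HOSTS_FILE"
# Append the entry only when an identical line is not already present.
grep -qF "$ENTRY" "$HOSTS_FILE" || printf '%s\n' "$ENTRY" >> "$HOSTS_FILE"
cat "$HOSTS_FILE"
```

    Because the append is guarded by the grep, re-running the sketch does not duplicate the entry.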
  4. Log in to the Oracle WebLogic Administration Console to create a new machine:
    1. Go to Environment > Machines.
    2. Click New to create a new machine for the new node.
    3. Set Name to OIMHOSTn or OAMHOSTn.
    4. Set Machine OS to Linux.
    5. Click Next.
    6. Set Type to Plain.
    7. Set Listen Address to the new host name. For example, OIMHOST3.
    8. Click Finish, and then click Activate Changes.
  5. Use the Oracle WebLogic Server Administration Console to clone the first managed server in the cluster into a new managed server.
    1. In the Change Center section, click Lock & Edit.
    2. Go to Environment > Servers.
    3. Select the first managed server in the cluster to scale out and click Clone.
    4. Use Table 23-1, Details of the Cluster to be Scaled Out, to set the corresponding name, listen address, and listen port, depending on the cluster that you want to scale out.
    5. Click the new managed server, and then Configuration > General.
    6. Update the Machine from OIMHOST1 to OIMHOSTn.
    7. Click Save, and then click Activate Changes.

    Table 23-1 Details of the Cluster to be Scaled Out

    Cluster to Scale Out | Server to Clone | New Server Name | Server Listen Address | Server Listen Port
    WSM-PM_Cluster | WLS_WSM1 | WLS_WSMn | OIMHOSTn | 7010
    SOA_Cluster | WLS_SOA1 | WLS_SOAn | OIMHOSTn | 8001
    OIM_Cluster | WLS_OIM1 | WLS_OIMn | OIMHOSTn | 14000
    OAM_Cluster | WLS_OAM1 | WLS_OAMn | OAMHOSTn | 14100
    AMA_Cluster | WLS_AMA1 | WLS_AMAn | OAMHOSTn | 14150

  6. Update the deployment Staging Directory Name of the new server, as described in Modifying the Upload and Stage Directories to an Absolute Path.
  7. Create a new key certificate and update the private key alias of the server, as described in Enabling SSL Communication Between the Middle Tier and the Hardware Load Balancer.
  8. By default, the cloned server uses the default file store for TLOGs. If the rest of the servers in the cluster that you are scaling out use TLOGs in a JDBC persistent store, update the TLOG persistent store of the new managed server:
    1. Go to Environment > Servers > WLS_XYZn > Configuration > Services.
    2. Change Transaction Log Store to JDBC.
    3. Change Data Source to WLSSchemaDatasource.
    4. Click Save, and then click Activate Changes.

    Use the following table to identify the clusters that use JDBC TLOGs by default:

    Table 23-2 The Name of Clusters that Use JDBC TLOGs by Default

    Cluster to Scale Out | New Server Name | TLOG Persistent Store
    WSM-PM_Cluster | WLS_WSMn | Default (file)
    SOA_Cluster | WLS_SOAn | JDBC
    OIM_Cluster | WLS_OIMn | JDBC
    OAM_Cluster | WLS_OAMn | Not applicable
    AMA_Cluster | WLS_AMAn | Not applicable

  9. If the cluster that you are scaling out is configured for automatic service migration, update the JTA Migration Policy to the required value.
    1. Go to Environment > Servers > WLS_XYZn > Configuration > Migration.
    2. Use Table 23-3 to set the recommended JTA Migration Policy, depending on the cluster that you want to scale out.

      Table 23-3 The Recommended JTA Migration Policy for the Cluster to be Scaled Out

      Cluster to Scale Out | New Server Name | JTA Migration Policy
      WSM-PM_Cluster | WLS_WSMn | Manual
      SOA_Cluster | WLS_SOAn | Failure Recovery
      OAM_Cluster | WLS_OAMn | Not applicable
      AMA_Cluster | WLS_AMAn | Not applicable
      OIM_Cluster | WLS_OIMn | Failure Recovery

    3. Click Save, and then click Activate Changes.
    4. For the rest of the servers already existing in the cluster, update the list of JTA candidate servers for JTA migration to include the new server.
      • Go to Environment > Servers > server > Configuration > Migration.

      • Go to JTA Candidate Servers and leave the list empty. An empty list means that all servers in the cluster are JTA candidate servers.

      • Click Save, and then click Activate Changes. Although you need to restart the servers for this change to be effective, you can perform a single restart later, after you complete all the required configuration changes.

  10. If the cluster you are scaling out is configured for automatic service migration, use the Oracle WebLogic Server Administration Console to update the automatically created migratable target WLS_XYZn (migratable) with the recommended migration policy, because by default it is set to Manual Service Migration Only.

    Use the following table for the list of migratable targets to update:

    Table 23-4 The Recommended Migratable Targets to Update

    Cluster to Scale Out | Migratable Target to Update | Migration Policy
    WSM-PM_Cluster | Not applicable | Not applicable
    SOA_Cluster | WLS_SOAn (migratable) | Auto-Migrate Failure Recovery Services
    OIM_Cluster | WLS_OIMn (migratable) | Auto-Migrate Failure Recovery Services
    OAM_Cluster | Not applicable | Not applicable
    AMA_Cluster | Not applicable | Not applicable

    1. Go to Environment > Migratable Servers.
    2. Click Lock & Edit.
    3. Click WLS_XYZ3 (migratable).
    4. Go to the tab Configuration > Migration.
    5. Change the Service Migration Policy to the value listed in the table.
    6. Leave the Constrained Candidate Server list blank, even if some servers are already selected. If no servers are selected, this migratable target can migrate to any server in the cluster.
    7. Click Save, and then click Activate Changes.
  11. Update the Constrained Candidate Server list in the existing migratable servers of the cluster that you are scaling out, because by default they are pre-populated with only the WLS_XYZ1 and WLS_XYZ2 servers.
    1. Go to each migratable server.
    2. Go to the tab Configuration > Migration > Constrained Candidate Server.

      You can leave the server list blank to make these migratable targets migrate to any server in this cluster, including the newly created managed server.

      Use the following table to identify the migratable servers that have to be updated:

      Table 23-5 The Existing Migratable Targets to Update

      Cluster to Scale Out | Existing Migratable Target to Update | Constrained Candidate Server
      WSM-PM_Cluster | Not applicable | Leave empty
      SOA_Cluster | WLS_SOA1 (migratable), WLS_SOA2 (migratable) | Leave empty
      OIM_Cluster | WLS_OIM1 (migratable), WLS_OIM2 (migratable) | Leave empty

    3. Click Save, and then click Activate Changes. Although you need to restart the servers for this change to be effective, you can perform a single restart later, after you complete all the required configuration changes.
  12. Create the required persistent stores for the JMS servers.
    1. Log in to WebLogic Console and go to Services > Persistent Stores.
    2. Click New and select Create JDBCStore.

    Use the following table to create the required persistent stores:

    Note:

    The numbers in the names and prefixes of the existing resources were assigned automatically by the Configuration Wizard during domain creation. For example:

    • BPMJMSJDBCStore_auto_1 - soa_1

    • BPMJMSJDBCStore_auto_2 - soa_2

    • JDBCStore-OIM_auto_1 - oim1

    • JDBCStore-OIM_auto_2 - oim2

    • SOAJMSJDBCStore_auto_1 - soa_1

    • SOAJMSJDBCStore_auto_2 - soa_2

    • UMSJMSJDBCStore_auto_1 - soa_1

    • UMSJMSJDBCStore_auto_2 - soa_2

    Review the existing prefixes and select a new, unique prefix and name for each new persistent store.

    To avoid naming conflicts and simplify the configuration, new resources are qualified with the scaled tag and are shown here as an example.

    Table 23-6 The New Resources Qualified with the Scaled Tag

    Cluster to Scale Out | Persistent Store | Prefix Name | Data Source | Target
    WSM-PM_Cluster | Not applicable | Not applicable | Not applicable | Not applicable
    SOA_Cluster | UMSJMSJDBCStore_soa_scaled_3 | soaums_scaled_3 | WLSSchemaDataSource | WLS_SOA3 (migratable)
    SOA_Cluster | SOAJMSJDBCStore_soa_scaled_3 | soajms_scaled_3 | WLSSchemaDataSource | WLS_SOA3 (migratable)
    SOA_Cluster | BPMJMSJDBCStore_soa_scaled_3 | soabpm_scaled_3 | WLSSchemaDataSource | WLS_SOA3 (migratable)
    SOA_Cluster | ProcMonJMSJDBCStore_soa_scaled_3 (only when you use Insight) | soaprocmon_scaled_3 | WLSSchemaDataSource | WLS_SOA3 (migratable)
    OIM_Cluster | JDBCStore-OIM_scaled_3 | Not applicable | WLSSchemaDataSource | WLS_OIM3 (migratable)

  13. Create the required JMS Servers for the new managed server.
    1. Go to WebLogic Console > Services > Messaging > JMS Servers.
    2. Click Lock & Edit.
    3. Click New.

    Use the following table to create the required JMS Servers. Assign to each JMS Server the previously created persistent stores:

    Note:

    The numbers in the names of the existing resources were assigned automatically by the Configuration Wizard during domain creation.

    Review the existing JMS server names and select a new, unique name for each new JMS server.

    To avoid naming conflicts and simplify the configuration, new resources are qualified with the product_scaled_N tag and are shown here as an example.

    Cluster to Scale Out | JMS Server Name | Persistent Store | Target
    WSM-PM_Cluster | Not applicable | Not applicable | Not applicable
    SOA_Cluster | UMSJMSServer_soa_scaled_3 | UMSJMSJDBCStore_soa_scaled_3 | WLS_SOA3 (migratable)
    SOA_Cluster | SOAJMSServer_soa_scaled_3 | SOAJMSJDBCStore_soa_scaled_3 | WLS_SOA3 (migratable)
    SOA_Cluster | BPMJMSServer_soa_scaled_3 | BPMJMSJDBCStore_soa_scaled_3 | WLS_SOA3 (migratable)
    SOA_Cluster | ProcMonJMSServer_soa_scaled_3 (only when you use Insight) | ProcMonJMSJDBCStore_soa_scaled_3 | WLS_SOA3 (migratable)
    OIM_Cluster | OIMJMSServer_scaled_3 | JDBCStore-OIM_scaled_3 | WLS_OIM3 (migratable)

    Note:

    (*) BamCQServiceJmsServers host local queues for the BAM CQService (Continuous Query Engine) and are meant to be local. They are intentionally targeted to the WebLogic servers directly and not to the migratable targets.
  14. Update the SubDeployment Targets for JMS Modules (if applicable) to include the recently created JMS servers.
    1. Expand the Services > Messaging > JMS Modules.
    2. Click the JMS module. For example: BPMJMSModule.

      Use the following table to identify the JMS modules to update, depending on the cluster that you are scaling out:

      Table 23-7 The JMS Modules to Update

      Cluster to Scale Out | JMS Module to Update | JMS Server to Add to the Subdeployment
      WSM-PM_Cluster | Not applicable | Not applicable
      SOA_Cluster | UMSJMSSystemResource (*) | UMSJMSServer_soa_scaled_3
      SOA_Cluster | SOAJMSModule | SOAJMSServer_soa_scaled_3
      SOA_Cluster | BPMJMSModule | BPMJMSServer_soa_scaled_3
      SOA_Cluster | ProcMonJMSModule (*) (only if you have configured Insight) | ProcMonJMSServer_soa_scaled_3
      OIM_Cluster | OIMJMSModule | OIMJMSServer_scaled_3

      (*) Some modules (UMSJMSSystemResource, ProcMonJMSModule) may be targeted to more than one cluster. Ensure that you update the appropriate subdeployment in each case.
    3. Go to Configuration > Subdeployment.
    4. Add the corresponding JMS Server to the existing subdeployment.

      Note:

      The subdeployment module name is a random name in the form of SOAJMSServerXXXXXX, UMSJMSServerXXXXXX, or BPMJMSServerXXXXXX, resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).
    5. Click Save, and then click Activate Changes.
  15. Start the Node Manager in the Managed Server domain directory on OIMHOST3. Follow these steps to update and start the Node Manager from the Managed Server home:
    1. Verify that the listen address in the nodemanager.properties file is set correctly, by completing the following steps:
      • Change directory to the MSERVER_HOME binary directory:

        cd MSERVER_HOME/nodemanager

      • Open the nodemanager.properties file for editing.

      • Verify that the ListenAddress property is set to the correct host name, as follows:

        OIMHOST3: ListenAddress=OIMHOST3

      • Update the ListenPort property with the correct Listen Port details.

      • Make sure that QuitEnabled is set to true. If this line is not present in the nodemanager.properties file, add the following line:

        QuitEnabled=true

    2. Change directory to the MSERVER_HOME binary directory:
      cd MSERVER_HOME/bin
    3. Use the following command to start the Node Manager:

      nohup ./startNodeManager.sh > $MSERVER_HOME/nodemanager/nodemanager.out 2>&1 &

      For information about additional Node Manager configuration options, see Administering Node Manager for Oracle WebLogic Server.
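    The nodemanager.properties checks in this step can also be scripted. A minimal sketch, assuming a Linux host and using a scratch directory in place of the real MSERVER_HOME:

```shell
# Sketch: verify and patch nodemanager.properties for the new host.
# MSERVER_HOME defaults to a scratch directory here; in practice it is the
# managed server domain home on OIMHOST3.
MSERVER_HOME=${MSERVER_HOME:-/tmp/demo_mserver_home}
PROPS="$MSERVER_HOME/nodemanager/nodemanager.properties"
mkdir -p "$(dirname "$PROPS")"

# Simulate a properties file carried over from OIMHOST1 (illustration only).
printf 'ListenAddress=OIMHOST1\nListenPort=5556\n' > "$PROPS"

# Point the Node Manager listen address at the new host.
sed -i 's/^ListenAddress=.*/ListenAddress=OIMHOST3/' "$PROPS"

# Ensure QuitEnabled=true is present, appending it if missing.
grep -q '^QuitEnabled=' "$PROPS" || printf 'QuitEnabled=true\n' >> "$PROPS"
cat "$PROPS"
```

    Run the same checks against the real file before starting the Node Manager, and confirm ListenPort matches your environment.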

  16. Restart all servers (except the newly created server) for the previous changes to be effective. You can restart in a rolling manner to eliminate downtime.
  17. The configuration is finished. Now sign in to the new host and run the pack command to create a template pack, as follows:
    cd ORACLE_COMMON_HOME/common/bin
    ./pack.sh -managed=true 
              -domain=ASERVER_HOME
              -template=/full_path/scaleout_domain.jar
              -template_name=scaleout_domain_template
              -log_priority=DEBUG -log=/tmp/pack.log  

    In this example:

    • Replace ASERVER_HOME with the actual path to the domain directory that you created on the shared storage device.

    • Replace full_path with the complete path to the location where you want to create the domain template jar file. You need to reference this location when you copy or unpack the domain template jar file. Oracle recommends that you choose a shared volume other than ORACLE_HOME, or write to /tmp/ and copy the files manually between servers.

      You must specify a full path for the template jar file as part of the -template argument to the pack command:
      SHARED_CONFIG_DIR/domains/template_filename.jar
    • scaleout_domain.jar is a sample name for the jar file that you are creating, which contains the domain configuration files.

    • scaleout_domain_template is the label that is assigned to the template data stored in the template file.

  18. Run the unpack command on the new host to unpack the template in the managed server domain directory, as follows:
    cd ORACLE_COMMON_HOME/common/bin
    ./unpack.sh -domain=MSERVER_HOME
                -overwrite_domain=true
                -template=/full_path/scaleout_domain.jar
                -log_priority=DEBUG
                -log=/tmp/unpack.log
                -app_dir=APPLICATION_HOME

    In this example:

    • Replace MSERVER_HOME with the complete path to the domain home to be created on the local storage disk. This is the location where the copy of the domain is unpacked.

    • Replace /full_path/scaleout_domain.jar with the complete path and file name of the domain template jar file that you created when you ran the pack command to pack up the domain on the shared storage device.

    • Replace APPLICATION_HOME with the complete path to the Application directory for the domain on shared storage. See File System and Directory Variables Used in This Guide.

  19. When scaling out the OAM_Cluster:

    Register the new Managed Server with Oracle Access Management by doing the following:

    1. Log in to the Access Management console at http://IADADMIN.example.com/oamconsole as the user you specified during response file creation.

    2. Go to the Configuration tab.

    3. Click Server Instances.

    4. Select Create from the Actions menu.

    5. Enter the following information:

      • Server Name: WLS_OAM3

      • Host: Host that the server runs on

      • Port: Listen port that was assigned when the Managed Server was created

      • OAM Proxy Port: Port you want the Access Manager proxy to run on. This is unique for the host

      • Proxy Server ID: AccessServerConfigProxy

    6. Click Apply.

    7. Restart the WebLogic Administration Server.

    Add the newly created Access Manager server to all of the WebGate profiles that might be using it, such as Webgate_IDM, Webgate_IDM_11g, and IAMSuiteAgent.

    For example, to add the Access Manager server to Webgate_IDM, access the Access Management console at http://IADADMIN.example.com/oamconsole, and do the following:

    1. Log in as the Access Manager Administrative User.

    2. Go to the Application Security tab.

    3. Click Agents.

    4. Click Search. You should see the WebGate agent Webgate_IDM.

    5. Click the agent Webgate_IDM.

    6. Select Edit from the Actions menu.

    7. Click + in the Primary Server list (or the Secondary Server list if this is a secondary server).

    8. Select the newly created managed server from the Server list.

    9. Set Maximum Number of Connections to 10.

    10. Click Apply.

    11. Repeat the steps for all of the WebGates that are in use, and restart the new Managed Server.

  20. Start Node Manager on the new host.
    cd $NM_HOME
    nohup ./startNodeManager.sh > ./nodemanager.out 2>&1 &
  21. Start the new managed server.
  22. Update the web tier configuration to include the new server:
    1. If you are using OTD, log in to Enterprise Manager and update the corresponding origin pool, as explained in Creating the Required Origin Server Pools to add the new server to the pool.
    2. If you are using OHS, there is no need to add the new server to OHS. By default, the Dynamic Server List is used, which means that the list of servers in the cluster is automatically updated when a new node becomes part of the cluster. So, adding it to the list is not mandatory. The WebLogicCluster directive needs only a sufficient number of redundant server:port combinations to guarantee the initial contact in case of a partial outage.

      If there are expected scenarios where the Oracle HTTP Server is restarted and only the new server is up, update the WebLogicCluster directive to include the new server.

      For example:

      <Location /osb>
       WLSRequest ON
       WebLogicCluster SOAHOST1:8011,SOAHOST2:8011,SOAHOST3:8011
       WLProxySSL ON
       WLProxySSLPassThrough ON
      </Location>
Verifying the Scale Out of Static Clusters

After scaling out and starting the server, proceed with the following verifications:
  1. Verify the correct routing to web applications.

    For example:

    1. Access the application on the load balancer:
      https://igdinternal.example.com:7777/soa-infra
    2. Check that there is also activity in the new server:
      Go to Cluster > Deployments > soa-infra > Monitoring > Workload.
    3. You can also verify that the web sessions are created in the new server:
      • Go to Cluster > Deployments.

      • Expand soa-infra, click soa-infra Web application.

      • Go to Monitoring to check the web sessions in each server.

      You can use the sample URLs and the corresponding web applications that are identified in the following table, to check if the sessions are created in the new server for the cluster that you are scaling out:

      Cluster to Verify | Sample URL to Test | Web Application Module
      WSM-PM_Cluster | http://igdinternal.example.com/wsm-pm | wsm-pm > wsm-pm
      SOA_Cluster | http://igdinternal.example.com:7777/soa-infra | soa-infra > soa-infra
      OAM_Cluster | http://login.example.com/oam | Not applicable
      AMA_Cluster | http://iadadmin.example.com/access | Not applicable
      OIM_Cluster | https://prov.example.com/identity | Not applicable

      Note:

      When you validate OAM, you see an OAM server error presented by OAM. This is normal; the test only shows that OAM is being accessed. The error disappears when the appropriate arguments are passed to the server as part of normal operations.
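      A quick command-line spot check of the sample URLs above can be sketched as follows; the host names are the sample values from this guide, so expect HTTP 000 (no connection) outside that environment:

```shell
# Sketch: probe the sample front-end URLs and report the HTTP status.
# These hosts are the example names from this guide, not real endpoints.
for url in \
  'https://igdinternal.example.com:7777/soa-infra' \
  'https://prov.example.com/identity'
do
  # -k tolerates demo certificates; -w prints the status (000 on failure).
  code=$(curl -sk -o /dev/null -w '%{http_code}' --connect-timeout 5 "$url" || true)
  echo "$url -> HTTP ${code:-n/a}"
done
```

      A 200 (or a redirect to a login page) from each URL indicates that the front end is routing; per-server session counts still need to be checked in the console as described above.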
  2. Verify that JMS messages are being produced and consumed to and from the destinations in the three servers.
    1. Go to JMS Servers.
    2. Click JMS Server > Monitoring.
  3. Verify the service migration, as described in Validating Automatic Service Migration in Static Clusters.
Scaling in the Topology for Static Clusters

To scale in the topology for a static cluster:
  1. Stop the managed server that you want to delete.
  2. If you are using automatic service migration, verify that the resources corresponding to that server are not present in the remaining servers before you shrink and scale in the cluster. If you use exactly-once migration policies, stop all the servers.
  3. If you are removing OAM servers, use the OAM Console to remove them from the WebGate configuration.
    1. Log in to the Access Management console at http://iadadmin.example.com/oamconsole as the user you specified during response file creation.
    2. Go to the System Configuration tab.
    3. Click Server Instances.
    4. Select the server instances that you want to remove, and then select Delete from the Actions menu.
    5. Click Apply.
  4. Use the Oracle WebLogic Server Administration Console to delete the migratable target that is used by the server that you want to delete.
    1. Click Lock & Edit.
    2. Go to Domain > Environment > Cluster > Migratable Target.
    3. Select the migratable target that you want to delete.
    4. Click Delete.
    5. Click Yes.
    6. Click Activate Changes.
  5. Use the Oracle WebLogic Server Administration Console to delete the new server:
    1. Click Lock & Edit.
    2. Go to Domain > Environment > Servers.
    3. Select the server that you want to delete.
    4. Click Delete.
    5. Click Yes.
    6. Click Activate Changes.

    Note:

    If the migratable target was not deleted in the previous step, you get the following error message:

    The following failures occurred: --MigratableTargetMBean WLS_SOA3_soa-failure-recovery (migratable) does not have a preferred server set.
    Errors must be corrected before proceeding.
  6. Use the Oracle WebLogic Server Administration Console to update the subdeployment of each JMS Module that is used by the cluster that you are shrinking.

    Use the following table to identify the module for each cluster and perform this action for each module:

    Cluster to Scale In | JMS Module to Update | JMS Server to Delete from the Subdeployment
    WSM-PM_Cluster | Not applicable | Not applicable
    SOA_Cluster | UMSJMSSystemResource | UMSJMSServer_soa_scaled_3
    SOA_Cluster | SOAJMSModule | SOAJMSServer_soa_scaled_3
    SOA_Cluster | BPMJMSModule | BPMJMSServer_soa_scaled_3
    OIM_Cluster | OIMJMSModule | OIMJMSServer_scaled_3

    1. Click Lock & Edit.
    2. Go to Domain > Services > Messaging > JMS Modules.
    3. Click the JMS module.
    4. Click subdeployment.
    5. Unselect the JMS server that was created for the deleted server.
    6. Click Save.
    7. Click Activate Changes.
  7. Use the Oracle WebLogic Server Administration Console to delete the JMS servers:
    1. Click Lock & Edit.
    2. Go to Domain > Services > Messaging > JMS Servers.
    3. Select the JMS Servers that you created for the new server.
    4. Click Delete.
    5. Click Yes.
    6. Click Activate Changes.
  8. Use the Oracle WebLogic Server Administration Console to delete the JMS persistent stores:
    1. Click Lock & Edit.
    2. Go to Domain > Services > Persistent Stores.
    3. Select the Persistent Stores that you created for the new server.
    4. Click Delete.
    5. Click Yes.
    6. Click Activate Changes.
  9. Update the web tier configuration to remove references to the deleted server.

Scaling Out the Topology for Dynamic Clusters

This section lists the prerequisites, explains the procedure to scale out the topology with dynamic clusters, describes the steps to verify the scale-out process, and finally describes the steps to scale in (shrink) the topology.

Prerequisites for Scaling Out

Before you perform a scale out of the topology, you must ensure that you meet the following requirements:

  • The starting point is a cluster with managed servers already running.

  • The new node can access the existing home directories for WebLogic Server and Governance. Use the existing installations in shared storage. You do not need to install WebLogic Server or IDM binaries in a new location. However, you do need to run pack and unpack commands to bootstrap the domain configuration in the new node.

  • It is assumed that the cluster syntax is used for all internal RMI invocations, JMS adapter, and so on.

Scaling Out a Dynamic Cluster
The steps provided in this procedure use the IAM EDG topology as a reference. Initially, there are two application tier hosts (OIMHOST1 and OIMHOST2), each running one managed server of each cluster. A new host OIMHOST3 is added to scale out the clusters with a third managed server. WLS_XYZn is the generic name given to the new managed server that you add to the cluster. Depending on the cluster that is being extended and the number of existing nodes, the actual names are WLS_SOA3 and WLS_OIM3.

To scale out the topology in a dynamic cluster, complete the following steps:

  1. On the new node, mount the existing shared volumes for FMW Home (Binaries1), shared config (sharedConfig), and runtime (runTime), as described in Summary of the Shared Storage Volumes in an Enterprise Deployment.

    Note:

    Be sure to mount the file systems associated with the Identity Governance binaries.
  2. Per Oracle's recommendation, the inventory is located in the shared directory (for example, /u01/oracle/products/oraInventory), so you do not need to attach any home. However, you may want to execute the script: /u01/oracle/products/oraInventory/createCentralInventory.sh.

    This command creates and updates the local file /etc/oraInst.loc in the new node to point it to the oraInventory location.

    If there are other inventory locations in the new host, you can still use them, but the /etc/oraInst.loc file must be updated accordingly in each case.

  3. Update the /etc/hosts files to add the name of the new host (unless you are using DNS), as described in Verifying IP Addresses and Host Names in DNS or Hosts File. If you are using host aliases such as OIMHOST, then ensure that you add an entry for that host.

    For example:

    10.229.188.204 host1-vip.example.com host1-vip ADMINVHN
    10.229.188.205 host1.example.com host1 OIMHOST1
    10.229.188.206 host2.example.com host2 OIMHOST2
    10.229.188.207 host3.example.com host3 WEBHOST1
    10.229.188.208 host4.example.com host4 WEBHOST2
    10.229.188.209 host5.example.com host5 OIMHOST3 
  4. Configure a per domain Node Manager in the new node, as described in https://docs.oracle.com/pls/topic/lookup?ctx=en/middleware/fusion-middleware/12.2.1.3/imedg&id=SOEDG-GUID-38510079-BCDE-4888-877E-6BF7580B7181.
  5. Log in to the Oracle WebLogic Administration Console to create a new machine for the new node.
  6. Update the machine's Node Manager address to map the IP of the node that is being used for scale out.
  7. Use the Oracle WebLogic Server Administration Console to increase the dynamic cluster to include a new managed server:
    1. Click Lock & Edit.
    2. Go to Domain > Environment > Clusters.
    3. Select the cluster that you want to scale out.
    4. Go to Configuration > Servers.
    5. Set Dynamic Cluster Size to 3. By default, the cluster size is 2.

      Note:

      If you scale out to more than three servers, you must also update the Number of servers in cluster address value, which is 3 by default. Although Oracle recommends that you use the cluster syntax for t3 calls, the cluster address is used when calls are made from external elements over t3, such as EJB stubs.

  8. Sign in to OIMHOST1 and run the pack command to create a template pack as follows:
    cd ORACLE_COMMON_HOME/common/bin
    ./pack.sh -managed=true
              -domain=ASERVER_HOME
              -template=/full_path/scaleout_domain.jar
              -template_name=scaleout_domain_template
              -log_priority=DEBUG -log=/tmp/pack.log  

    In this example:

    • Replace ASERVER_HOME with the actual path to the domain directory that you created on the shared storage device.

    • Replace full_path with the complete path to the location where you want to create the domain template jar file. You need to reference this location when you copy or unpack the domain template jar file. Oracle recommends that you choose a shared volume other than ORACLE_HOME, or write to /tmp/ and copy the files manually between servers.

      You must specify a full path for the template jar file as part of the -template argument to the pack command:
      SHARED_CONFIG_DIR/domains/template_filename.jar
    • scaleout_domain.jar is a sample name for the jar file that you are creating, which contains the domain configuration files.

    • scaleout_domain_template is the label that is assigned to the template data stored in the template file.

  9. Run the unpack command on the new host to unpack the template in the managed server domain directory, as follows:
    cd ORACLE_COMMON_HOME/common/bin
    ./unpack.sh -domain=MSERVER_HOME
                -overwrite_domain=true
                -template=/full_path/scaleout_domain.jar
                -log_priority=DEBUG
                -log=/tmp/unpack.log
                -app_dir=APPLICATION_HOME

    In this example:

    • Replace MSERVER_HOME with the complete path to the domain home to be created on the local storage disk. This is the location where the copy of the domain is unpacked.

    • Replace /full_path/scaleout_domain.jar with the complete path and file name of the domain template jar file that you created when you ran the pack command to pack the domain on the shared storage device.

    • Replace APPLICATION_HOME with the complete path to the Application directory for the domain on shared storage. See File System and Directory Variables Used in This Guide.
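    The pack and unpack invocations can be scripted so that both hosts use the same template settings. The following Python sketch only assembles the argument lists (all paths are placeholders for your environment); it does not replace running the commands as shown above:

    ```python
    # Sketch: assemble the pack/unpack argument lists for scaling out.
    # ORACLE_COMMON_HOME, ASERVER_HOME, MSERVER_HOME, and APPLICATION_HOME
    # are placeholders; substitute the values used in your environment.

    def pack_args(oracle_common_home, aserver_home, template_jar,
                  template_name="scaleout_domain_template"):
        """Build the argv for pack.sh on the host that owns ASERVER_HOME."""
        return [
            f"{oracle_common_home}/common/bin/pack.sh",
            "-managed=true",
            f"-domain={aserver_home}",
            f"-template={template_jar}",
            f"-template_name={template_name}",
            "-log_priority=DEBUG",
            "-log=/tmp/pack.log",
        ]

    def unpack_args(oracle_common_home, mserver_home, template_jar, app_dir):
        """Build the argv for unpack.sh on the new host."""
        return [
            f"{oracle_common_home}/common/bin/unpack.sh",
            f"-domain={mserver_home}",
            "-overwrite_domain=true",
            f"-template={template_jar}",
            "-log_priority=DEBUG",
            "-log=/tmp/unpack.log",
            f"-app_dir={app_dir}",
        ]
    ```

    Keeping both argument lists in one place makes it harder for the template path used by unpack to drift from the one pack produced.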

  10. Start Node Manager on the new host.
    cd $NM_HOME
    nohup ./startNodeManager.sh > ./nodemanager.out 2>&1 &
  11. Start the new managed server.
  12. Update the web tier configuration to include this new server:
    1. If using OTD, log in to Enterprise Manager and update the corresponding origin pool, as explained in Creating the Required Origin Server Pools to add the new server to the pool.
    2. If using OHS, there is no need to add the new server to OHS.

      By default, the Dynamic Server list is used, which means that the list of servers in the cluster is automatically updated when a new node becomes part of the cluster. So adding the new node to the list is not mandatory. The WebLogicCluster directive needs only a sufficient number of redundant server:port combinations to guarantee the initial contact in the case of a partial outage.

      If there are scenarios in which the Oracle HTTP Server is restarted and only the new server is up, update the WebLogicCluster directive to include the new server.

      For example:

      <Location /soa-infra>
       WLSRequest ON
       WebLogicCluster OIMHOST1:8011,OIMHOST2:8012,OIMHOST3:8013
       WLProxySSL ON
       WLProxySSLPassThrough ON
      </Location>
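      Because the WebLogicCluster directive is just host:port pairs joined by commas, extending it for a new server can be scripted. A small sketch using the example hosts and ports above:

      ```python
      def weblogic_cluster_directive(servers):
          """Render the WebLogicCluster directive from (host, port) pairs."""
          return "WebLogicCluster " + ",".join(f"{h}:{p}" for h, p in servers)

      # Example hosts and ports from the directive above:
      servers = [("OIMHOST1", 8011), ("OIMHOST2", 8012), ("OIMHOST3", 8013)]
      print(weblogic_cluster_directive(servers))
      # WebLogicCluster OIMHOST1:8011,OIMHOST2:8012,OIMHOST3:8013
      ```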
Verifying the Scale Out of Dynamic Clusters
After scaling out and starting the server, proceed with the following verifications:
  1. Verify the correct routing to web applications.

    For example:

    1. Access the application on the load balancer:
      https://igdinternal.example.com:7777/soa-infra
    2. Check that there is activity in the new server also:
      Go to Cluster > Deployments > soa-infra > Monitoring > Workload.
    3. You can also verify that the web sessions are created in the new server:
      • Go to Cluster > Deployments.

      • Expand soa-infra, click soa-infra Web application.

      • Go to Monitoring to check the web sessions in each server.

      You can use the sample URLs and the corresponding web applications that are identified in the following table, to check if the sessions are created in the new server for the cluster that you are scaling out:

      Cluster to Verify   Sample URL to Test                              Web Application Module
      WSM-PM_Cluster      http://igdinternal.example.com/wsm-pm           wsm-pm > wsm-pm
      SOA_Cluster         http://igdinternal.example.com:7777/soa-infra   soa-infra > soa-infra
      OAM_Cluster         http://login.example.com/oam
      AMA_Cluster         http://iadadmin.example.com/access
      OIM_Cluster         https://prov.example.com/identity

      Note:

      When you validate OAM, an OAM server error is presented by OAM. This is normal; the test only shows that OAM is being accessed. The error disappears when the appropriate arguments are passed to the server as part of normal operations.
  2. Verify that JMS messages are being produced to and consumed from the destinations in all three servers.
    1. Go to JMS Servers.
    2. Click JMS Server > Monitoring.
  3. Verify the service migration, as described in Validating Automatic Service Migration in Dynamic Clusters.
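The intent of the checks above is that every member of the scaled cluster shows activity. A minimal sketch of that invariant, with made-up session counts standing in for the monitoring data:

```python
def idle_members(expected_servers, sessions_per_server):
    """Return cluster members that report no web sessions; after a
    successful scale out this list should be empty."""
    return [s for s in expected_servers
            if sessions_per_server.get(s, 0) == 0]

# Hypothetical monitoring snapshot after scaling out SOA_Cluster:
sessions = {"WLS_SOA1": 12, "WLS_SOA2": 9, "WLS_SOA3": 3}
assert idle_members(["WLS_SOA1", "WLS_SOA2", "WLS_SOA3"], sessions) == []
```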
Scaling in the Topology for Dynamic Clusters
To scale in the topology for a dynamic cluster:
  1. Stop the managed server that you want to delete.
  2. If you are using automatic service migration, verify that the singleton resources corresponding to that server are not present in the remaining servers before you scale in the cluster.
  3. Use the Oracle WebLogic Server Administration Console to reduce the dynamic cluster:
    1. Click Lock & Edit.
    2. Go to Domain > Environment > Clusters.
    3. Select the cluster that you want to scale in.
    4. Go to Configuration > Servers.
    5. Set the Dynamic Cluster size to 2.
  4. If you are using OSB, restart the Admin Server.

Scaling Up the Topology

When you scale up the topology, you add new managed servers to the existing hosts.

This section describes the procedures to scale up the topology with static and dynamic clusters.

Scaling Up the Topology for Static Clusters

This section lists the prerequisites, explains the procedure to scale up the topology with static clusters, describes the steps to verify the scale-up process, and finally the steps to scale down (shrink).

You already have a node that runs a managed server that is configured with Fusion Middleware components. The node contains a WebLogic Server home and an Oracle Fusion Middleware IAM home in shared storage. Use these existing installations and domain directories to create the new managed servers. You do not need to install WLS or SOA binaries or to run pack and unpack commands, because the new server runs on the existing node.

Prerequisites for Scaling Up

Before you perform a scale up of the topology, you must ensure that you meet the following requirements:

  • The starting point is a cluster with managed servers already running.

  • It is assumed that the cluster syntax is used for all internal RMI invocations, JMS adapter, and so on.

Scaling Up a Static Cluster

The IAM EDG topology has two different domains, one for OAM and one for OIG. Scaling up is largely the same regardless of which cluster you are scaling. The example below refers to HOST1, HOST2, and HOST3. If you are scaling OAM, these hosts equate to OAMHOST1, OAMHOST2, and OAMHOST3. If you are scaling OIM, these hosts equate to OIMHOST1, OIMHOST2, and OIMHOST3.

The example below explains how to add a third managed server to the cluster that runs in HOST1. WLS_XYZn is the generic name given to the new managed server that you add to the cluster. Depending on the cluster that is being extended and the number of existing nodes, the actual names are WLS_OAM3, WLS_AMA3, WLS_OIM4, WLS_SOA3, WLS_WSM3, and so on.

To scale up the cluster, complete the following steps:

  1. Use the Oracle WebLogic Server Administration Console to clone the first managed server in the cluster into a new managed server.
    1. In the Change Center section, click Lock & Edit.
    2. Go to Environment > Servers.
    3. Select the first managed server in the cluster to scale up and click Clone.
    4. Use Table 23-8 to set the corresponding name, listen address, and listen port depending on the cluster that you want to scale up. Note that the default listen port is incremented by 1 to avoid binding conflicts with the managed server that is already created and running on the same host.
    5. Click the new managed server, and then select Configuration > General.
    6. Click Save, and then click Activate Changes.

    Table 23-8 List of Clusters that You Want to Scale Up

    Cluster to Scale Up   Server to Clone   New Server Name   Server Listen Address   Server Listen Port
    WSM-PM_Cluster        WLS_WSM1          WLS_WSMn          OIMHOSTn                7011
    SOA_Cluster           WLS_SOA1          WLS_SOAn          OIMHOSTn                8001
    OIM_Cluster           WLS_OIM1          WLS_OIMn          OIMHOSTn                14000
    OAM_Cluster           WLS_OAM1          WLS_OAMn          OAMHOSTn                14100
    AMA_Cluster           WLS_AMA1          WLS_AMAn          OAMHOSTn                14150

    Note:

     Port numbers must be unique on a given host; therefore, the port numbers above have been incremented by 1 from the ports used by the existing managed servers on the host.
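     The port-selection rule in the note above (increment until the port is free on that host) can be sketched as a small helper; the ports below are illustrative, not prescriptive:

     ```python
     def next_listen_port(default_port, ports_in_use_on_host):
         """Pick the first port at or above the cluster default that is not
         already bound by another managed server on the same host."""
         port = default_port
         while port in ports_in_use_on_host:
             port += 1
         return port

     # If WLS_SOA1 already listens on 8001 on this host, the clone gets 8002:
     assert next_listen_port(8001, {8001}) == 8002
     # On a host with no server of that cluster, the default port is kept:
     assert next_listen_port(8001, set()) == 8001
     ```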
  2. Update the deployment Staging Directory Name of the new server, as described in Modifying the Upload and Stage Directories to an Absolute Path.
  3. Create a new key certificate and update the private key alias of the server, as described in Enabling SSL Communication Between the Middle Tier and the Hardware Load Balancer.
  4. By default, the cloned server uses the default file store for TLOGs. If the rest of the servers in the cluster that you are scaling up use TLOGs in a JDBC persistent store, update the TLOG persistent store of the new managed server:

    Use the following table to identify the clusters that use JDBC TLOGs by default:

    Table 23-9 The Name of Clusters that Use JDBC TLOGs by Default

    Cluster to Scale Up   New Server Name   TLOG Persistent Store
    WSM-PM_Cluster        WLS_WSMn          Default (file)
    SOA_Cluster           WLS_SOAn          JDBC
    OIM_Cluster           WLS_OIMn          Default JDBC
    OAM_Cluster           WLS_OAMn          Not Applicable
    AMA_Cluster           WLS_AMAn          Not Applicable

    Complete the following steps:
    1. Go to Environment > Servers > WLS_XYZn > Configuration > Services.
    2. In the Transaction Log Store section, change Type to JDBC.
    3. Change Data Source to WLSSchemaDatasource.
    4. Click Save, and then click Activate Changes.
  5. If the cluster you are scaling up is configured for automatic service migration, update the JTA Migration Policy to the required value.

    Use the following table to identify the clusters for which you have to update the JTA Migration Policy:

    Table 23-10 The Recommended JTA Migration Policy for the Cluster to be Scaled Up

    Cluster to Scale Up   New Server Name   JTA Migration Policy
    WSM-PM_Cluster        WLS_WSMn          Manual
    SOA_Cluster           WLS_SOAn          Failure Recovery
    OAM_Cluster           WLS_OAMn          Not Applicable
    AMA_Cluster           WLS_AMAn          Not Applicable
    OIM_Cluster           WLS_OIMn          Failure Recovery

    Complete the following steps:

    1. Go to Environment > Servers > WLS_XYZn > Configuration > Migration.
    2. Use Table 23-10 to set the recommended JTA Migration Policy depending on the cluster that you want to scale up.
    3. Click Save, and then click Activate Changes.
    4. For the rest of the servers already existing in the cluster, update the list of JTA candidate servers for JTA migration to include the new server.
      • Go to Environment > Servers > server > Configuration > Migration.

      • In JTA Candidate Servers, leave the list empty. An empty list means that all servers in the cluster are JTA candidate servers.

      • Click Save, and then click Activate Changes. Although you need to restart the servers for this change to be effective, you can do a unique restart later, after you complete all the required configuration changes.

  6. If the cluster you are scaling up is configured for automatic service migration, use the Oracle WebLogic Server Administration Console to update the automatically created WLS_XYZn (migratable) with the recommended migration policy, because by default it is set to Manual Service Migration Only.

    Use the following table for the list of migratable targets to update:

    Table 23-11 The Recommended Migratable Targets to Update

    Cluster to Scale Up   Migratable Target to Update   Migration Policy
    WSM-PM_Cluster        Not applicable                Not applicable
    SOA_Cluster           WLS_SOAn (migratable)         Auto-Migrate Failure Recovery Services
    OIM_Cluster           WLS_OIMn (migratable)         Auto-Migrate Failure Recovery Services
    OAM_Cluster           Not applicable                Not applicable
    AMA_Cluster           Not applicable                Not applicable

    1. Go to Environment > Cluster > Migratable Servers.
    2. Click Lock and Edit.
    3. Click WLS_XYZ3 (migratable).
    4. Go to the Configuration tab and then Migration.
    5. Change the Service Migration Policy to the value listed in the table.
    6. If any servers are selected in the Constrained Candidate Server list, remove them and leave the list blank. When no servers are selected, this migratable target can migrate to any server in the cluster.
    7. Click Save, and then click Activate Changes.
  7. Update the Constrained Candidate Server list in the existing migratable servers in the cluster that you are scaling up, because by default they are pre-populated with only the WLS_XYZ1 and WLS_XYZ2 servers.

    Use the following table to identify the migratable servers that have to be updated:

    Table 23-12 The Existing Migratable Targets to Update

    Cluster to Scale Up   Existing Migratable Target to Update   Constrained Candidate Server
    WSM-PM_Cluster        Not applicable                         Leave empty
    SOA_Cluster           WLS_SOA1 (migratable)                  Leave empty
                          WLS_SOA2 (migratable)
    OIM_Cluster           WLS_OIM1 (migratable)                  Leave empty
                          WLS_OIM2 (migratable)

    1. Go to each migratable server.
    2. Go to the tab Configuration > Migration > Constrained Candidate Server.

      You can leave the server list blank to make these migratable targets migrate to any server in this cluster, including the newly created managed server.

    3. Click Save and Activate Changes. Although you need to restart the servers for this change to be effective, you can do a unique restart later, after you complete all the required configuration changes.
  8. Create the required persistent stores for the JMS servers.
    1. Sign in to WebLogic Console and go to Services > Persistent Stores.
    2. Click New and select Create JDBCStore.

    Use the following table to create the required persistent stores:

    Note:

    The numbers in the names and prefixes of the existing resources were assigned automatically by the Configuration Wizard during domain creation.

    For example:

    BPMJMSJDBCStore_auto_1 - soa_1
    BPMJMSJDBCStore_auto_2 - soa_2
    JDBCStore-OIM_auto_1 - oim1
    JDBCStore-OIM_auto_2 - oim2
    SOAJMSJDBCStore_auto_1 - soa_1
    SOAJMSJDBCStore_auto_2 - soa_2
    UMSJMSJDBCStore_auto_1 - soa_1
    UMSJMSJDBCStore_auto_2 - soa_2

    Review the existing prefixes and select a new and unique prefix and name for each new persistent store.

    To avoid naming conflicts and simplify the configuration, the new resources are qualified with the scaled tag, as shown in the following example.

    Table 23-13 The New Resources Qualified with the Scaled Tag

    Cluster to Scale Up   Persistent Store                   Prefix Name           Data Source           Target
    WSM-PM_Cluster        Not applicable                     Not applicable        Not applicable        Not applicable
    SOA_Cluster           UMSJMSJDBCStore_soa_scaled_3       soaums_scaled_3       WLSSchemaDataSource   WLS_SOA3 (migratable)
                          SOAJMSJDBCStore_soa_scaled_3       soajms_scaled_3       WLSSchemaDataSource   WLS_SOA3 (migratable)
                          BPMJMSJDBCStore_soa_scaled_3       soabpm_scaled_3       WLSSchemaDataSource   WLS_SOA3 (migratable)
                          ProcMonJMSJDBCStore_soa_scaled_3   soaprocmon_scaled_3   WLSSchemaDataSource   WLS_SOA3 (migratable)
                          (only when you use Insight)
    OIM_Cluster           JDBCStore-OIM_scaled_3             oimjms_scaled_3       WLSSchemaDataSource   WLS_OIM3 (migratable)
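    The scaled naming convention above can be generated mechanically: take the resource base name, append the product and a scaled_N suffix, and check for collisions with the names the Configuration Wizard created. The helper below is an illustration, not a product tool:

    ```python
    def scaled_name(base, product, n, existing):
        """Derive a unique resource name such as SOAJMSJDBCStore_soa_scaled_3
        and make sure it does not clash with existing auto-named resources."""
        name = f"{base}_{product}_scaled_{n}"
        if name in existing:
            raise ValueError(f"{name} already exists; pick another suffix")
        return name

    # Existing names taken from the Configuration Wizard example list above:
    existing = {"SOAJMSJDBCStore_auto_1", "SOAJMSJDBCStore_auto_2"}
    assert scaled_name("SOAJMSJDBCStore", "soa", 3, existing) == "SOAJMSJDBCStore_soa_scaled_3"
    ```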

  9. Create the required JMS Servers for the new managed server.
    1. Go to WebLogic Console > Services > Messaging > JMS Servers.
    2. Click Lock and Edit.
    3. Click New.

    Use the following table to create the required JMS Servers. Assign to each JMS Server the previously created persistent stores:

    Note:

    The numbers in the names of the existing resources were assigned automatically by the Configuration Wizard during domain creation. Review the existing JMS server names and select a new and unique name for each new JMS server. To avoid naming conflicts and simplify the configuration, the new resources are qualified with the product_scaled_N tag, as shown in the following example.

    Cluster to Scale Up   JMS Server Name                 Persistent Store                   Target
    WSM-PM_Cluster        Not applicable                  Not applicable                     Not applicable
    SOA_Cluster           UMSJMSServer_soa_scaled_3       UMSJMSJDBCStore_soa_scaled_3       WLS_SOA3 (migratable)
                          SOAJMSServer_soa_scaled_3       SOAJMSJDBCStore_soa_scaled_3       WLS_SOA3 (migratable)
                          BPMJMSServer_soa_scaled_3       BPMJMSJDBCStore_soa_scaled_3       WLS_SOA3 (migratable)
                          ProcMonJMSServer_soa_scaled_3   ProcMonJMSJDBCStore_soa_scaled_3   WLS_SOA3 (migratable)
                          (only when you use Insight)
    OIM_Cluster           OIMJMSServer_scaled_3           JDBCStore-OIM_scaled_3             WLS_OIM3 (migratable)

  10. Update the SubDeployment Targets for JMS Modules (if applicable) to include the recently created JMS servers.
    1. Expand Services > Messaging > JMS Modules.
    2. Click the JMS module. For example: BPMJMSModule.

      Use the following table to identify the JMS modules to update depending on the cluster that you are scaling up:

      Cluster to Scale Up   JMS Module to Update                    JMS Server to Add to the Subdeployment
      WSM-PM_Cluster        Not applicable                          Not applicable
      SOA_Cluster           UMSJMSSystemResource *                  UMSJMSServer_soa_scaled_3
                            SOAJMSModule                            SOAJMSServer_soa_scaled_3
                            BPMJMSModule                            BPMJMSServer_soa_scaled_3
                            ProcMonJMSModule *                      ProcMonJMSServer_soa_scaled_3
                            (only if you have configured Insight)
      OIM_Cluster           OIMJMSModule                            OIMJMSServer_scaled_3

      (*) Some modules (UMSJMSSystemResource, ProcMonJMSModule) may be targeted to more than one cluster. Ensure that you update the appropriate subdeployment in each case.
    3. Go to Configuration > Subdeployment.
    4. Add the corresponding JMS Server to the existing subdeployment.

      Note:

      The Subdeployment module name is a random name in the form SOAJMSServerXXXXXX, UMSJMSServerXXXXXX, or BPMJMSServerXXXXXX, resulting from the JMS configuration that the Configuration Wizard performed for the first two servers (WLS_SOA1 and WLS_SOA2).

    5. Click Save, and then click Activate Changes.
  11. Restart all servers (except the newly created server) for all the previous changes to be effective. You can restart in a rolling manner to eliminate downtime.
  12. Start the new managed server.
  13. Update the web tier configuration to include this new server:
    • If you are using OTD, log in to Enterprise Manager and update the corresponding origin pool, as explained in Creating the Required Origin Server Pools, to add the new server to the pool.

    • If you are using OHS, there is no need to add the new server to OHS. By default, the Dynamic Server List is used, which means that the list of servers in the cluster is automatically updated when a new node becomes part of the cluster, so adding it to the list is not mandatory. The WebLogicCluster directive needs only a sufficient number of redundant server:port combinations to guarantee initial contact in case of a partial outage.

      If there are expected scenarios where the Oracle HTTP Server is restarted and only the new server would be up, update the WebLogicCluster directive to include the new server.

      <Location /soa-infra>
        WLSRequest ON
        WebLogicCluster OIMHOST1:8011,OIMHOST2:8012,OIMHOST3:8013
        WLProxySSL ON
        WLProxySSLPassThrough ON
      </Location>
      
Verifying the Scale Up of Static Clusters
After scaling up and starting the server, proceed with the following verifications:
  1. Verify the correct routing to web applications.

    For example:

    1. Access the application on the load balancer:
      https://igdinternal.example.com:7777/soa-infra
    2. Check that there is activity in the new server also:
      Go to Cluster > Deployments > soa-infra > Monitoring > Workload.
    3. You can also verify that the web sessions are created in the new server:
      • Go to Cluster > Deployments.

      • Expand soa-infra, click soa-infra Web application.

      • Go to Monitoring to check the web sessions in each server.

      You can use the sample URLs and the corresponding web applications that are identified in the following table, to check if the sessions are created in the new server for the cluster that you are scaling out:

      Cluster to Verify   Sample URL to Test                              Web Application Module
      WSM-PM_Cluster      http://igdinternal.example.com/wsm-pm           wsm-pm > wsm-pm
      SOA_Cluster         http://igdinternal.example.com:7777/soa-infra   soa-infra > soa-infra
      OAM_Cluster         http://login.example.com/oam
      AMA_Cluster         http://iadadmin.example.com/access
      OIM_Cluster         https://prov.example.com/identity

      Note:

      When you validate OAM, an OAM server error is presented by OAM. This is normal; the test only shows that OAM is being accessed. The error disappears when the appropriate arguments are passed to the server as part of normal operations.
  2. Verify that JMS messages are being produced to and consumed from the destinations in all three servers.
    1. Go to JMS Servers.
    2. Click JMS Server > Monitoring.
  3. Verify the service migration, as described in Validating Automatic Service Migration in Static Clusters.
Scaling in the Topology for Static Clusters
To scale in the topology for a static cluster:
  1. Stop the managed server that you want to delete.
  2. If you are using automatic service migration, verify that the resources corresponding to that server are not present in the remaining servers before you scale in the cluster. If you use exactly-once migration policies, stop all the servers.
  3. Use the OAM Console to remove OAM servers from the WebGate configuration (if you are removing OAM servers).
    1. Log in to the Access Management console at http://iadadmin.example.com/oamconsole as the user you specified during response file creation.
    2. Go to the System Configuration tab.
    3. Click Server Instances.
    4. Select the server instances that you want to remove, and then select Delete from the Actions menu.
    5. Click Apply.
  4. Use the Oracle WebLogic Server Administration Console to delete the migratable target that is used by the server that you want to delete.
    1. Click Lock & Edit.
    2. Go to Domain > Environment > Cluster > Migratable Target.
    3. Select the migratable target that you want to delete.
    4. Click Delete.
    5. Click Yes.
    6. Click Activate Changes.
  5. Use the Oracle WebLogic Server Administration Console to delete the new server:
    1. Click Lock & Edit.
    2. Go to Domain > Environment > Servers.
    3. Select the server that you want to delete.
    4. Click Delete.
    5. Click Yes.
    6. Click Activate Changes.

    Note:

    If the migratable target was not deleted in the previous step, you get the following error message:

    The following failures occurred: --MigratableTargetMBean WLS_SOA3_soa-failure-recovery (migratable) does not have a preferred server set.
    Errors must be corrected before proceeding.
  6. Use the Oracle WebLogic Server Administration Console to update the subdeployment of each JMS Module that is used by the cluster that you are shrinking.

    Use the following table to identify the module for each cluster and perform this action for each module:

    Cluster to Scale In   JMS Module to Update   JMS Server to Delete from the Subdeployment
    WSM-PM_Cluster        Not applicable         Not applicable
    SOA_Cluster           UMSJMSSystemResource   UMSJMSServer_soa_scaled_3
                          SOAJMSModule           SOAJMSServer_soa_scaled_3
                          BPMJMSModule           BPMJMSServer_soa_scaled_3
    OIM_Cluster           OIMJMSModule           OIMJMSServer_scaled_3

    1. Click Lock & Edit.
    2. Go to Domain > Services > Messaging > JMS Modules.
    3. Click the JMS module.
    4. Click subdeployment.
    5. Unselect the JMS server that was created for the deleted server.
    6. Click Save.
    7. Click Activate Changes.
  7. Use the Oracle WebLogic Server Administration Console to delete the JMS servers:
    1. Click Lock & Edit.
    2. Go to Domain > Services > Messaging > JMS Servers.
    3. Select the JMS Servers that you created for the new server.
    4. Click Delete.
    5. Click Yes.
    6. Click Activate Changes.
  8. Use the Oracle WebLogic Server Administration Console to delete the JMS persistent stores:
    1. Click Lock & Edit.
    2. Go to Domain > Services > Persistent Stores.
    3. Select the Persistent Stores that you created for the new server.
    4. Click Delete.
    5. Click Yes.
    6. Click Activate Changes.
  9. Update the web tier configuration to remove references to the new server.

Scaling Up the Topology for Dynamic Clusters

This section lists the prerequisites, explains the procedure to scale up the topology with dynamic clusters, describes the steps to verify the scale-up process, and finally the steps to scale down (shrink).

You already have a node that runs a managed server that is configured with Fusion Middleware components. The node contains a WebLogic Server home and an Oracle Fusion Middleware IAM home in shared storage. Use these existing installations and domain directories to create the new managed servers. You do not need to install WLS or SOA binaries or to run pack and unpack commands, because the new server runs on the existing node.

Prerequisites for Scaling Up

Before performing a scale up of the topology, you must ensure that you meet the following prerequisites:

  • The starting point is a cluster with managed servers already running.

  • It is assumed that the cluster syntax is used for all internal RMI invocations, JMS adapter, and so on.

Scaling Up a Dynamic Cluster

Use the IAM EDG topology as a reference, with two application tier hosts (OIMHOST1 and OIMHOST2), each running one managed server of each cluster. The example explains how to add a third managed server to the cluster that runs in OIMHOST1. WLS_XYZn is the generic name given to the new managed server that you add to the cluster. Depending on the cluster that is being extended and the number of existing nodes, the actual names will be WLS_SOA3 and WLS_OIM3.

To scale up the cluster, complete the following steps:

  1. In a scale-up, there is no need to add a new machine to the domain because the new server is added to an existing machine.

    If the CalculatedMachineNames attribute is set to true, then the MachineNameMatchExpression attribute is used to select the set of machines used for the dynamic servers. Assignments are made by using a round-robin algorithm.

    The following table lists examples of machine assignments in a dynamic cluster.

    Table 23-14 Examples of machine assignments in a dynamic cluster

    Machines in Domain   MachineNameMatchExpression Configuration   Dynamic Server Machine Assignments
    OIMHOST1, OIMHOST2   OIMHOST*                                   dyn-server-1: OIMHOST1
                                                                    dyn-server-2: OIMHOST2
                                                                    dyn-server-3: OIMHOST1
                                                                    dyn-server-4: OIMHOST2
                                                                    ...

    See https://docs.oracle.com/middleware/1212/wls/CLUST/dynamic_clusters.htm#CLUST678.
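    The round-robin assignment shown in Table 23-14 can be approximated in a few lines of Python. Here fnmatch stands in for the MachineNameMatchExpression pattern matching, which may differ in detail from WebLogic's implementation, so treat this as an illustration only:

    ```python
    from fnmatch import fnmatch

    def assign_machines(machines, match_expression, server_count):
        """Round-robin dynamic servers over the machines whose names match
        the expression, mimicking CalculatedMachineNames=true behavior."""
        candidates = [m for m in machines if fnmatch(m, match_expression)]
        return {f"dyn-server-{i + 1}": candidates[i % len(candidates)]
                for i in range(server_count)}

    # Reproduces the assignments from Table 23-14:
    assignments = assign_machines(["OIMHOST1", "OIMHOST2"], "OIMHOST*", 4)
    assert assignments["dyn-server-3"] == "OIMHOST1"
    assert assignments["dyn-server-4"] == "OIMHOST2"
    ```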

  2. If you are using OIMHOST{$id} as the listen address in the template, update the /etc/hosts files to add the alias OIMHOSTn for the new node, as described in Verifying IP Addresses and Host Names in DNS or Hosts File.

    The new server WLS_XYZn listens on OIMHOSTn. This alias must resolve to the IP address of the host where the new managed server runs. See Table 23-14.

    Example:
    10.229.188.204 host1-vip.example.com host1-vip ADMINVHN
    10.229.188.205 host1.example.com host1 OIMHOST1
    10.229.188.206 host2.example.com host2 OIMHOST2
    10.229.188.207 host3.example.com host3 WEBHOST1
    10.229.188.208 host4.example.com host4 WEBHOST2
    10.229.188.209 host5.example.com host5 OIMHOST3 

    If you are using {$dynamic-hostname} in the listen address of the template, the new server WLS_XYZn listens on the address defined by the Java property dynamic-hostname. In this case, you do not need to add aliases to the /etc/hosts file when you scale up the dynamic cluster. See Configuring Listen Addresses in Dynamic Cluster Server Templates.

  3. Use the Oracle WebLogic Server Administration Console to increase the dynamic cluster to include a new managed server:
    1. Click Lock & Edit.
    2. Go to Domain > Environment > Clusters.
    3. Select the cluster that you want to scale up.
    4. Go to Configuration > Servers.
    5. Set Dynamic Cluster Size to 3. By default, the cluster size is 2.
    6. Click Save, and then click Activate Changes.

      Note:

      If you scale up to more than three servers, you must also update Number of servers in cluster Address, which is 3 by default. Although Oracle recommends that you use the cluster syntax for t3 calls, the cluster address is used when the cluster is called from external elements through t3, for EJB stubs, and so on.

  4. Update the web tier configuration to include this new server:
    • If you are using OTD, log in to Enterprise Manager and update the corresponding origin pool, as explained in Creating the Required Origin Server Pools, to add the new server to the pool.

    • If you are using OHS, there is no need to add the new server to OHS. By default, the Dynamic Server List is used, which means that the list of servers in the cluster is automatically updated when a new node becomes part of the cluster, so adding it to the list is not mandatory. The WebLogicCluster directive needs only a sufficient number of redundant server:port combinations to guarantee initial contact in case of a partial outage.

      If there are expected scenarios where the Oracle HTTP Server is restarted and only the new server would be up, update the WebLogicCluster directive to include the new server.

      For example:

      <Location /soa-infra>
        WLSRequest ON
        WebLogicCluster OIMHOST1:8011,OIMHOST2:8012,OIMHOST3:8013
        WLProxySSL ON
        WLProxySSLPassThrough ON
      </Location>
      
  5. Start the new managed server from the Oracle WebLogic Server Administration Console.
  6. Verify that the newly created managed server is running.
Verifying the Scale Up of Dynamic Clusters
After you scale up and start the server, proceed with the following verifications:
  1. Verify the correct routing to web applications.

    For example:

    1. Access the application on the load balancer:
      https://igdinternal.example.com:7777/soa-infra
    2. Check that there is activity in the new server also:
      Go to Cluster > Deployments > soa-infra > Monitoring > Workload.
    3. You can also verify that the web sessions are created in the new server:
      • Go to Cluster > Deployments.

      • Expand soa-infra, click soa-infra Web application.

      • Go to Monitoring to check the web sessions in each server.

      You can use the sample URLs and the corresponding web applications that are identified in the following table, to check if the sessions are created in the new server for the cluster that you are scaling out:

      Cluster to Verify | Sample URL to Test                            | Web Application Module
      ----------------- | --------------------------------------------- | ----------------------
      WSM-PM_Cluster    | http://igdinternal.example.com/wsm-pm         | wsm-pm > wsm-pm
      SOA_Cluster       | http://igdinternal.example.com:7777/soa-infra | soa-infra > soa-infra
      OAM_Cluster       | http://login.example.com/oam                  |
      AMA_Cluster       | http://iadadmin.example.com/access            |
      OIM_Cluster       | https://prov.example.com/identity             |

      Note:

      When validating OAM, you will see an OAM server error presented by OAM. This is normal; the test shows only that OAM is being accessed. The error disappears when the appropriate arguments are passed to the server as part of normal operations.
  2. Verify that JMS messages are being produced to and consumed from the destinations in all three servers.
    1. Go to JMS Servers.
    2. Click JMS Server > Monitoring.
  3. Verify the service migration, as described in Configuring Automatic Service Migration for Dynamic Clusters.
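The URL checks in step 1 can also be scripted. The sketch below is a minimal, hypothetical probe; the fetch function is injectable so that the logic can be exercised without a live environment, and the sample URL is taken from the table above:

```python
from urllib.request import urlopen

def check_routing(urls, fetch=None):
    """Map each URL to True if it answered with an HTTP status below 500.

    `fetch` takes a URL and returns a status code; by default it performs
    a real HTTP request with urllib. Any exception (connection refused,
    timeout, and so on) counts as a failed check.
    """
    if fetch is None:
        def fetch(url):
            return urlopen(url, timeout=10).getcode()
    results = {}
    for url in urls:
        try:
            results[url] = fetch(url) < 500
        except Exception:
            results[url] = False
    return results

# A fake fetcher stands in for the load balancer so the sketch runs
# anywhere; in a real check, omit `fetch` to issue live requests.
fake = lambda url: 200 if "soa-infra" in url else 503
print(check_routing(["https://igdinternal.example.com:7777/soa-infra"], fetch=fake))
```

A status below 500 (even a 404 or an OAM error page) still demonstrates that the front end routed the request to a live server, which matches the note above about the expected OAM server error.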
Scaling Down the Topology in a Dynamic Cluster
To scale down the topology in a dynamic cluster:
  1. Stop the managed server that you want to delete.
  2. If you are using Automatic Service Migration, verify that the singleton resources corresponding to that server are not present in the remaining servers before you scale down the cluster.
  3. Use the Oracle WebLogic Server Administration Console to reduce the dynamic cluster:
    1. Click Lock & Edit.
    2. Go to Domain > Environment > Clusters.
    3. Select the cluster that you want to scale down.
    4. Go to Configuration > Servers.
    5. Set the Dynamic Cluster Size back to 2.

OAM Specific Scaling Actions

This section describes how to register new managed servers with Access Manager and WebGate agents.

In addition to the steps above to scale up or scale out the OAM managed servers, you must also register any new managed servers with Access Manager and the WebGate agents. To do this, perform the following steps.

Register new OAM Managed Servers

Register the new Managed Server with Oracle Access Management Access Manager. You must now configure the new Managed Server as an Access Manager server, which you do from the Oracle Access Management Console. Proceed as follows:

  1. Log in to the Access Management console at http://IADADMIN.example.com/oamconsole as the user you specified during response file creation.
  2. Click the System Configuration tab.
  3. Click Server Instances.
  4. Select Create from the Actions menu.
  5. Enter the following information:
    • Server Name: WLS_OAM3

    • Host: Host that the server runs on

    • Port: Listen port that was assigned when the Managed Server was created

    • OAM Proxy Port: Port that you want the Access Manager proxy to run on. This port must be unique on the host

    • Proxy Server ID: AccessServerConfigProxy

    • Mode: Set to the same mode as the existing Access Manager servers.

  6. Click the Coherence tab.

    Set Local Port to a unique value on the host.

  7. Click Apply.
  8. Restart the WebLogic Administration Server.

Updating WebGate Profiles

Add the newly created Access Manager server to all WebGate profiles that might be using it, such as Webgate_IDM, Webgate_IDM_12c, and IAMSuiteAgent.

For example, to add the Access Manager server to Webgate_IDM, access the Access Management console at: http://IADADMIN.example.com/oamconsole

Then proceed as follows:

  1. Log in as the Access Manager Administrative User.
  2. Click the System Configuration tab.
  3. Expand Access Manager Settings - SSO Agents - OAM Agents.
  4. Click the open folder icon, then click Search.

    You should see the WebGate agent Webgate_IDM.

  5. Click the agent Webgate_IDM.
  6. Select Edit from the Actions menu.
  7. Click + in the Primary Server list (or the Secondary Server list if this is a secondary server).
  8. Select the newly created managed server from the Server list.
  9. Set Maximum Number of Connections to 10.
  10. Click Apply.

Repeat Steps 5 through 10 for Webgate_IDM_12c, IAMSuiteAgent, and all other WebGates that might be in use.