18 Scaling Procedures for an Enterprise Deployment

The scaling procedures for an enterprise deployment include scale out, scale in, scale up, and scale down. During a scale-out operation, you add managed servers to new nodes. You can remove these managed servers by performing a scale-in operation. During a scale-up operation, you add managed servers to existing hosts. You can remove these servers by performing a scale-down operation.

This chapter describes the procedures to scale out/in and scale up/down static and dynamic clusters.

Scaling Out the Topology

When you scale out the topology, you add new managed servers to new nodes.

This section describes the procedures to scale out the Identity Management topology.

Prerequisites for Scaling Out

Before you perform a scale out of the topology, you must ensure that you meet the following requirements:

  • The starting point is a cluster with managed servers already running.

  • The new node can access the existing home directories for WebLogic Server and Governance. Use the existing installations in shared storage. You do not need to install WebLogic Server or IDM binaries in a new location. However, you do need to run pack and unpack commands to bootstrap the domain configuration in the new node.

  • It is assumed that the cluster syntax is used for all internal RMI invocations, JMS adapter, and so on.
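
Before you start, you can confirm from the new node that the shared installations are visible. The following is a minimal sketch, assuming the ORACLE_HOME and IGD_ASERVER_HOME variables described in File System and Directory Variables Used in This Guide:

    # Verify that the shared Oracle home and the Administration Server domain home
    # are mounted and readable from the new node
    ls $ORACLE_HOME
    ls $IGD_ASERVER_HOME
    df -h $IGD_ASERVER_HOME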

Scaling Out a Cluster

The steps provided in this procedure use the IDM EDG topology as a reference. Initially, there are two application tier hosts (OAMHOST1 and OAMHOST2, or OIGHOST1 and OIGHOST2), each running one managed server of each cluster. A new host, HOST3, is added to scale out the clusters with a third managed server. <server>n is the generic name given to the new managed server that you add to the cluster. Depending on the cluster that is being extended and the number of existing nodes, the actual names are oam_server3, oam_policy_mgr3, oim_server3, and so on.

To scale out the cluster, complete the following steps:

  1. On the new node, mount the existing FMW Home, which should include the IDM installation and the domain directory. Ensure that the new node has access to this directory, similar to the rest of the nodes in the domain. Also, ensure that the FMW Home you mount is the one associated with the domain you are extending.
  2. The inventory is located in the shared directory (for example, /u01/oracle/products/oraInventory), per Oracle's recommendation, so you do not need to attach any home. However, you may want to run the script /u01/oracle/products/oraInventory/createCentralInventory.sh.

    This script creates or updates the local file /etc/oraInst.loc on the new node so that it points to the oraInventory location.

    If there are other inventory locations on the new host, you can use them, but the /etc/oraInst.loc file must be updated accordingly in each case.
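
    For example, the following is a minimal sketch (the inst_group value is an assumption and may differ in your environment):

      # Run as root (or with sudo) because the script writes to /etc
      /u01/oracle/products/oraInventory/createCentralInventory.sh

      # Verify the pointer file that the script creates or updates
      cat /etc/oraInst.loc
      # Expected output, for example:
      # inventory_loc=/u01/oracle/products/oraInventory
      # inst_group=oinstall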

  3. Update the /etc/hosts files to add the name of the new host (unless you are using DNS), as described in Verifying IP Addresses and Host Names in DNS or Hosts File. If you are using host aliases such as OIGHOST, then ensure that you add an entry for that host.

    For example:

    10.229.188.203 iadadminvhn.example.com iadadminvhn
    10.229.188.204 igdadminvhn.example.com igdadminvhn
    10.229.188.205 oighost1.example.com oighost1
    10.229.188.206 oighost2.example.com oighost2
    10.229.188.207 oighost3.example.com oighost3 
    10.229.188.208 oighost4.example.com oighost4 
    10.229.188.209 webhost1.example.com webhost1 
    10.229.188.210 webhost2.example.com webhost2 
    10.229.188.211 oamhost1.example.com oamhost1
    10.229.188.212 oamhost2.example.com oamhost2
    10.229.188.213 oamhost3.example.com oamhost3
    10.229.188.214 oamhost4.example.com oamhost4
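
    You can confirm that the new entries resolve on each host before you continue. A minimal sketch, assuming the oighost3 entry added above:

      # Verify name resolution for the new host from every node in the domain
      getent hosts oighost3.example.com
      ping -c 1 oighost3.example.com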
  4. Log into the WebLogic Remote Console to create a new machine:
    1. Go to Environment and select Machines.
    2. Click New to create a new machine for the new node.
    3. Set Name to oighostn.
    4. Click the Node Manager tab.
    5. For End to End SSL, set Type to SSL.
    6. Set Listen Address to oighostn.example.com.
    7. Click Save and Commit changes in the Shopping Cart.
  5. Use the Oracle WebLogic Remote Console to clone the first managed server in the cluster into a new managed server.
    1. Go to Environment and select Servers.
    2. Click Create.
    3. In Copy settings from another server, select the first managed server in the cluster that you want to scale out, and click Create.
    4. Use Table 18-1, Details of the Cluster to be Scaled Out, to set the corresponding name, listen address, and SSL listen port for the cluster that you are scaling out.
    5. Click the new managed server, select Configuration, and then click General.
    6. Update the Machine from OIGHOST1 to OIGHOSTn.
    7. Update the Administration Port for the server so that it is consistent with the other servers in the cluster. Refer to the existing servers for the appropriate Administration Port.
    8. Click Save and Commit changes in the Shopping Cart.

    Table 18-1 Details of the Cluster to be Scaled Out

    Cluster to Scale Out | Server to Clone | New Server Name | Server Listen Address | Server Listen Port | SSL Server Listen Port | Server Admin Port
    SOA_Cluster | soa_server1 | soa_servern | oighostn.example.com | 7003 | 7004 | 9004
    OIM_Cluster | oim_server1 | oim_servern | oighostn.example.com | 14000 | 14001 | 9010
    OAM_Cluster | oam_server1 | oam_servern | oamhostn.example.com | 14100 | 14101 | 9508
    AMA_Cluster | oam_policy_mgr1 | oam_policy_mgrn | oamhostn.example.com | 14150 | 14151 | 9509

  6. Update the deployment Staging Directory Name of the new server, as described in Modifying the Upload and Stage Directories to an Absolute Path in an Enterprise Deployment.
  7. Create a new certificate as described in Obtaining SSL Certificates and update your domain certificate store as described in Update Server's Security Settings Using the Remote Console.
  8. The new server's keystore location and SSL configuration are carried over from the copied server (for example, oim_server1), but you must update the passwords (they are re-encrypted for the new server) and the Server Private Key Alias entry for this new server.

    Note:

    This is only required for End to End SSL.
    1. Navigate to Environment > Servers.

    2. Click the new server.

    3. Navigate to Security > Keystores.

    4. Update the Custom Identity Key Store Pass Phrase and Custom Trust Key Store Pass Phrase with the password you provided when you created the SSL identity and trust stores.

    5. Click the SSL tab under Security.

    6. Update the Server Private Key Pass Phrase with the password you provided when you created the certificate.

    7. Add the listen address that you used in the previous step (certificate generation for the new server) as Server Private Key Alias.

  9. Update the TLOG JDBC persistent store of the new managed server:
    1. Log into the WebLogic Remote Console.
    2. Go to Environment and expand the Servers link on the navigation tree on the left.
    3. Click the new server <server>n.
    4. Click the Services > JTA tab.
    5. Ensure Transaction Log Store in JDBC is selected and change the Transaction Log Prefix name to TLOG_<server>n.
    6. The rest of the fields are carried over from the copied server, including the data source used for the JDBC store (WLSRuntimeSchemaDataSource).
    7. Click Save and Commit changes in the Shopping Cart.

    Use the following table to identify the clusters that use JDBC TLOGs by default:

    Table 18-2 The Name of Clusters that Use JDBC TLOGs by Default

    Cluster to Scale Out | New Server Name | TLOG Persistent Store
    SOA_Cluster | soa_servern | JDBC
    OIM_Cluster | oim_servern | Default JDBC
    OAM_Cluster | oam_servern | Not Applicable
    AMA_Cluster | oam_policy_mgrn | Not Applicable

  10. If the cluster that you are scaling out is configured for automatic service migration, update the JTA Migration Policy to the required value.
    1. Go to Environment and expand Servers. From the list of servers, select <server>n, and then click the JTA Migratable Target.
    2. Use Table 18-3 to ensure that the recommended JTA Migration Policy is set for the cluster that you want to scale out.

      Table 18-3 The Recommended JTA Migration Policy for the Cluster to be Scaled Out

      Cluster to Scale Out | New Server Name | JTA Migration Policy
      SOA_Cluster | soa_servern | Failure Recovery
      OAM_Cluster | oam_servern | Not Applicable
      AMA_Cluster | oam_policy_mgrn | Not Applicable
      OIM_Cluster | oim_servern | Failure Recovery

  11. If the cluster you are scaling out is configured for automatic service migration, use the Oracle WebLogic Remote Console to update the automatically created <server>n (migratable) target with the recommended migration policy, because by default it is set to Manual Service Migration Only.

    Use the following table for the list of migratable targets to update:

    Table 18-4 The Recommended Migratable Targets to Update

    Cluster to Scale Out | Migratable Target to Update | Migration Policy
    SOA_Cluster | soa_servern (migratable) | Auto-Migrate Failure Recovery Services
    OIM_Cluster | oim_servern (migratable) | Auto-Migrate Failure Recovery Services
    OAM_Cluster | Not applicable | Not applicable
    AMA_Cluster | Not applicable | Not applicable

    1. Go to Environment, then Migratable Targets.
    2. Click <server>3 (migratable), for example, oim_server3.
    3. Change the Service Migration Policy to the value listed in the table.
    4. Leave the Constrained Candidate Server list blank, even if servers are already chosen. If no servers are selected, you can migrate this migratable target to any server in the cluster.
    5. Click Save and Commit Changes in the Shopping Cart. Notice that a change from the default migration policy (manual) requires a restart of the existing servers in the cluster.
  12. Verify that the Constrained Candidate Server list in the existing migratable servers in the cluster is empty. It should be empty out-of-the-box because the Configuration Wizard leaves it empty. An empty candidate list means that all the servers in the cluster are candidates, which is the best practice.
    1. Go to each migratable server.
    2. Click the Migration tab and check the Constrained Candidate Servers list.
    3. Ensure that the Chosen server list is empty. It should be empty out-of-the-box.
    4. If the server list is not empty, clear it. Alternatively, if the list is not empty because you explicitly decided to constrain migration to specific servers only, modify it to accommodate the new server. Click Save and Commit Changes in the Shopping Cart. Notice that a change in the candidate list requires a restart of the existing servers in the cluster.
  13. Create the required persistent stores for the JMS servers used in the new server.
    1. Log into the WebLogic Remote Console. In the Edit Tree, expand Services and select JDBC Stores.
    2. Click New.

    Use the following table to create the required persistent stores:

    Note:

    The numbers in the names and prefixes of the existing resources were assigned automatically by the Configuration Wizard during domain creation. For example:

    • BPMJMSJDBCStore_auto_1 - soa_1

    • BPMJMSJDBCStore_auto_2 - soa_2

    • JDBCStore-OIM_auto_1 - oim1

    • JDBCStore-OIM_auto_2 - oim2

    • SOAJMSJDBCStore_auto_1 - soa_1

    • SOAJMSJDBCStore_auto_2 - soa_2

    • UMSJMSJDBCStore_auto_1 - soa_1

    • UMSJMSJDBCStore_auto_2 - soa_2

    So review the existing prefixes and select a new and unique prefix and name for each new persistent store.

    To avoid naming conflicts and simplify the configuration, new resources are qualified with the scaled tag and are shown here as an example.

    Table 18-5 The New Resources Qualified with the Scaled Tag

    Cluster to Scale Out | Persistent Store | Prefix Name | Data Source | Target
    SOA_Cluster | UMSJMSJDBCStore_soa_scaled_3 | soaums_scaled_3 | WLSSchemaDataSource | soa_server3 (migratable)
    SOA_Cluster | SOAJMSJDBCStore_soa_scaled_3 | soajms_scaled_3 | WLSSchemaDataSource | soa_server3 (migratable)
    SOA_Cluster | BPMJMSJDBCStore_soa_scaled_3 | soabpm_scaled_3 | WLSSchemaDataSource | soa_server3 (migratable)
    OIM_Cluster | JDBCStore-OIM_scaled_3 | N/A | WLSSchemaDataSource | oim_server3 (migratable)

  14. Create the required JMS Servers for the new managed server.
    1. Go to WebLogic Remote Console. In the Edit Tree, select Services, and then click JMS Servers.
    2. Click New.

    Use the following table to create the required JMS Servers. Assign to each JMS Server the previously created persistent stores:

    Note:

    The numbers in the names of the existing resources were assigned automatically by the Configuration Wizard during domain creation.

    So review the existing JMS server names and select a new and unique name for each new JMS server.

    To avoid naming conflicts and simplify the configuration, new resources are qualified with the product_scaled_N tag and are shown here as an example.

    Cluster to Scale Out | JMS Server Name | Persistent Store | Target
    SOA_Cluster | UMSJMSServer_soa_scaled_3 | UMSJMSJDBCStore_soa_scaled_3 | soa_server3 (migratable)
    SOA_Cluster | SOAJMSServer_soa_scaled_3 | SOAJMSJDBCStore_soa_scaled_3 | soa_server3 (migratable)
    SOA_Cluster | BPMJMSServer_soa_scaled_3 | BPMJMSJDBCStore_soa_scaled_3 | soa_server3 (migratable)
    SOA_Cluster | ProcMonJMSServer_soa_scaled_3 (only when you use Insight) | ProcMonJMSJDBCStore_soa_scaled_3 | soa_server3 (migratable)
    OIM_Cluster | OIMJMSServer_scaled_3 | JDBCStore-OIM_scaled_3 | oim_server3 (migratable)

  15. Update the SubDeployment Targets for JMS Modules (if applicable) to include the recently created JMS servers.
    1. Expand Services and select JMS Modules.
    2. Click the JMS module. For example: BPMJMSModule.

      Expand Sub Deployments and select the corresponding subdeployment to update its targets. Use the following table to identify the JMS modules to update, depending on the cluster that you are scaling out:

      Table 18-6 The JMS Modules to Update

      Cluster to Scale Out | JMS Module to Update | JMS Server to Add to the Subdeployment
      SOA_Cluster | UMSJMSSystemResource (*) | UMSJMSServer_soa_scaled_3
      SOA_Cluster | SOAJMSModule | SOAJMSServer_soa_scaled_3
      SOA_Cluster | BPMJMSModule | BPMJMSServer_soa_scaled_3
      SOA_Cluster | ProcMonJMSModule (*) (only if you have configured Insight) | ProcMonJMSServer_soa_scaled_3
      OIM_Cluster | OIMJMSModule | OIMJMSServer_scaled_3
      (*) Some modules (for example, UMSJMSSystemResource) may be targeted to more than one cluster. Ensure that you update the appropriate subdeployment in each case.
    3. Add the corresponding JMS Server to the existing subdeployment.

      Note:

      The subdeployment module name is a random name in the form of SOAJMSServerXXXXXX, UMSJMSServerXXXXXX, or BPMJMSServerXXXXXX, resulting from the Configuration Wizard JMS configuration for the first two servers (soa_server1 and soa_server2).
    4. Click Save and Commit Changes in the Shopping Cart.
  16. Start the Node Manager in the Managed Server domain directory on OIGHOST3. Follow these steps to update and start the Node Manager from the Managed Server home (a verification sketch follows this step):
    1. Verify that the listen address in the nodemanager.properties file is set correctly by completing the following steps:
      • Change directory to the MSERVER_HOME binary directory:

        cd $IGD_MSERVER_HOME/nodemanager

      • Open the nodemanager.properties file for editing.

      • Ensure that the ListenAddress property is set to the correct host name. For example, on oighost3:

        ListenAddress=oighost3

      • Update the ListenPort property with the correct Listen Port details.

      • Make sure that QuitEnabled is set to true. If this line is not present in the nodemanager.properties file, add the following line:

        QuitEnabled=true

    2. Change directory to the MSERVER_HOME binary directory:
      cd $IGD_MSERVER_HOME/bin
    3. Use the following command to start the Node Manager:

      nohup ./startNodeManager.sh > $MSERVER_HOME/nodemanager/nodemanager.out 2>&1 &
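
      You can confirm the properties before you start Node Manager, and check the output file afterward. A minimal sketch, assuming the variables used above:

        cd $IGD_MSERVER_HOME/nodemanager
        # Confirm the listen address, listen port, and QuitEnabled values
        grep -E '^(ListenAddress|ListenPort|QuitEnabled)=' nodemanager.properties
        # After Node Manager starts, check the output file for a successful startup
        tail -20 nodemanager.out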

  17. The configuration is finished. Now sign in to the new host and run the pack command to create a template pack, as follows:
    cd $ORACLE_COMMON_HOME/common/bin
    ./pack.sh -managed=true \
    -domain=$IGD_ASERVER_HOME \
    -template=/full_path/scaleout_domain.jar \
    -template_name=scaleout_domain_template \
    -log=/tmp/pack_scaleout.log \
    -log_priority=debug

    In this example:

    • Replace $IGD_ASERVER_HOME with the actual path to the domain directory that you created on the shared storage device.

    • Replace full_path with the complete path to the location where you want to create the domain template jar file. You need to reference this location when you copy or unpack the domain template jar file. Oracle recommends that you choose a shared volume other than ORACLE_HOME, or write to /tmp/ and copy the files manually between servers (see the sketch after this list).

      You must specify a full path for the template jar file as part of the -template argument to the pack command:
      $SHARED_CONFIG_DIR/domains/template_filename.jar
    • scaleout_domain.jar is a sample name for the jar file that you are creating, which contains the domain configuration files.

    • scaleout_domain_template is the label that is assigned to the template data stored in the template file.
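
    If you run pack and unpack on different hosts and wrote the template to /tmp, copy the file between them manually. A minimal sketch, assuming the new host is oighost3 and the sample file name used in this step:

      # Copy the domain template from the host where you ran pack to the new host
      scp /tmp/scaleout_domain.jar oighost3:/tmp/scaleout_domain.jar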

  18. Run the unpack command on OIGHOSTN to unpack the template in the managed server domain directory, as follows:
    cd $ORACLE_COMMON_HOME/common/bin
    ./unpack.sh -domain=$IGD_MSERVER_HOME \
    -overwrite_domain=true \
    -template=/full_path/scaleout_domain.jar \
    -log_priority=DEBUG \
    -log=/tmp/unpack.log \
    -app_dir=$APPLICATION_HOME

    In this example:

    • Replace $IGD_MSERVER_HOME with the complete path to the domain home to be created on the local storage disk. This is the location where the copy of the domain is unpacked.

    • Replace /full_path/scaleout_domain.jar with the complete path and file name of the domain template jar file that you created when you ran the pack command to pack up the domain on the shared storage device.

    • Replace $APPLICATION_HOME with the complete path to the Application directory for the domain on shared storage. See File System and Directory Variables Used in This Guide.
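
    After the unpack completes, you can confirm that the managed server domain directory was created. A minimal sketch, assuming $IGD_MSERVER_HOME is set as described above:

      # The unpacked domain should contain a config/config.xml file
      ls $IGD_MSERVER_HOME/config/config.xml
      # Review the unpack log for errors
      grep -i error /tmp/unpack.log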

  19. When scaling out the OAM_Cluster:

    Register the new Managed Server with Oracle Access Management by doing the following:

    1. Log in to the Access Management console using the URL outlined in URLs Used in This Chapter.

    2. Go to the System Configuration tab.

    3. Click Server Instances.

    4. Select Create from the Actions menu.

    5. Enter the following information:

      • Server Name: oam_server3

      • Host: Host that the server runs on

      • Port: Listen port that was assigned when the Managed Server was created

      • OAM Proxy Port: Port you want the Access Manager proxy to run on. This is unique for the host

      • Proxy Server ID: AccessServerConfigProxy

      • Mode: Set to same mode as existing Access Manager servers.

    6. Go to the Coherence tab, and set Local Port to a unique value on the host.

    7. Click Apply.

    8. Restart the WebLogic Administration Server.

    Add the newly created Access Manager server to all of the WebGate profiles that might be using it, such as Webgate_IDM and IAMSuiteAgent.

    For example, to add the Access Manager server to Webgate_IDM, access the Access Management console using the URL outlined in URLs Used in This Chapter and do the following:

    1. Log in as the Access Manager Administrative User.

    2. Go to the System Configuration tab.

    3. Expand Access Manager Settings - SSO Agents - OAM Agents.

    4. Click the open folder icon, then click Search. You should see the WebGate agent Webgate_IDM.

    5. Click the agent Webgate_IDM.

    6. Select Edit from the Actions menu.

    7. Click + in the Primary Server list (or the Secondary Server list if this is a secondary server).

    8. Select the newly created managed server from the Server list.

    9. Set Maximum Number of Connections to 10.

    10. Click Apply.

    11. Repeat these steps for all of the WebGates that are in use, and then restart the new Managed Server.

  20. Start Node Manager on the new host.
    cd $NM_HOME
    nohup ./startNodeManager.sh > ./nodemanager.out 2>&1 &
  21. Start the new managed server.
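
    One way to start the server from the command line is the domain's startManagedWebLogic.sh script. The following is a sketch, assuming the new server is oim_server3; replace ADMIN_URL with the t3 or t3s URL of your Administration Server. In an enterprise deployment you may prefer to start managed servers through Node Manager or the WebLogic Remote Console instead.

      cd $IGD_MSERVER_HOME/bin
      # ADMIN_URL is a placeholder for the Administration Server URL of your domain
      ./startManagedWebLogic.sh oim_server3 ADMIN_URL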
  22. Update the web tier configuration to include the new server. If you are using OHS, there is no need to add the new server to OHS. By default, the Dynamic Server List is used, which means that the list of servers in the cluster is automatically updated when a new node becomes part of the cluster. So, adding it to the list is not mandatory. The WebLogicCluster directive needs only a sufficient number of redundant server:port combinations to guarantee the initial contact in case of a partial outage.

    If there are expected scenarios where the Oracle HTTP Server is restarted and only the new server is up, update the WebLogicCluster directive to include the new server.

    For example:

    <Location /oam>
     WLSRequest ON
     WLCookieName OAMJSESSIONID
     WebLogicCluster oamhost1.example.com:14101,oamhost2.example.com:14101,oamhost3.example.com:14101
    </Location>
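
    If you edit the Oracle HTTP Server configuration, restart the OHS instances so that they pick up the change. A minimal sketch, assuming a standalone OHS domain (its domain home referenced here as $OHS_DOMAIN_HOME) and an instance named ohs1:

      cd $OHS_DOMAIN_HOME/bin
      # Restart the OHS instance so it picks up the updated configuration
      ./restartComponent.sh ohs1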

Verifying the Scale Out

After scaling out and starting the server, proceed with the following verifications:
  1. Verify the correct routing to web applications.

    For example:

    1. Access the application on the load balancer:
      For SSL Terminated:
      http://igdinternal.example.com:7777/soa-infra
      For End to End SSL:
      https://igdinternal.example.com/soa-infra
    2. Check that there is activity in the new server also:
      In the Remote Console, go to Monitoring Tree and navigate to Deployments > Application Runtime Data > soa-infra.
    3. You can also verify that the web sessions are created in the new server:
      • In Remote Console, go to Monitoring Tree and navigate to Deployments > Application Runtime Data > soa-infra.

      • Go to Component Runtimes and click soa_server3/soa-infra.

      • Verify if there are sessions.

      You can use the sample URLs and the corresponding web application modules identified in the following table to check whether sessions are created in the new server for the cluster that you are scaling out (see the sketch after the table):

      Cluster to Verify | Sample URL to Test | Web Application Module
      SOA_Cluster | http://igdinternal.example.com:7777/soa-infra | soa-infra > soa-infra
      OAM_Cluster | http://login.example.com/oam |
      AMA_Cluster | http://iadadmin.example.com/access |
      OIM_Cluster | https://oig.example.com/identity |
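
      You can exercise these URLs from the command line with curl. A minimal sketch; adjust the protocol, port, and -k flag to match your SSL setup, and note that protected URLs may redirect to a login page rather than return the application directly:

        # Check that the front-end URLs respond through the load balancer
        curl -k -s -o /dev/null -w "%{http_code}\n" http://igdinternal.example.com:7777/soa-infra
        curl -k -s -o /dev/null -w "%{http_code}\n" https://oig.example.com/identity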

  2. Verify that JMS messages are being produced to and consumed from the destinations in all three servers.
    1. In the Remote Console, go to the Monitoring Tree.
    2. Navigate to Dashboards > JMS Destinations.
  3. Verify the service migration, as described in Validating Automatic Service Migration.

Scaling In the Topology

This section describes how to scale in the topology for a cluster.

Perform the following steps to scale in the topology for a cluster:

  1. Stop the managed server that you want to delete.
  2. If you are using automatic service migration, verify that the resources corresponding to that server are not present in the remaining servers before you shrink and scale in the cluster. If you use exactly-once migration policies, stop all the servers.
  3. If you are removing OAM servers, use the OAM Console to remove them from the WebGate configuration.
    1. Log in to the OAM console using the URL in URLs Used in This Chapter.
    2. Go to the System Configuration tab.
    3. Click Server Instances.
    4. Select the server instances that you want to remove and select Delete from the Actions menu.
    5. Click Apply.
  4. Use the Oracle WebLogic Remote Console to delete the new server:
    1. Click Edit Tree.
    2. Go to Environment > Servers.
    3. Select the server that you want to delete.
    4. Click Delete.
    5. Click Save and Commit changes in the Shopping Cart.

    Note:

    If the migratable target was not deleted in the previous step, you get the following error message:

    The following failures occurred: --MigratableTargetMBean oam_server3_soa-failure-recovery (migratable) does not have a preferred server set.
    Errors must be corrected before proceeding.
  5. Use the Oracle WebLogic Remote Console to update the subdeployment of each JMS Module that is used by the cluster that you are shrinking.

    Use the following table to identify the module for each cluster and perform this action for each module:

    Cluster to Scale In | JMS Module to Update | JMS Server to Delete from the Subdeployment
    SOA_Cluster | UMSJMSSystemResource | UMSJMSServer_soa_scaled_3
    SOA_Cluster | SOAJMSModule | SOAJMSServer_soa_scaled_3
    SOA_Cluster | BPMJMSModule | BPMJMSServer_soa_scaled_3
    OIM_Cluster | OIMJMSModule | OIMJMSServer_scaled_3

    1. Click Edit Tree.
    2. Go to Services > JMS System Resources.
    3. Click the JMS module.
    4. Click Sub Deployment.
    5. Select the Sub Deployment module.
    6. Unselect the JMS server that was created for the deleted server.
    7. Click Save and Commit changes in the Shopping Cart.
  6. Use the Oracle WebLogic Remote Console to delete the JMS servers:
    1. Click Edit Tree.
    2. Go to Services > JMS Servers.
    3. Select the JMS Servers that you created for the new server.
    4. Click Delete.
    5. Click Save and Commit changes in the Shopping Cart.
  7. Use the Oracle WebLogic Remote Console to delete the JMS persistent stores:
    1. Click Edit Tree.
    2. Go to Services > JDBC Stores.
    3. Select the JDBC Store that you created for the new server.
    4. Click Delete.
    5. Click Save and Commit changes in the Shopping Cart.
  8. If the machine that was hosting the deleted server is not used by any other servers, delete it by performing the following steps:
    1. Click Edit Tree.
    2. Go to Environment > Machines.
    3. Select the machine that you created for the new server.
    4. Click Delete.
    5. Click Save and Commit changes in the Shopping Cart.
  9. Update the Web tier configuration to remove references to the deleted server.
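
    For example, if you previously added the server explicitly to the WebLogicCluster directive, remove it again. The following is a sketch based on the example earlier in this chapter, assuming the /oam location and the remaining hosts oamhost1 and oamhost2; restart the Oracle HTTP Server instances after you change the configuration:

      <Location /oam>
       WLSRequest ON
       WLCookieName OAMJSESSIONID
       WebLogicCluster oamhost1.example.com:14101,oamhost2.example.com:14101
      </Location>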