- Enterprise Deployment Guide for Oracle SOA Suite
- Common Configuration and Management Procedures for an Enterprise Deployment
- Scaling Procedures for an Enterprise Deployment
22 Scaling Procedures for an Enterprise Deployment
The scaling procedures for an enterprise deployment include scale out, scale in, scale up, and scale down. During a scale-out operation, you add managed servers to new nodes. You can remove these managed servers by performing a scale in operation. During a scale-up operation, you add managed servers to existing hosts. You can remove these servers by performing a scale-down operation.
This chapter describes the procedures to scale out/in and scale up/down static and dynamic clusters.
- Scaling Out the Topology
When you scale out the topology, you add new managed servers to new nodes.
- Scaling in the Topology
When you scale in the topology, you remove managed servers that were added to new hosts.
- Scaling Up the Topology
When you scale up the topology, you add new managed servers to the existing hosts.
- Scaling Down the Topology
When you scale down the topology, you remove the managed servers that were added to the existing hosts.
Scaling Out the Topology
When you scale out the topology, you add new managed servers to new nodes.
This section describes the procedures to scale out the SOA topology with static and dynamic clusters.
Parent topic: Scaling Procedures for an Enterprise Deployment
Scaling Out the Topology for Static Clusters
This section lists the prerequisites, explains the procedure to scale out the topology with static clusters, describes the steps to verify the scale-out process, and finally the steps to scale down (shrink).
- Prerequisites for Scaling Out
- Scaling Out a Static Cluster
- Verifying the Scale Out of Static Clusters
Parent topic: Scaling Out the Topology
Prerequisites for Scaling Out
Before you perform a scale out of the topology, you must ensure that you meet the following requirements:
-
The starting point is a cluster with managed servers already running.
-
The new node can access the existing home directories for WebLogic Server and SOA. Use the existing installations in shared storage. You do not need to install WebLogic Server or SOA binaries in a new location. However, you do need to run the pack and unpack commands to bootstrap the domain configuration on the new node.
-
It is assumed that the cluster syntax is used for all internal RMI invocations, JMS adapter, and so on.
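Before running pack and unpack, a quick preflight on the new node can confirm that the shared installations are actually visible. This is an optional sketch; both paths below are hypothetical placeholders, not values from your environment:

```shell
# Optional preflight sketch: check that the shared Middleware home and the
# domain home are mounted and visible on the new node.
# Both paths are hypothetical examples; substitute your own mount points.
MW_HOME=/u01/oracle/products/fmw
ASERVER_HOME=/u01/oracle/config/domains/soadomain
for d in "$MW_HOME" "$ASERVER_HOME"; do
  [ -d "$d" ] && echo "OK: $d" || echo "MISSING: $d"
done
```

If either path reports MISSING, fix the storage mounts before continuing with the scale-out steps.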
Parent topic: Scaling Out the Topology for Static Clusters
Scaling Out a Static Cluster
WLS_XYZn is the generic name given to the new managed server that you add to the cluster. Depending on the cluster that is being extended and the number of existing nodes, the actual names are WLS_SOA3, WLS_OSB3, WLS_ESS3, and so on.
The scale-out procedure does not require downtime if the Candidate Server lists are empty in the existing servers and migratable targets. Using empty candidate lists is the best practice because it means that all the servers in the cluster are candidates for migration.
If you have created your environment following the Enterprise Deployment Guide for release 12.2.1.4, these lists are empty out-of-the-box. When you add a new server to the cluster, the server is automatically considered for migration without the need to restart the existing servers.
If you decided to constrain the migration to specific servers in the cluster, your Candidate Server lists will not be empty. When you add a new server to the cluster, you may need to modify these lists to include the new server. In this case, you have to restart the existing servers during the scale-out process.
To scale out the cluster, complete the following steps:
- On the new node, mount the existing FMW Home, which should include the SOA installation and the domain directory. Ensure that the new node has access to this directory, similar to the rest of the nodes in the domain.
- Locate the inventory in the shared directory (for example, /u01/oracle/products/oraInventory), per Oracle’s recommendation. You do not need to attach any home, but you may want to execute the script /u01/oracle/products/oraInventory/createCentralInventory.sh. This command creates and updates the local file /etc/oraInst.loc in the new node to point it to the oraInventory location. If there are other inventory locations in the new host, you can use them, but the /etc/oraInst.loc file must be updated accordingly in each case.
- Update the /etc/hosts files to add the alias SOAHOSTn for the new node, as described in Verifying IP Addresses and Host Names in DNS or Hosts File. For example:
10.229.188.204 host1-vip.example.com host1-vip ADMINVHN 10.229.188.205 host1.example.com host1 SOAHOST1 10.229.188.206 host2.example.com host2 SOAHOST2 10.229.188.207 host3.example.com host3 WEBHOST1 10.229.188.208 host4.example.com host4 WEBHOST2 10.229.188.209 host5.example.com host5 SOAHOST3
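As a sketch, the new alias can be appended to a working copy of the hosts file and reviewed before being moved into place as root. The IP address and host names are the example values shown above:

```shell
# Sketch: add the new node's entry to a working copy of /etc/hosts.
# The IP and names are this guide's example values; use your own.
cp /etc/hosts hosts.new
NEW_ENTRY="10.229.188.209 host5.example.com host5 SOAHOST3"
# Append only if the alias is not already present.
grep -q "SOAHOST3" hosts.new || echo "$NEW_ENTRY" >> hosts.new
# After reviewing hosts.new, copy it into place as root: cp hosts.new /etc/hosts
```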
- Configure a per host node manager in the new node, as described in Creating a Per Host Node Manager Configuration.
- Log in to the Oracle WebLogic Administration Console to create a new machine:
- Go to Environment and select Machines.
- Click New to create a new machine for the new node.
- Set Name to SOAHOSTn (or MFTHOSTn or BAMHOSTn).
- Set Machine OS to Linux.
- Click Next.
- Set Type to Plain.
- Set Listen Address to SOAHOSTn.
- Click Finish, and then click Activate Changes.
- Use the Oracle WebLogic Server Administration Console to clone the first managed server in
the cluster into a new managed server.
- In the Change Center section, click Lock & Edit.
- Go to Environment and select Servers.
- Select the first managed server in the cluster to scale out and click Clone.
- Use Table 22-1 to set the corresponding name, listen address, and listen port, depending on the cluster that you want to scale out.
- Click the new managed server, select Configuration, and then click General.
- Update the Machine from SOAHOST1 to SOAHOSTn.
- Click Save, and then click Activate Changes.
Table 22-1 Details of the Cluster to be Scaled Out
| Cluster to Scale Out | Server to Clone | New Server Name | Server Listen Address | Server Listen Port |
|---|---|---|---|---|
| WSM-PM_Cluster | WLS_WSM1 | WLS_WSM3 | SOAHOST3 | 7010 |
| SOA_Cluster | WLS_SOA1 | WLS_SOA3 | SOAHOST3 | 8001 |
| ESS_Cluster | WLS_ESS1 | WLS_ESS3 | SOAHOST3 | 8021 |
| OSB_Cluster | WLS_OSB1 | WLS_OSB3 | SOAHOST3 | 8011 |
| BAM_Cluster | WLS_BAM1 | WLS_BAM3 | SOAHOST3 | 9001 |
| MFT_Cluster | WLS_MFT1 | WLS_MFT3 | MFTHOST3 | 7500 |
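For scripting or double-checking the clone step, the rows of Table 22-1 can be expressed as a simple lookup. This is only a sketch of the table's contents; the names and ports come from the table itself:

```shell
# Sketch: look up the Table 22-1 clone parameters for one cluster.
cluster=SOA_Cluster
case "$cluster" in
  WSM-PM_Cluster) clone_src=WLS_WSM1; new_name=WLS_WSM3; addr=SOAHOST3; port=7010 ;;
  SOA_Cluster)    clone_src=WLS_SOA1; new_name=WLS_SOA3; addr=SOAHOST3; port=8001 ;;
  ESS_Cluster)    clone_src=WLS_ESS1; new_name=WLS_ESS3; addr=SOAHOST3; port=8021 ;;
  OSB_Cluster)    clone_src=WLS_OSB1; new_name=WLS_OSB3; addr=SOAHOST3; port=8011 ;;
  BAM_Cluster)    clone_src=WLS_BAM1; new_name=WLS_BAM3; addr=SOAHOST3; port=9001 ;;
  MFT_Cluster)    clone_src=WLS_MFT1; new_name=WLS_MFT3; addr=MFTHOST3; port=7500 ;;
esac
echo "Clone $clone_src to $new_name, listening on $addr:$port"
```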
- Update the deployment Staging Directory Name of the new server, as described in Modifying the Upload and Stage Directories to an Absolute Path.
- Create a new key certificate and update the private key alias of the server, as described in Enabling SSL Communication Between the Middle Tier and the Hardware Load Balancer.
- By default, the cloned server uses the default store for TLOGs. If the rest of the servers in the cluster that you are scaling out use TLOGs in a JDBC persistent store, update the TLOG persistent store of the new managed server:
- Go to Environment and select Servers. From the list of servers, select WLS_XYZn , click the Configuration tab, and then click Services.
- Expand Advanced.
- Change Transaction Log Store to JDBC.
- Change Data Source to WLSSchemaDataSource.
- Click Save, and then click Activate Changes.
Use the following table to identify the clusters that use JDBC TLOGs by default:
Table 22-2 The Name of Clusters that Use JDBC TLOGs by Default
| Cluster to Scale Out | New Server Name | TLOG Persistent Store |
|---|---|---|
| WSM-PM_Cluster | WLS_WSM3 | Default (file) |
| SOA_Cluster | WLS_SOA3 | JDBC |
| ESS_Cluster | WLS_ESS3 | Default (file) |
| OSB_Cluster | WLS_OSB3 | JDBC |
| BAM_Cluster | WLS_BAM3 | JDBC |
| MFT_Cluster | WLS_MFT3 | JDBC |
- If the cluster that you are scaling out is configured for automatic service migration,
update the JTA Migration Policy to the required value.
- Go to Environment and select Servers. From the list of servers, select WLS_XYZn , click the Configuration tab, and then click the Migration tab.
- Use Table 22-3 to set the recommended JTA Migration Policy depending on the cluster that you want to
scale out.
Table 22-3 The Recommended JTA Migration Policy for the Cluster to be Scaled Out
| Cluster to Scale Out | New Server Name | JTA Migration Policy |
|---|---|---|
| WSM-PM_Cluster | WLS_WSM3 | Manual |
| SOA_Cluster | WLS_SOA3 | Failure Recovery |
| ESS_Cluster | WLS_ESS3 | Manual |
| OSB_Cluster | WLS_OSB3 | Failure Recovery |
| BAM_Cluster | WLS_BAM3 | Failure Recovery |
| MFT_Cluster | WLS_MFT3 | Failure Recovery |
- Click Save, and then click Activate Changes.
- In the servers already existing in the cluster, verify that the list of the JTA candidate
servers for JTA migration is empty:
- Click Environment and select Servers.
- From the Summary of Servers in the environment, select a server.
- Select the Configuration tab, and then click the Migration tab.
- Check the JTA Candidate Servers list and verify that the list is empty (an empty list indicates that all the servers in the cluster are JTA candidate servers). The list should be empty out-of-the-box so no changes are needed.
- If the server list is not empty, you should modify the list to make it blank. Or, if your list is not empty because you explicitly decided to constrain the migration to some specific servers only, modify it as per your preferences to accommodate the new server. Save and activate the changes. Restart the existing servers for this change to become effective.
- If the cluster you are scaling out is configured for automatic service migration, use the
Oracle WebLogic Server Administration Console to update the automatically created WLS_XYZn
(migratable) with the recommended migration policy, because by default it is set to
Manual Service Migration Only.
Use the following table for the list of migratable targets to update:
Table 22-4 The Recommended Migratable Targets to Update
| Cluster to Scale Out | Migratable Target to Update | Migration Policy |
|---|---|---|
| WSM-PM_Cluster | NA | NA |
| SOA_Cluster | WLS_SOA3 (migratable) | Auto-Migrate Failure Recovery Services |
| ESS_Cluster | NA | NA |
| OSB_Cluster | WLS_OSB3 (migratable) | Auto-Migrate Failure Recovery Services |
| BAM_Cluster | WLS_BAM3 (migratable) | Auto-Migrate Exactly-Once Services |
| MFT_Cluster | WLS_MFT3 (migratable) | Auto-Migrate Failure Recovery Services |
- Go to Environment, select Clusters, and then click Migratable Servers.
- Click Lock & Edit.
- Click WLS_XYZ3 (migratable).
- Select the Configuration tab and click Migration.
- Change the Service Migration Policy to the value listed in the table.
- Leave the Constrained Candidate Server list blank. If no servers are selected, you can migrate this migratable target to any server in the cluster.
- Click Save, and then click Activate Changes.
- For components that use multiple migratable targets, in addition to step 11, you must create another migratable target. BAM is used here as an example: use the Oracle WebLogic Server Administration Console to clone WLS_BAM3 (migratable) into a new migratable target.
- Click Environment, select Clusters, and then click Migratable Servers.
- Click Lock & Edit.
- Click WLS_BAM3 (migratable) and click Clone.
- Name the new target WLS_BAM3_bam-exactly-once (migratable).
- Click the new migratable server.
- Click the Configuration tab and select Migration.
- If not set, change the Service Migration Policy to Auto-Migrate Exactly-Once Services.
- Leave the Constrained Candidate Server list blank. If no servers are selected, you can migrate this migratable target to any server in the cluster.
- Click Save, and then click Activate Changes.
- Verify that the Constrained Candidate Server list in the existing
migratable servers in the cluster is empty. It should be empty out-of-the-box because the
Configuration Wizard leaves it empty. An empty candidate list means that all the servers in the
cluster are candidates, which is the best practice.
- Go to each migratable server.
- Select the Configuration tab, click Migration, and then select Constrained Candidate Server.
- Ensure that the server list is empty. It should be empty out-of-the-box.
- If the server list is not empty, you should modify the list to make it blank. Or, if your list is not empty because you explicitly decided to constrain the migration to some specific servers only, modify it as per your preferences to accommodate the new server. Save and activate the changes. Restart the existing servers for this change to become effective.
- Create the required persistent stores for the JMS servers.
- Log in to WebLogic Console and go to Services and select Persistent Stores.
- Click New and select Create JDBCStore.
Use the following table to create the required persistent stores:
Note:
The numbers in the names and prefixes of the existing resources were assigned automatically by the Configuration Wizard during the domain creation. For example:
- UMSJMSJDBCStore_auto_1 — soa_1
- UMSJMSJDBCStore_auto_2 — soa_2
- BPMJMSJDBCStore_auto_1 — soa_3
- BPMJMSJDBCStore_auto_2 — soa_4
- SOAJMSJDBCStore_auto_1 — soa_5
- SOAJMSJDBCStore_auto_2 — soa_6
Review the existing prefixes and select a new and unique prefix and name for each new persistent store.
To avoid naming conflicts and simplify the configuration, new resources are qualified with the scaled tag and are shown here as an example.
Table 22-5 The New Resources Qualified with the Scaled Tag
| Cluster to Scale Out | Persistent Store | Prefix Name | Data Source | Target |
|---|---|---|---|---|
| WSM-PM_Cluster | NA | NA | NA | NA |
| SOA_Cluster | UMSJMSJDBCStore_soa_scaled_3 | soaums_scaled_3 | WLSSchemaDataSource | WLS_SOA3 (migratable) |
| SOA_Cluster | SOAJMSJDBCStore_soa_scaled_3 | soajms_scaled_3 | WLSSchemaDataSource | WLS_SOA3 (migratable) |
| SOA_Cluster | BPMJMSJDBCStore_soa_scaled_3 | soabpm_scaled_3 | WLSSchemaDataSource | WLS_SOA3 (migratable) |
| ESS_Cluster | NA | NA | NA | NA |
| OSB_Cluster | UMSJMSJDBCStore_osb_scaled_3 | osbums_scaled_3 | WLSSchemaDataSource | WLS_OSB3 (migratable) |
| OSB_Cluster | OSBJMSJDBCStore_osb_scaled_3 | osbjms_scaled_3 | WLSSchemaDataSource | WLS_OSB3 (migratable) |
| BAM_Cluster | UMSJMSJDBCStore_bam_scaled_3 | bamums_scaled_3 | WLSSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable) |
| BAM_Cluster | BamPersistenceJmsJDBCStore_bam_scaled_3 | bamP_scaled_3 | WLSSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable) |
| BAM_Cluster | BamReportCacheJmsJDBCStore_bam_scaled_3 | bamR_scaled_3 | WLSSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable) |
| BAM_Cluster | BamAlertEngineJmsJDBCStore_bam_scaled_3 | bamA_scaled_3 | WLSSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable) |
| BAM_Cluster | BamJmsJDBCStore_bam_scaled_3 | bamjms_scaled_3 | WLSSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable) |
| BAM_Cluster | BamCQServiceJmsJDBCStore_bam_scaled_3 | bamC_scaled_3 | WLSSchemaDataSource | WLS_BAM3* |
| MFT_Cluster | MFTJMSJDBCStore_mft_scaled_3 | mftjms_scaled_3 | WLSSchemaDataSource | WLS_MFT3 (migratable) |
Note:
(*) BamCQServiceJmsServers host local queues for the BAM CQService (Continuous Query Engine) and are meant to be local. They are intentionally targeted to the WebLogic servers directly and not to the migratable targets.
- Create the required JMS Servers for the new managed server.
- Go to WebLogic Console, select Services, click Messaging, and then click JMS Servers.
- Click Lock & Edit.
- Click New.
Use the following table to create the required JMS Servers. Assign to each JMS Server the previously created persistent stores:
Note:
The numbers in the names of the existing resources are assigned automatically by the Configuration Wizard during domain creation. Review the existing JMS server names and select a new and unique name for each new JMS server.
To avoid naming conflicts and simplify the configuration, new resources are qualified with the product_scaled_N tag and are shown here as an example.
| Cluster to Scale Out | JMS Server Name | Persistent Store | Target |
|---|---|---|---|
| WSM-PM_Cluster | NA | NA | NA |
| SOA_Cluster | UMSJMSServer_soa_scaled_3 | UMSJMSJDBCStore_soa_scaled_3 | WLS_SOA3 (migratable) |
| SOA_Cluster | SOAJMSServer_soa_scaled_3 | SOAJMSJDBCStore_soa_scaled_3 | WLS_SOA3 (migratable) |
| SOA_Cluster | BPMJMSServer_soa_scaled_3 | BPMJMSJDBCStore_soa_scaled_3 | WLS_SOA3 (migratable) |
| ESS_Cluster | NA | NA | NA |
| OSB_Cluster | UMSJMSServer_osb_scaled_3 | UMSJMSJDBCStore_osb_scaled_3 | WLS_OSB3 (migratable) |
| OSB_Cluster | wlsbJMSServer_osb_scaled_3 | OSBJMSJDBCStore_osb_scaled_3 | WLS_OSB3 (migratable) |
| BAM_Cluster | UMSJMSServer_bam_scaled_3 | UMSJMSJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable) |
| BAM_Cluster | BamPersistenceJmsServer_bam_scaled_3 | BamPersistenceJmsJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable) |
| BAM_Cluster | BamReportCacheJmsServer_bam_scaled_3 | BamReportCacheJmsJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable) |
| BAM_Cluster | BamAlertEngineJmsServer_bam_scaled_3 | BamAlertEngineJmsJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable) |
| BAM_Cluster | BAMJMSServer_bam_scaled_3 | BamJmsJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable) |
| BAM_Cluster | BamCQServiceJmsServer_bam_scaled_3 | BamCQServiceJmsJDBCStore_bam_scaled_3 | WLS_BAM3* |
| MFT_Cluster | MFTJMSServer_mft_scaled_3 | MFTJMSJDBCStore_mft_scaled_3 | WLS_MFT3 (migratable) |
Note:
(*) BamCQServiceJmsServers host local queues for the BAM CQService (Continuous Query Engine) and are meant to be local. They are intentionally targeted to the WebLogic servers directly and not to the migratable targets.
- Update the SubDeployment Targets for JMS Modules (if applicable) to include the recently created JMS servers.
- Expand Services, select Messaging, and then click JMS Modules.
- Click the JMS module. For example: BPMJMSModule. Use the following table to identify the JMS modules to update, depending on the cluster that you are scaling out:
Table 22-6 The JMS Modules to Update
| Cluster to Scale Out | JMS Module to Update | JMS Server to Add to the Subdeployment |
|---|---|---|
| WSM-PM_Cluster | NA | NA |
| SOA_Cluster | UMSJMSSystemResource * | UMSJMSServer_soa_scaled_3 |
| SOA_Cluster | SOAJMSModule | SOAJMSServer_soa_scaled_3 |
| SOA_Cluster | BPMJMSModule | BPMJMSServer_soa_scaled_3 |
| ESS_Cluster | NA | NA |
| OSB_Cluster | UMSJMSSystemResource * | UMSJMSServer_osb_scaled_3 |
| OSB_Cluster | jmsResources (scope Global) | wlsbJMSServer_osb_scaled_3 |
| BAM_Cluster | BamPersistenceJmsSystemModule | BamPersistenceJmsServer_bam_scaled_3 |
| BAM_Cluster | BamReportCacheJmsSystemModule | BamReportCacheJmsServer_bam_scaled_3 |
| BAM_Cluster | BamAlertEngineJmsSystemModule | BamAlertEngineJmsServer_bam_scaled_3 |
| BAM_Cluster | BAMJMSSystemResource | BAMJMSServer_bam_scaled_3 |
| BAM_Cluster | BamCQServiceJmsSystemModule | N/A (no subdeployment) |
| BAM_Cluster | UMSJMSSystemResource * | UMSJMSServer_bam_scaled_3 |
| MFT_Cluster | MFTJMSModule | MFTJMSServer_mft_scaled_3 |

(*) Some modules (UMSJMSSystemResource, ProcMonJMSModule) may be targeted to more than one cluster. Ensure that you update the appropriate subdeployment in each case.
- Click Configuration and select Subdeployment.
- Add the corresponding JMS Server to the existing subdeployment.
Note:
The subdeployment module name is a random name in the form of SOAJMSServerXXXXXX, UMSJMSServerXXXXXX, or BPMJMSServerXXXXXX, resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).
- Click Save, and then click Activate Changes.
- If you are scaling out a BAM cluster, you must create local queues for the new server in the BamCQServiceJmsSystemModule module. Follow these steps to create them:
- Go to WebLogic Console, select Services, click Messaging, and then JMS Modules.
- Click Lock & Edit.
- Click BamCQServiceJmsSystemModule.
- Click Targets.
- Add WLS_BAM3 to the targets and click Save.
- Click New.
- Select Queue and click Next.
- Name it BamCQServiceAlertEngineQueue_auto_3 and click Next.
- Create a new Subdeployment with the target BamCQServiceJmsServer_bam_scaled_3 and select it for the queue.
- Click Finish.
- Click the newly created queue BamCQServiceAlertEngineQueue_auto_3.
- Go to Configuration, select General, and then click Advanced.
- Set Local JNDI Name to queue/oracle.beam.cqservice.mdbs.alertengine.
- Click Save.
- Repeat these steps to create the other queue BamCQServiceReportCacheQueue_auto_3 with the information in Table 22-7.
- After you finish, you have these new local queues.
Table 22-7 Information to Create the Local Queues
| Name | Type | Local JNDI Name | Subdeployment |
|---|---|---|---|
| BamCQServiceAlertEngineQueue_auto_3 | Queue | queue/oracle.beam.cqservice.mdbs.alertengine | BamCQServiceJmsServer_auto_3 |
| BamCQServiceReportCacheQueue_auto_3 | Queue | queue/oracle.beam.cqservice.mdbs.reportcache | BamCQServiceJmsServer_auto_3 |
- Click Activate Changes.
- The configuration is finished. Now sign in to SOAHOST1 and run the pack command to create a template pack, as follows:
cd ORACLE_COMMON_HOME/common/bin
./pack.sh -managed=true -domain=ASERVER_HOME -template=/full_path/scaleout_domain.jar -template_name=scaleout_domain_template -log_priority=DEBUG -log=/tmp/pack.log
In this example:
- Replace ASERVER_HOME with the actual path to the domain directory that you created on the shared storage device.
- Replace full_path with the complete path to the location where you want to create the domain template jar file. You need to reference this location when you copy or unpack the domain template jar file. Oracle recommends that you choose a shared volume other than ORACLE_HOME, or write to /tmp/ and copy the files manually between servers. You must specify a full path for the template jar file as part of the -template argument to the pack command: SHARED_CONFIG_DIR/domains/template_filename.jar.
- scaleout_domain.jar is a sample name for the jar file that you are creating, which contains the domain configuration files.
- scaleout_domain_template is the label that is assigned to the template data stored in the template file.
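As an optional sketch, the pack command line can be assembled with your values and echoed for review before you run it on SOAHOST1. All three paths below are hypothetical placeholders, not values from your environment:

```shell
# Sketch: build the pack command with placeholder values and print it for review.
# ORACLE_COMMON_HOME, ASERVER_HOME, and TEMPLATE_JAR are hypothetical examples.
ORACLE_COMMON_HOME=/u01/oracle/products/fmw/oracle_common
ASERVER_HOME=/u01/oracle/config/domains/soadomain
TEMPLATE_JAR=/u01/oracle/config/domains/scaleout_domain.jar
PACK_CMD="./pack.sh -managed=true -domain=$ASERVER_HOME -template=$TEMPLATE_JAR -template_name=scaleout_domain_template -log_priority=DEBUG -log=/tmp/pack.log"
echo "$PACK_CMD"
# To execute on SOAHOST1: cd $ORACLE_COMMON_HOME/common/bin && eval "$PACK_CMD"
```

Echoing the command first makes it easy to confirm the -template path points at shared storage before anything is written.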
- Run the unpack command on SOAHOSTn to unpack the template in the managed server domain directory, as follows:
cd ORACLE_COMMON_HOME/common/bin
./unpack.sh -domain=MSERVER_HOME -overwrite_domain=true -template=/full_path/scaleout_domain.jar -log_priority=DEBUG -log=/tmp/unpack.log -app_dir=APPLICATION_HOME
In this example:
- Replace MSERVER_HOME with the complete path to the domain home to be created on the local storage disk. This is the location where the copy of the domain is unpacked.
- Replace /full_path/scaleout_domain.jar with the complete path and file name of the domain template jar file that you created when you ran the pack command to pack up the domain on the shared storage device.
- Replace APPLICATION_HOME with the complete path to the Application directory for the domain on shared storage. See File System and Directory Variables Used in This Guide.
- When scaling out the SOA_Cluster:
  - If BPM Web Forms are used, update the startWebLogic.sh customizations for BPM to include the new node, as explained in Updating SOA BPM Servers for Web Forms.
  - Update the setDomain.sh to include appTrustKeyStore.jks, as explained in Adding the Updated Trust Store to the Oracle WebLogic Server Start Scripts.
- When scaling out OSB_Cluster:
- Restart the Admin Server to see the new server in the Service Bus Dashboard.
- When scaling out MFT_Cluster:
- Default SFTP/FTP ports are used in the new server. If you are not using the defaults, follow the steps described in Configuring the SFTP Ports to configure the ports in the SFTP server.
- Start Node Manager on the new host.
cd $NM_HOME nohup ./startNodeManager.sh > ./nodemanager.out 2>&1 &
- Start the new managed server.
- Update the web tier configuration to include the new server:
- If you are using OTD, log in to Enterprise Manager and update the corresponding origin pool, as explained in Creating the Required Origin Server Pools to add the new server to the pool.
- If you are using OHS, there is no need to add the new server to OHS. By default, the Dynamic Server List is used, which means that the list of servers in the cluster is automatically updated when a new node becomes part of the cluster. So, adding it to the list is not mandatory. The WebLogicCluster directive needs only a sufficient number of redundant server:port combinations to guarantee the initial contact in case of a partial outage. If there are expected scenarios where the Oracle HTTP Server is restarted and only the new server is up, update the WebLogicCluster directive to include the new server. For example:
<Location /osb> WLSRequest ON WebLogicCluster SOAHOST1:8011,SOAHOST2:8011,SOAHOST3:8011 WLProxySSL ON WLProxySSLPassThrough ON </Location>
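The WebLogicCluster value above can also be generated from a host list, which helps keep the directive consistent as more nodes are added later. A minimal sketch using the guide's example hosts and the OSB port:

```shell
# Sketch: build the WebLogicCluster value for the current set of cluster nodes.
# Host names and port are this guide's example values.
PORT=8011
HOSTS="SOAHOST1 SOAHOST2 SOAHOST3"
CLUSTER=""
for h in $HOSTS; do
  # Append host:port, separated by commas after the first entry.
  CLUSTER="${CLUSTER:+$CLUSTER,}$h:$PORT"
done
echo "WebLogicCluster $CLUSTER"
# prints: WebLogicCluster SOAHOST1:8011,SOAHOST2:8011,SOAHOST3:8011
```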
Parent topic: Scaling Out the Topology for Static Clusters
Verifying the Scale Out of Static Clusters
- Verify the correct routing to web applications.
For example:
- Access the application on the load balancer: soa.example.com/soa-infra
- Check that there is activity in the new server also: Go to Cluster > Deployments > soa-infra > Monitoring > Workload.
- You can also verify that the web sessions are created in the new server:
- Go to Cluster > Deployments.
- Expand soa-infra, click soa-infra Web application.
- Go to Monitoring to check the web sessions in each server.
You can use the sample URLs and the corresponding web applications that are identified in the following table, to check if the sessions are created in the new server for the cluster that you are scaling out:
| Cluster to Verify | Sample URL to Test | Web Application Module |
|---|---|---|
| WSM-PM_Cluster | http://soainternal.example.com/wsm-pm | wsm-pm > wsm-pm |
| SOA_Cluster | https://soa.example.com/soa-infra | soa-infra > soa-infra |
| ESS_Cluster | https://soa.example.com/ESSHealthCheck | ESSHealthCheck |
| OSB_Cluster | https://osb.example.com/sbinspection.wsil | Service Bus WSIL |
| MFT_Cluster | https://mft.example.com/mftconsole | mftconsole |
| BAM_Cluster | https://soa.example.com/bam/composer | BamComposer > /bam/composer |
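The sample URLs from the table can be checked in a loop. This is a sketch; the curl line is commented out so you can review the list before probing, and the host names are the guide's example values:

```shell
# Sketch: iterate over the sample verification URLs from the table above.
for url in \
  "http://soainternal.example.com/wsm-pm" \
  "https://soa.example.com/soa-infra" \
  "https://soa.example.com/ESSHealthCheck" \
  "https://osb.example.com/sbinspection.wsil" \
  "https://mft.example.com/mftconsole" \
  "https://soa.example.com/bam/composer"
do
  echo "would check: $url"
  # Uncomment to probe through the load balancer:
  # curl -k -s -o /dev/null -w "$url -> %{http_code}\n" "$url"
done
```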
- Verify that JMS messages are being produced to and consumed from the destinations in the three servers.
- Go to JMS Servers.
- Click JMS Server > Monitoring.
- Verify the service migration, as described in Validating Automatic Service Migration in Static Clusters.
Parent topic: Scaling Out the Topology for Static Clusters
Scaling Out the Topology for Dynamic Clusters
This section lists the prerequisites, explains the procedure to scale out the topology with dynamic clusters, describes the steps to verify the scale-out process, and finally the steps to scale down (shrink).
- Prerequisites for Scaling Out
- Scaling Out a Dynamic Cluster
- Verifying the Scale Out of Dynamic Clusters
Parent topic: Scaling Out the Topology
Prerequisites for Scaling Out
Before you perform a scale out of the topology, you must ensure that you meet the following requirements:
-
The starting point is a cluster with managed servers already running.
-
The new node can access the existing home directories for WebLogic Server and SOA. Use the existing installations in shared storage. You do not need to install WebLogic Server or SOA binaries in a new location. However, you do need to run the pack and unpack commands to bootstrap the domain configuration on the new node.
-
It is assumed that the cluster syntax is used for all internal RMI invocations, JMS adapter, and so on.
Parent topic: Scaling Out the Topology for Dynamic Clusters
Scaling Out a Dynamic Cluster
WLS_XYZn is the generic name given to the new managed server that you add to the cluster. Depending on the cluster that is being extended and the number of existing nodes, the actual names are WLS_SOA3, WLS_OSB3, WLS_ESS3, and so on.
To scale out the topology in a dynamic cluster, complete the following steps:
- On the new node, mount the existing shared volumes for FMW Home (NFS Volume1), shared config (NFS Volume 3), and runtime (NFS Volume 4), as described in Table 7-4.
- Locate the inventory in the shared directory (for example, /u01/oracle/products/oraInventory), per Oracle’s recommendation. You do not need to attach any home, but you may want to execute the script /u01/oracle/products/oraInventory/createCentralInventory.sh. This command creates and updates the local file /etc/oraInst.loc in the new node to point it to the oraInventory location. If there are other inventory locations in the new host, you can still use them, but the /etc/oraInst.loc file must be updated accordingly in each case.
- Update the /etc/hosts files to add the alias SOAHOSTn for the new node, as described in Verifying IP Addresses and Host Names in DNS or Hosts File. For example:
10.229.188.204 host1-vip.example.com host1-vip ADMINVHN 10.229.188.205 host1.example.com host1 SOAHOST1 10.229.188.206 host2.example.com host2 SOAHOST2 10.229.188.207 host3.example.com host3 WEBHOST1 10.229.188.208 host4.example.com host4 WEBHOST2 10.229.188.209 host5.example.com host5 SOAHOST3
- Configure a per host Node Manager in the new node, as described in Creating a Per Host Node Manager Configuration.
- Log in to the Oracle WebLogic Administration Console to create a new machine for the new node.
- Update the machine's Node Manager address to map the IP of the node that is being used for scale out.
- Use the Oracle WebLogic Server Administration Console to increase the dynamic cluster to include a new managed server:
- Click Lock & Edit.
- Go to Domain > Environment > Clusters.
- Select the cluster that you want to scale out.
- Go to Configuration > Servers.
- Set Dynamic Cluster Size to 3. By default, the cluster size is 2.
Note:
If you scale out to more than three servers, you must also update Number of Servers In Cluster Address, which is 3 by default. Although Oracle recommends that you use the cluster syntax for t3 calls, the cluster address is used if calling from external elements via t3, for EJB stubs, and so on.
- Sign in to SOAHOST1 and run the pack command to create a template pack as follows:
cd ORACLE_COMMON_HOME/common/bin ./pack.sh -managed=true -domain=ASERVER_HOME -template=/full_path/scaleout_domain.jar -template_name=scaleout_domain_template -log_priority=DEBUG -log=/tmp/pack.log
In this example:
- Replace ASERVER_HOME with the actual path to the domain directory that you created on the shared storage device.
- Replace full_path with the complete path to the location where you want to create the domain template jar file. You need to reference this location when you copy or unpack the domain template jar file. Oracle recommends that you choose a shared volume other than ORACLE_HOME, or write to /tmp/ and copy the files manually between servers. You must specify a full path for the template jar file as part of the -template argument to the pack command: SHARED_CONFIG_DIR/domains/template_filename.jar.
- scaleout_domain.jar is a sample name for the jar file that you are creating, which contains the domain configuration files.
- scaleout_domain_template is the label that is assigned to the template data stored in the template file.
- Run the unpack command on SOAHOSTn to unpack the template in the managed server domain directory, as follows:
cd ORACLE_COMMON_HOME/common/bin
./unpack.sh -domain=MSERVER_HOME -overwrite_domain=true -template=/full_path/scaleout_domain.jar -log_priority=DEBUG -log=/tmp/unpack.log -app_dir=APPLICATION_HOME
In this example:
- Replace MSERVER_HOME with the complete path to the domain home to be created on the local storage disk. This is the location where the copy of the domain is unpacked.
- Replace /full_path/scaleout_domain.jar with the complete path and file name of the domain template jar file that you created when you ran the pack command to pack up the domain on the shared storage device.
- Replace APPLICATION_HOME with the complete path to the Application directory for the domain on shared storage. See File System and Directory Variables Used in This Guide.
- When scaling out the SOA_Cluster:
  - If BPM Web Forms are used, update the startWebLogic.sh customizations for BPM to include the new node, as explained in Updating SOA BPM Servers for Web Forms.
  - Update the setDomain.sh to include appTrustKeyStore.jks, as explained in Adding the Updated Trust Store to the Oracle WebLogic Server Start Scripts.
- When scaling out OSB_Cluster:
- Restart Admin Server to see the new server in the Service Bus Dashboard.
- When scaling out MFT_Cluster:
- Default SFTP/FTP ports will be used in the new server. If you are not using the defaults, follow the steps described in Configuring the SFTP Ports to configure the ports in the SFTP server.
- Start Node Manager on the new host.
cd $NM_HOME
nohup ./startNodeManager.sh > ./nodemanager.out 2>&1 &
- Start the new managed server.
- Update the web tier configuration to include this new server:
- If using OTD, log in to Enterprise Manager and update the corresponding origin pool, as explained in Creating the Required Origin Server Pools to add the new server to the pool.
- If using OHS, there is no need to add the new server to OHS.
By default, the Dynamic Server List is used, which means that the list of servers in the cluster is automatically updated when a new node becomes part of the cluster, so adding the new node to the list is not mandatory. The WebLogicCluster directive needs only a sufficient number of redundant server:port combinations to guarantee initial contact in the case of a partial outage. If there are expected scenarios where the Oracle HTTP Server is restarted and only the new server would be up, update the WebLogicCluster directive to include the new server.
For example:
<Location /osb>
  WLSRequest ON
  WebLogicCluster SOAHOST1:8011,SOAHOST2:8012,SOAHOST3:8013
  WLProxySSL ON
  WLProxySSLPassThrough ON
</Location>
Parent topic: Scaling Out the Topology for Dynamic Clusters
Verifying the Scale Out of Dynamic Clusters
- Verify the correct routing to web applications.
For example:
- Access the application on the load balancer: soa.example.com/soa-infra
- Check that there is also activity in the new server: Go to Cluster > Deployments > soa-infra > Monitoring > Workload.
- You can also verify that the web sessions are created in the new server:
  - Go to Cluster > Deployments.
  - Expand soa-infra, and click the soa-infra web application.
  - Go to Monitoring to check the web sessions in each server.
  You can use the sample URLs and the corresponding web applications that are identified in the following table, to check if the sessions are created in the new server for the cluster that you are scaling out:
Cluster to Verify | Sample URL to Test | Web Application Module
WSM-PM_Cluster | http://soainternal.example.com/wsm-pm | wsm-pm > wsm-pm
SOA_Cluster | https://soa.example.com/soa-infra | soa-infra > soa-infra
ESS_Cluster | https://soa.example.com/ESSHealthCheck | ESSHealthCheck
OSB_Cluster | https://osb.example.com/sbinspection.wsil | Service Bus WSIL
MFT_Cluster | https://mft.example.com/mftconsole | mftconsole
BAM_Cluster | https://soa.example.com/bam/composer | BamComposer > /bam/composer
- Verify that JMS messages are being produced to and consumed from the destinations in all three servers.
- Go to JMS Servers.
- Click JMS Server > Monitoring.
- Verify the service migration, as described in Validating Automatic Service Migration in Dynamic Clusters.
Parent topic: Scaling Out the Topology for Dynamic Clusters
Scaling in the Topology
When you scale in the topology, you remove managed servers that were added to new hosts.
Parent topic: Scaling Procedures for an Enterprise Deployment
Scaling in the Topology for Static Clusters
- To scale in the cluster without any JMS data loss, perform the steps described in Managing the JMS Messages in a SOA Server:
  - To drain the messages, see Draining the JMS Messages from a SOA Server.
  - To import the messages into another member of the cluster, see Importing the JMS Messages into a SOA Server.
  After you complete the steps, continue with the scale-in procedure.
- Check for pending JTA transactions. Before you shut down the server, check whether there are any active JTA transactions in the server that you want to delete. Navigate to the WebLogic Console and click Environment > Servers > <server name> > Monitoring > JTA > Transactions.
Note:
If you have used the Shutdown Recovery policy for JTA, the transactions are recovered in another server after you shut down the server.
- Shut down the server by using the When work completes option.
Note:
This operation can take a long time if there are active HTTP sessions or long transactions in the server. For more information about graceful shutdown, see Using Server Life Cycle Commands in Administering Server Startup and Shutdown for Oracle WebLogic Server.
- Use the Oracle WebLogic Server Administration Console to delete the migratable target that is used by the server that you want to delete.
- Click Lock & Edit.
- Go to Domain > Environment > Cluster > Migratable Target.
- Select the migratable target that you want to delete.
- Click Delete.
- Click Yes.
- Click Activate Changes.
- Use the Oracle WebLogic Server Administration Console to delete the new server:
- Click Lock & Edit.
- Go to Domain > Environment > Servers.
- Select the server that you want to delete.
- Click Delete.
- Click Yes.
- Click Activate Changes.
Note:
If the migratable target was not deleted in the previous step, you get the following error message:
The following failures occurred: --MigratableTargetMBean WLS_SOA3_soa-failure-recovery (migratable) does not have a preferred server set. Errors must be corrected before proceeding.
- Use the Oracle WebLogic Server Administration Console to update the subdeployment of each JMS Module that is used by the cluster that you are shrinking.
Use the following table to identify the module for each cluster and perform this action for each module:
Cluster to Scale In | JMS Module | JMS Server to Delete from the Subdeployment
WSM-PM_Cluster | Not applicable | Not applicable
SOA_Cluster | UMSJMSSystemResource | UMSJMSServer_soa_scaled_3
SOA_Cluster | SOAJMSModule | SOAJMSServer_soa_scaled_3
SOA_Cluster | BPMJMSModule | BPMJMSServer_soa_scaled_3
ESS_Cluster | Not applicable | Not applicable
OSB_Cluster | UMSJMSSystemResource | UMSJMSServer_osb_scaled_3
OSB_Cluster | jmsResources (scope Global) | wlsbJMSServer_osb_scaled_3
BAM_Cluster | BamPersistenceJmsSystemModule | BamPersistenceJmsServer_bam_scaled_3
BAM_Cluster | BamReportCacheJmsSystemModule | BamReportCacheJmsServer_bam_scaled_3
BAM_Cluster | BamAlertEngineJmsSystemModule | BamAlertEngineJmsServer_bam_scaled_3
BAM_Cluster | BAMJMSSystemResource | BAMJMSServer_bam_scaled_3
BAM_Cluster | BamCQServiceJmsSystemModule | Not applicable (no subdeployment)
MFT_Cluster | MFTJMSModule | MFTJMSServer_mft_scaled_3
- Click Lock & Edit.
- Go to Domain > Services > Messaging > JMS Modules.
- Click the JMS module.
- Click subdeployment.
- Unselect the JMS server that was created for the deleted server.
- Click Save.
- Click Activate Changes.
- If you want to scale in a BAM cluster, use the Oracle WebLogic Server Administration Console to delete the local queues that were created for the new server:
  - Click Lock & Edit.
  - Go to WebLogic Console > Services > Messaging > JMS Modules.
  - Click BamCQServiceJmsSystemModule.
  - Delete the local queues that were created for the new server:
    - BamCQServiceAlertEngineQueue_auto_3
    - BamCQServiceReportCacheQueue_auto_3
  - Click Activate Changes.
- Use the Oracle WebLogic Server Administration Console to delete the JMS servers:
- Click Lock & Edit.
- Go to Domain > Services > Messaging > JMS Servers.
- Select the JMS Servers that you created for the new server.
- Click Delete.
- Click Yes.
- Click Activate Changes.
- Use the Oracle WebLogic Server Administration Console to delete the JMS persistent stores:
- Click Lock & Edit.
- Go to Domain > Services > Persistent Stores.
- Select the Persistent Stores that you created for the new server.
- Click Delete.
- Click Yes.
- Click Activate Changes.
- Update the web tier configuration to remove references to the new server.
Parent topic: Scaling in the Topology
Scaling in the Topology for Dynamic Clusters
- To scale in the cluster without any JMS data loss, perform the steps described in Managing the JMS Messages in a SOA Server:
  - To drain the messages, see Draining the JMS Messages from a SOA Server.
  - To import the messages into another member of the cluster, see Importing the JMS Messages into a SOA Server.
  After you complete the steps, continue with the scale-in procedure.
- Check for pending JTA transactions. Before you shut down the server, check whether there are any active JTA transactions in the server that you want to delete. Navigate to the WebLogic Console and click Environment > Servers > <server name> > Monitoring > JTA > Transactions.
Note:
If you have used the Shutdown Recovery policy for JTA, the transactions are recovered in another server after you shut down the server.
- Shut down the server by using the When work completes option.
Note:
-
This operation can take a long time if there are active HTTP sessions or long transactions in the server. For more information about graceful shutdown, see Using Server Life Cycle Commands in Administering Server Startup and Shutdown for Oracle WebLogic Server.
-
In dynamic clusters, the JMS servers that run in the server that you want to delete, and that use "Always" as the migration policy, are migrated to another member of the cluster at this point (because their server was just shut down). The next time you restart the member that hosts them, these JMS servers do not start, because their preferred server is no longer present in the cluster. However, you must check whether they receive any new messages during this interim period, because those messages could be lost. To preserve the messages, pause production and export the messages from these JMS servers before you restart any server in the cluster.
-
- Use the Oracle WebLogic Server Administration Console to reduce the dynamic cluster:
- Click Lock & Edit.
- Go to Domain > Environment > Clusters.
- Select the cluster that you want to scale in.
- Go to Configuration > Servers.
- Set the Dynamic Cluster size to 2.
- If you are using OSB, restart the Admin Server.
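As an alternative to the console steps above, the cluster resize can be scripted with WLST. The following is only a sketch under assumptions: the admin URL, the credentials, and the SOA_Cluster name are placeholders for your environment, and the script must be run with wlst.sh against a running Admin Server.

```python
# WLST sketch (placeholders: credentials, admin URL, cluster name).
# Connects to the Admin Server and reduces the dynamic cluster size to 2.
connect('weblogic_admin', 'admin_password', 't3://adminhost.example.com:7001')
edit()
startEdit()
# The DynamicServers MBean under the cluster controls the number of dynamic servers
cd('/Clusters/SOA_Cluster/DynamicServers/SOA_Cluster')
cmo.setDynamicClusterSize(2)
save()
activate()
disconnect()
```

If you are using OSB, restart the Admin Server after activating the change, as the procedure above indicates.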
Parent topic: Scaling in the Topology
Scaling Up the Topology
When you scale up the topology, you add new managed servers to the existing hosts.
This section describes the procedures to scale up the topology with static and dynamic clusters.
- Scaling Up the Topology for Static Clusters
This section lists the prerequisites, explains the procedure to scale up the topology with static clusters, describes the steps to verify the scale-up process, and finally the steps to scale down (shrink). - Scaling Up the Topology for Dynamic Clusters
This section lists the prerequisites, explains the procedure to scale up the topology with dynamic clusters, describes the steps to verify the scale-up process, and finally the steps to scale down (shrink).
Parent topic: Scaling Procedures for an Enterprise Deployment
Scaling Up the Topology for Static Clusters
This section lists the prerequisites, explains the procedure to scale up the topology with static clusters, describes the steps to verify the scale-up process, and finally the steps to scale down (shrink).
You already have a node that runs a managed server that is configured with Fusion Middleware components. The node contains a WebLogic Server home and an Oracle Fusion Middleware SOA home in shared storage. Use these existing installations and domain directories to create the new managed servers. You do not need to install the WLS or SOA binaries or to run the pack and unpack commands, because the new server runs on the existing node.
Parent topic: Scaling Up the Topology
Prerequisites for Scaling Up
Before you perform a scale up of the topology, you must ensure that you meet the following requirements:
-
The starting point is a cluster with managed servers already running.
-
It is assumed that the cluster syntax is used for all internal RMI invocations, JMS adapter, and so on.
Parent topic: Scaling Up the Topology for Static Clusters
Scaling Up a Static Cluster
Use the SOA EDG topology as a reference, with two application tier hosts (SOAHOST1 and SOAHOST2), each running one managed server of each cluster. The example explains how to add a third managed server to the cluster that runs in SOAHOST1. WLS_XYZn is the generic name given to the new managed server that you add to the cluster. Depending on the cluster that is being extended and the number of existing nodes, the actual names are WLS_SOA3, WLS_OSB3, WLS_ESS3, and so on.
The scale-up procedure does not require downtime if the Candidate Server lists are empty in the existing servers and migratable targets. Using empty candidate lists is the best practice because it means that all the servers in the cluster are candidates for migration.
If you have created your environment following the Enterprise Deployment Guide for release 12.2.1.4, these lists are empty out-of-the-box. When you add a new server to the cluster, the server is automatically considered for migration without the need to restart the existing servers.
If you decided to constrain the migration to only some specific servers in the cluster, your Candidate Server lists will not be empty. When you add a new server to the cluster, you may need to modify the lists to add the new server. In this case, you have to restart the existing nodes during the scale-up process.
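The effect of an empty versus a constrained Candidate Server list can be sketched in a few lines of Python (the server names here are illustrative only):

```python
# Illustrative sketch: an empty Candidate Server list means every member of the
# cluster is a migration candidate; a non-empty list constrains migration.
def migration_candidates(candidate_list, cluster_members):
    """Return the servers eligible to host a migrated service."""
    return cluster_members if not candidate_list else candidate_list

cluster = ["WLS_SOA1", "WLS_SOA2", "WLS_SOA3"]

# Empty list (the out-of-the-box EDG configuration): all servers are candidates,
# so the new WLS_SOA3 is considered automatically, without restarts.
print(migration_candidates([], cluster))
# ['WLS_SOA1', 'WLS_SOA2', 'WLS_SOA3']

# Constrained list: WLS_SOA3 must be added explicitly (and the existing
# servers restarted) before it is considered for migration.
print(migration_candidates(["WLS_SOA1", "WLS_SOA2"], cluster))
# ['WLS_SOA1', 'WLS_SOA2']
```

This is why keeping the lists empty is the best practice: the scale-up requires no restarts.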
To scale up the cluster, complete the following steps:
- Use the Oracle WebLogic Server Administration Console to clone the first managed server in the cluster into a new managed server.
- In the Change Center section, click Lock & Edit.
- Click Environment and select Servers.
- Select the first managed server in the cluster to scale up and click Clone.
- Use Table 22-8 to set the corresponding name, listen address, and listen port depending on the cluster that you want to scale up. Note that the default listen port is incremented by 1 to avoid binding conflicts with the managed server that is already created and running on the same host.
- Click the new managed server, select Configuration, and then select General.
- Click Save, and then click Activate Changes.
Table 22-8 List of Clusters that You Want to Scale Up
Cluster to Scale Up | Server to Clone | New Server Name | Server Listen Address | Server Listen Port
WSM-PM_Cluster | WLS_WSM1 | WLS_WSM3 | SOAHOST1 | 7011
SOA_Cluster | WLS_SOA1 | WLS_SOA3 | SOAHOST1 | 8002
ESS_Cluster | WLS_ESS1 | WLS_ESS3 | SOAHOST1 | 8022
OSB_Cluster | WLS_OSB1 | WLS_OSB3 | SOAHOST1 | 8012
BAM_Cluster | WLS_BAM1 | WLS_BAM3 | SOAHOST1 | 9002
MFT_Cluster | WLS_MFT1 | WLS_MFT3 | MFTHOST1 | 7501
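The port convention in Table 22-8 (new listen port = the cluster's default EDG listen port plus 1) can be sketched as follows. The base ports below are inferred from the table and may differ in your environment:

```python
# Sketch of the Table 22-8 port convention: each cloned server listens on the
# default EDG port of its cluster plus 1, to avoid a bind conflict with the
# managed server already running on the same host. Base ports are assumptions
# inferred from the table.
DEFAULT_PORTS = {
    "WSM-PM_Cluster": 7010,
    "SOA_Cluster": 8001,
    "ESS_Cluster": 8021,
    "OSB_Cluster": 8011,
    "BAM_Cluster": 9001,
    "MFT_Cluster": 7500,
}

def scaled_listen_port(cluster):
    """Listen port for the new (third) managed server of the given cluster."""
    return DEFAULT_PORTS[cluster] + 1

for cluster, base in DEFAULT_PORTS.items():
    print(cluster, base, "->", scaled_listen_port(cluster))
```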
- Update the deployment Staging Directory Name of the new server, as described in Modifying the Upload and Stage Directories to an Absolute Path.
- Create a new key certificate and update the private key alias of the server, as described in Enabling SSL Communication Between the Middle Tier and the Hardware Load Balancer.
- By default, the cloned server uses the default (file) store for TLOGs. If the rest of the servers in the cluster that you are scaling up use TLOGs in a JDBC persistent store, update the TLOG persistent store of the new managed server:
Use the following table to identify the clusters that use JDBC TLOGs by default:
Table 22-9 The Name of Clusters that Use JDBC TLOGs by Default
Cluster to Scale Up | New Server Name | TLOG Persistent Store
WSM-PM_Cluster | WLS_WSM3 | Default (file)
SOA_Cluster | WLS_SOA3 | JDBC
ESS_Cluster | WLS_ESS3 | Default (file)
OSB_Cluster | WLS_OSB3 | JDBC
BAM_Cluster | WLS_BAM3 | JDBC
MFT_Cluster | WLS_MFT3 | JDBC
Complete the following steps:
- Go to Environment and select Servers. From the list, select WLS_XYZn, click the Configuration tab, and then select the Services tab.
- Expand Advanced.
- Change Transaction Log Store to JDBC.
- Change Data Source to WLSSchemaDatasource.
- Click Save, and then click Activate Changes.
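These TLOG steps can also be scripted. The following WLST sketch is an illustration under assumptions: the server name WLS_SOA3, the data source name, the admin URL, and the credentials are placeholders, and the script must be run with wlst.sh against a running Admin Server.

```python
# WLST sketch (all names are placeholders): switch the new server's TLOG
# store from the default file store to the JDBC store backed by the
# WLSSchemaDataSource data source.
connect('weblogic_admin', 'admin_password', 't3://adminhost.example.com:7001')
edit()
startEdit()
cd('/Servers/WLS_SOA3/TransactionLogJDBCStore/WLS_SOA3')
cmo.setDataSource(getMBean('/JDBCSystemResources/WLSSchemaDataSource'))
cmo.setEnabled(true)
save()
activate()
disconnect()
```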
- If the cluster you are scaling up is configured for automatic service migration, update the JTA Migration Policy to the required value.
Use the following table to identify the clusters for which you have to update the JTA Migration Policy:
Table 22-10 The Recommended JTA Migration Policy for the Cluster to be Scaled Up
Cluster to Scale Up | New Server Name | JTA Migration Policy
WSM-PM_Cluster | WLS_WSM3 | Manual
SOA_Cluster | WLS_SOA3 | Failure Recovery
ESS_Cluster | WLS_ESS3 | Manual
OSB_Cluster | WLS_OSB3 | Failure Recovery
BAM_Cluster | WLS_BAM3 | Failure Recovery
MFT_Cluster | WLS_MFT3 | Failure Recovery
Complete the following steps:
- Go to Environment and select Servers. From the list of servers, select WLS_XYZn, click the Configuration tab, and then click the Migration tab.
- Use Table 22-10 to set the recommended JTA Migration Policy depending on the cluster that you want to scale up.
- Click Save, and then click Activate Changes.
- In the servers that already exist in the cluster, verify that the list of JTA candidate servers for JTA migration is empty:
- Click Environment and select Servers.
- From the Summary of Servers in the environment, select a server.
- Select the Configuration tab, and then click the Migration tab.
- Check the JTA Candidate Servers list and verify that the list is empty (an empty list indicates that all the servers in the cluster are JTA candidate servers). The list should be empty out-of-the-box so no changes are needed.
- If the server list is not empty, you should modify the list to make it blank. Or, if your list is not empty because you explicitly decided to constrain the migration to some specific servers only, modify it as per your preferences to accommodate the new server. Save and activate the changes. Restart the existing servers for this change to become effective.
- If the cluster you are scaling up is configured for automatic service migration, use the Oracle WebLogic Server Administration Console to update the automatically created WLS_XYZn (migratable) with the recommended migration policy, because by default it is set to Manual Service Migration Only.
Use the following table for the list of migratable targets to update:
Table 22-11 The Recommended Migratable Targets to Update
Cluster to Scale Up | Migratable Target to Update | Migration Policy
WSM-PM_Cluster | Not applicable | Not applicable
SOA_Cluster | WLS_SOA3 (migratable) | Auto-Migrate Failure Recovery Services
ESS_Cluster | Not applicable | Not applicable
OSB_Cluster | WLS_OSB3 (migratable) | Auto-Migrate Failure Recovery Services
BAM_Cluster | WLS_BAM3 (migratable) | Auto-Migrate Exactly-Once Services
MFT_Cluster | WLS_MFT3 (migratable) | Auto-Migrate Failure Recovery Services
- Go to Environment, select Clusters, and then click Migratable Servers.
- Click Lock and Edit.
- Click WLS_XYZ3 (migratable).
- Go to the Configuration tab and then Migration.
- Change the Service Migration Policy to the value listed in the table.
- Leave the Constrained Candidate Server list blank. If no servers are selected, you can migrate this migratable target to any server in the cluster.
- Click Save, and then click Activate Changes.
- For components that use multiple migratable targets, such as BAM, in addition to step 6, create another migratable target. BAM is used here as an example: use the Oracle WebLogic Server Administration Console to clone WLS_BAM3 (migratable) into a new migratable target.
- Go to Environment, select Clusters, and then select Migratable Servers.
- Click Lock and Edit.
- Click WLS_BAM3 (migratable) and click Clone.
- Name the new target as WLS_BAM3_bam-exactly-once (migratable).
- Click the new migratable server.
- Go to the Configuration tab and select Migration.
- If not set, change the Service Migration Policy to Auto-Migrate Exactly-Once Services.
- Leave the Constrained Candidate Server list blank. If no servers are selected, you can migrate this migratable target to any server in the cluster.
- Click Save, and then click Activate Changes.
- Verify that the Constrained Candidate Server list in the existing migratable servers in the cluster is empty. It should be empty out-of-the-box because the Configuration Wizard leaves it empty. An empty candidate list means that all the servers in the cluster are candidates, which is the best practice.
- Go to each migratable server.
- On the Configuration tab, click Migration and select Constrained Candidate Server.
- Ensure that server list is empty. It should be empty out-of-the-box.
- If the server list is not empty, you should modify the list to make it blank. Or, if your list is not empty because you explicitly decided to constrain the migration to some specific servers only, modify it as per your preferences to accommodate the new server. Save and activate the changes. Restart the existing servers for this change to become effective.
- Create the required persistent stores for the JMS servers.
- Sign in to WebLogic Console and go to Services and select Persistent Stores.
- Click New and select Create JDBCStore.
Use the following table to create the required persistent stores:
Note:
The numbers in the names and prefixes of the existing resources were assigned automatically by the Configuration Wizard during domain creation.
For example:
UMSJMSJDBCStore_auto_1 — soa_1
UMSJMSJDBCStore_auto_2 — soa_2
BPMJMSJDBCStore_auto_1 — soa_3
BPMJMSJDBCStore_auto_2 — soa_4
SOAJMSJDBCStore_auto_1 — soa_5
SOAJMSJDBCStore_auto_2 — soa_6
Review the existing prefixes and select a new and unique prefix and name for each new persistent store.
To avoid naming conflicts and simplify the configuration, new resources are qualified with the scaled tag and are shown here as an example.
Table 22-12 The New Resources Qualified with the Scaled Tag
Cluster to Scale Up | Persistent Store | Prefix Name | Data Source | Target
WSM-PM_Cluster | Not applicable | Not applicable | Not applicable | Not applicable
SOA_Cluster | UMSJMSJDBCStore_soa_scaled_3 | soaums_scaled_3 | WLSSchemaDataSource | WLS_SOA3 (migratable)
SOA_Cluster | SOAJMSJDBCStore_soa_scaled_3 | soajms_scaled_3 | WLSSchemaDataSource | WLS_SOA3 (migratable)
SOA_Cluster | BPMJMSJDBCStore_soa_scaled_3 | soabpm_scaled_3 | WLSSchemaDataSource | WLS_SOA3 (migratable)
ESS_Cluster | Not applicable | Not applicable | Not applicable | Not applicable
OSB_Cluster | UMSJMSJDBCStore_osb_scaled_3 | osbums_scaled_3 | WLSSchemaDataSource | WLS_OSB3 (migratable)
OSB_Cluster | OSBJMSJDBCStore_osb_scaled_3 | osbjms_scaled_3 | WLSSchemaDataSource | WLS_OSB3 (migratable)
BAM_Cluster | UMSJMSJDBCStore_bam_scaled_3 | bamums_scaled_3 | WLSSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
BAM_Cluster | BamPersistenceJmsJDBCStore_bam_scaled_3 | bamP_scaled_3 | WLSSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
BAM_Cluster | BamReportCacheJmsJDBCStore_bam_scaled_3 | bamR_scaled_3 | WLSSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
BAM_Cluster | BamAlertEngineJmsJDBCStore_bam_scaled_3 | bamA_scaled_3 | WLSSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
BAM_Cluster | BamJmsJDBCStore_bam_scaled_3 | bamjms_scaled_3 | WLSSchemaDataSource | WLS_BAM3_bam-exactly-once (migratable)
BAM_Cluster | BamCQServiceJmsJDBCStore_bam_scaled_3 | bamC_scaled_3 | WLSSchemaDataSource | WLS_BAM3*
MFT_Cluster | MFTJMSJDBCStore_mft_scaled_3 | mftjms_scaled_3 | WLSSchemaDataSource | WLS_MFT3 (migratable)
Note:
(*) BamCQServiceJmsServers host local queues for the BAM CQService (Continuous Query Engine) and are meant to be local. They are intentionally targeted to the WebLogic servers directly and not to the migratable targets.
- Create the required JMS Servers for the new managed server.
- Go to WebLogic Console, click Services, select Messaging, and then click JMS Servers.
- Click Lock and Edit.
- Click New.
Use the following table to create the required JMS Servers. Assign to each JMS Server the previously created persistent stores:
Note:
The numbers in the names of the existing resources were assigned automatically by the Configuration Wizard during domain creation. Review the existing JMS server names and select a new and unique name for each new JMS server. To avoid naming conflicts and simplify the configuration, new resources are qualified with the product_scaled_N tag and are shown here as an example.
Cluster to Scale Up | JMS Server Name | Persistent Store | Target
WSM-PM_Cluster | Not applicable | Not applicable | Not applicable
SOA_Cluster | UMSJMSServer_soa_scaled_3 | UMSJMSJDBCStore_soa_scaled_3 | WLS_SOA3 (migratable)
SOA_Cluster | SOAJMSServer_soa_scaled_3 | SOAJMSJDBCStore_soa_scaled_3 | WLS_SOA3 (migratable)
SOA_Cluster | BPMJMSServer_soa_scaled_3 | BPMJMSJDBCStore_soa_scaled_3 | WLS_SOA3 (migratable)
ESS_Cluster | Not applicable | Not applicable | Not applicable
OSB_Cluster | UMSJMSServer_osb_scaled_3 | UMSJMSJDBCStore_osb_scaled_3 | WLS_OSB3 (migratable)
OSB_Cluster | wlsbJMSServer_osb_scaled_3 | OSBJMSJDBCStore_osb_scaled_3 | WLS_OSB3 (migratable)
BAM_Cluster | UMSJMSServer_bam_scaled_3 | UMSJMSJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
BAM_Cluster | BamPersistenceJmsServer_bam_scaled_3 | BamPersistenceJmsJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
BAM_Cluster | BamReportCacheJmsServer_bam_scaled_3 | BamReportCacheJmsJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
BAM_Cluster | BamAlertEngineJmsServer_bam_scaled_3 | BamAlertEngineJmsJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
BAM_Cluster | BAMJMSServer_bam_scaled_3 | BamJmsJDBCStore_bam_scaled_3 | WLS_BAM3_bam-exactly-once (migratable)
BAM_Cluster | BamCQServiceJmsServer_bam_scaled_3 | BamCQServiceJmsJDBCStore_bam_scaled_3 | WLS_BAM3*
MFT_Cluster | MFTJMSServer_mft_scaled_3 | MFTJMSJDBCStore_mft_scaled_3 | WLS_MFT3 (migratable)
Note:
(*) BamCQServiceJmsServers host local queues for the BAM CQService (Continuous Query Engine) and are meant to be local. They are intentionally targeted to the WebLogic servers directly and not to the migratable targets.
- Update the SubDeployment targets for the JMS modules (if applicable) to include the recently created JMS servers.
- Expand Services, click Messaging, and then select JMS Modules.
- Select a JMS module. For example: BPMJMSModule.
  Use the following table to identify the JMS modules to update depending on the cluster that you are scaling up:
Cluster to Scale Up | JMS Module to Update | JMS Server to Add to the Subdeployment
WSM-PM_Cluster | Not applicable | Not applicable
SOA_Cluster | UMSJMSSystemResource * | UMSJMSServer_soa_scaled_3
SOA_Cluster | SOAJMSModule | SOAJMSServer_soa_scaled_3
SOA_Cluster | BPMJMSModule | BPMJMSServer_soa_scaled_3
ESS_Cluster | Not applicable | Not applicable
OSB_Cluster | UMSJMSSystemResource * | UMSJMSServer_osb_scaled_3
OSB_Cluster | jmsResources (scope Global) | wlsbJMSServer_osb_scaled_3
BAM_Cluster | BamPersistenceJmsSystemModule | BamPersistenceJmsServer_bam_scaled_3
BAM_Cluster | BamReportCacheJmsSystemModule | BamReportCacheJmsServer_bam_scaled_3
BAM_Cluster | BamAlertEngineJmsSystemModule | BamAlertEngineJmsServer_bam_scaled_3
BAM_Cluster | BAMJMSSystemResource | BAMJMSServer_bam_scaled_3
BAM_Cluster | BamCQServiceJmsSystemModule | Not applicable (no subdeployment)
BAM_Cluster | UMSJMSSystemResource * | UMSJMSServer_bam_scaled_3 *
MFT_Cluster | MFTJMSModule | MFTJMSServer_mft_scaled_3
(*) Some modules (UMSJMSSystemResource, ProcMonJMSModule) may be targeted to more than one cluster. Ensure that you update the appropriate subdeployment in each case.
- Go to Configuration and select Subdeployment.
- Add the corresponding JMS Server to the existing subdeployment.
Note:
The Subdeployment module name is a random name in the form of SOAJMSServerXXXXXX, UMSJMSServerXXXXXX, or BPMJMSServerXXXXXX, resulting from the Configuration Wizard JMS configuration for the first two servers (WLS_SOA1 and WLS_SOA2).
- Click Save, and then click Activate Changes.
- If you are scaling up a BAM cluster, you must create local queues for the new server in the BamCQServiceJmsSystemModule module. Follow these steps to create them:
  - Go to WebLogic Console, select Services, click Messaging, and then select JMS Modules.
- Click Lock & Edit.
- Click BamCQServiceJmsSystemModule.
- Click Targets.
- Add WLS_BAM3 to the targets and click Save.
- Click New.
- Select Queue and click Next.
- Name it BamCQServiceAlertEngineQueue_auto_3, and click Next.
- Create a new subdeployment with the target BamCQServiceJmsServer_bam_scaled_3 and select it for the queue.
- Click Finish.
- Click the newly created queue BamCQServiceAlertEngineQueue_auto_3.
- Go to Configuration, select General, and then click Advanced.
- Set Local JNDI Name to queue/oracle.beam.cqservice.mdbs.alertengine.
- Click Save.
- Repeat these steps to create the other queue, BamCQServiceReportCacheQueue_auto_3, with the information in Table 22-13.
- After you finish, you have the two new local queues that are created for the new server with the information in Table 22-13.
Table 22-13 Information to Create the Local Queues
Name | Type | Local JNDI Name | Subdeployment
BamCQServiceAlertEngineQueue_auto_3 | Queue | queue/oracle.beam.cqservice.mdbs.alertengine | BamCQServiceJmsServer_auto_3
BamCQServiceReportCacheQueue_auto_3 | Queue | queue/oracle.beam.cqservice.mdbs.reportcache | BamCQServiceJmsServer_auto_3
- Click Activate Changes.
- Start the new managed server.
- When scaling up the MFT_Cluster:
Default SFTP/FTP ports are used in the new server. If you are not using the defaults, follow the steps described in Configuring the SFTP Ports to configure the ports in the SFTP server. When scaling up, use different SFTP/FTP ports for the new server that do not conflict with the existing server on the same machine.
- Update the web tier configuration to include this new server:
-
If you are using OTD, log in to Enterprise Manager and update the corresponding origin pool as explained in Creating the Required Origin Server Pools to add the new server to the pool.
-
If you are using OHS, there is no need to add the new server to OHS. By default, the Dynamic Server List is used, which means that the list of servers in the cluster is automatically updated when a new node becomes part of the cluster, so adding the new server to the list is not mandatory. The WebLogicCluster directive needs only a sufficient number of redundant server:port combinations to guarantee initial contact in the case of a partial outage. If there are expected scenarios where the Oracle HTTP Server is restarted and only the new server would be up, update the WebLogicCluster directive to include the new server.
<Location /osb>
  WLSRequest ON
  WebLogicCluster SOAHOST1:8011,SOAHOST2:8012,SOAHOST3:8013
  WLProxySSL ON
  WLProxySSLPassThrough ON
</Location>
Parent topic: Scaling Up the Topology for Static Clusters
Verifying the Scale Up of Static Clusters
- Verify the correct routing to web applications.
For example:
- Access the application on the load balancer: soa.example.com/soa-infra
- Check that there is also activity in the new server: Go to Cluster > Deployments > soa-infra > Monitoring > Workload.
- You can also verify that the web sessions are created in the new server:
  - Go to Cluster > Deployments.
  - Expand soa-infra, and click the soa-infra web application.
  - Go to Monitoring to check the web sessions in each server.
  You can use the sample URLs and the corresponding web applications that are identified in the following table, to check if the sessions are created in the new server for the cluster that you are scaling out:
Cluster to Verify | Sample URL to Test | Web Application Module
WSM-PM_Cluster | http://soainternal.example.com/wsm-pm | wsm-pm > wsm-pm
SOA_Cluster | https://soa.example.com/soa-infra | soa-infra > soa-infra
ESS_Cluster | https://soa.example.com/ESSHealthCheck | ESSHealthCheck
OSB_Cluster | https://osb.example.com/sbinspection.wsil | Service Bus WSIL
MFT_Cluster | https://mft.example.com/mftconsole | mftconsole
BAM_Cluster | https://soa.example.com/bam/composer | BamComposer > /bam/composer
- Verify that JMS messages are being produced to and consumed from the destinations in all three servers.
- Go to JMS Servers.
- Click JMS Server > Monitoring.
- Verify the service migration, as described in Validating Automatic Service Migration in Static Clusters.
Parent topic: Scaling Up the Topology for Static Clusters
Scaling Up the Topology for Dynamic Clusters
This section lists the prerequisites, explains the procedure to scale up the topology with dynamic clusters, describes the steps to verify the scale-up process, and finally the steps to scale down (shrink).
You already have a node that runs a managed server that is configured with Fusion Middleware components. The node contains a WebLogic Server home and an Oracle Fusion Middleware SOA home in shared storage. Use these existing installations and domain directories to create the new managed servers. You do not need to install the WLS or SOA binaries or to run the pack and unpack commands, because the new server runs on the existing node.
- Prerequisites for Scaling Up
- Scaling Up a Dynamic Cluster
- Verifying the Scale Up of Dynamic Clusters
Parent topic: Scaling Up the Topology
Prerequisites for Scaling Up
Before performing a scale up of the topology, you must ensure that you meet the following prerequisites:
-
The starting point is a cluster with managed servers already running.
-
It is assumed that the cluster syntax is used for all internal RMI invocations, the JMS adapter, and so on.
Parent topic: Scaling Up the Topology for Dynamic Clusters
Scaling Up a Dynamic Cluster
Use the SOA EDG topology as a reference, with two application tier hosts (SOAHOST1 and SOAHOST2), each running one managed server of each cluster. The example explains how to add a third managed server to the cluster that runs on SOAHOST1. WLS_XYZn is the generic name given to the new managed server that you add to the cluster. Depending on the cluster that is being extended and the number of existing nodes, the actual names are WLS_SOA3, WLS_OSB3, WLS_ESS4, and so on.
To scale up the cluster, complete the following steps:
- When you scale up, you do not need to add a new machine to the domain, because the new server is added to an existing machine.
If the CalculatedMachineNames attribute is set to true, then the MachineNameMatchExpression attribute is used to select the set of machines used for the dynamic servers. Assignments are made by using a round-robin algorithm.
The following table lists examples of machine assignments in a dynamic cluster.

Table 22-14 Examples of machine assignments in a dynamic cluster

| Machines in Domain | MachineNameMatchExpression Configuration | Dynamic Server Machine Assignments |
| --- | --- | --- |
| SOAHOST1, SOAHOST2 | SOAHOST* | dyn-server-1: SOAHOST1; dyn-server-2: SOAHOST2; dyn-server-3: SOAHOST1; dyn-server-4: SOAHOST2; ... |
| SOAHOST1, SOAHOST2, SOAHOST3 | SOAHOST* | dyn-server-1: SOAHOST1; dyn-server-2: SOAHOST2; dyn-server-3: SOAHOST3; dyn-server-4: SOAHOST1; ... |
See https://docs.oracle.com/middleware/1212/wls/CLUST/dynamic_clusters.htm#CLUST678.
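The round-robin assignment shown in Table 22-14 can be modeled in a few lines of code. This is a hypothetical sketch, not WebLogic code; it only reproduces the assignment logic for illustration.

```python
# Hypothetical model (not WebLogic code) of the round-robin machine
# assignment used when CalculatedMachineNames is true and
# MachineNameMatchExpression selects the eligible machines.
import fnmatch

def assign_machines(machines, match_expression, server_count):
    """Map dyn-server-N names to machines, round-robin over the matches."""
    eligible = [m for m in machines if fnmatch.fnmatch(m, match_expression)]
    return {"dyn-server-%d" % (i + 1): eligible[i % len(eligible)]
            for i in range(server_count)}

print(assign_machines(["SOAHOST1", "SOAHOST2"], "SOAHOST*", 3))
# {'dyn-server-1': 'SOAHOST1', 'dyn-server-2': 'SOAHOST2', 'dyn-server-3': 'SOAHOST1'}
```

With three machines in the domain, the same expression spreads dyn-server-4 back onto SOAHOST1, matching the second row of the table.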
- If you are using SOAHOST{$id} as the listen address in the template, update the /etc/hosts files to add the alias SOAHOSTn for the new node, as described in Verifying IP Addresses and Host Names in DNS or Hosts File. The new server WLS_XYZn listens on SOAHOSTn. This alias must resolve to the IP address of the system host where the new managed server runs. See Table 22-14.
Example:
10.229.188.204 host1-vip.example.com host1-vip ADMINVHN
10.229.188.205 host1.example.com host1 SOAHOST1 SOAHOST3
10.229.188.206 host2.example.com host2 SOAHOST2
10.229.188.207 host3.example.com host3 WEBHOST1
10.229.188.208 host4.example.com host4 WEBHOST2
If you are using the machine name macro ${machineName} in the listen address of the template, the new server WLS_XYZn listens on the address of the SOAHOSTn machine. In this case, you do not need to add aliases to the /etc/hosts file when you scale up the dynamic cluster. See Configuring Listen Addresses in Dynamic Cluster Server Templates.
- Use the Oracle WebLogic Server Administration Console to increase the dynamic cluster to include a new managed server:
- Click Lock & Edit.
- Go to Domain > Environment > Clusters.
- Select the cluster that you want to scale up.
- Go to Configuration > Servers.
- Set Dynamic Cluster Size to 3. By default, the cluster size is 2.
- Click Save, and then click Activate Changes.
Note:
If you scale up to more than three servers, you must also update Number of servers in cluster Address, which is 3 by default. Although Oracle recommends that you use the cluster syntax for t3 calls, the cluster address is used when external elements call in through t3, for EJB stubs, and so on.
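The /etc/hosts aliasing described above can be sanity-checked offline. This is a hypothetical sketch, not part of any Oracle tooling; it parses hosts-file entries and returns the IP address that an alias resolves to.

```python
# Hypothetical helper: find the IP address that a hosts-file alias maps to.
def find_alias_ip(hosts_content, alias):
    for line in hosts_content.splitlines():
        entry = line.split("#")[0].split()  # drop comments, tokenize
        if entry and alias in entry[1:]:
            return entry[0]
    return None

# Excerpt from the example above: SOAHOST3 is an alias of host1.
HOSTS = """\
10.229.188.205 host1.example.com host1 SOAHOST1 SOAHOST3
10.229.188.206 host2.example.com host2 SOAHOST2
"""
print(find_alias_ip(HOSTS, "SOAHOST3"))  # 10.229.188.205
```

On the hosts themselves, `getent hosts SOAHOST3` gives the same answer using the system resolver.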
- When scaling up the SOA_Cluster: If BPM Web Forms are used, update the startWebLogic.sh customizations for BPM in MSERVER_HOME to include the new node, as described in Updating SOA BPM Servers for Web Forms.
- When scaling up the OSB_Cluster: Restart the Administration Server to view the new server in the Service Bus Dashboard.
- When scaling up the MFT_Cluster: The default SFTP/FTP ports are used in the new server. If you are not using the default values, follow the steps described in Configuring the SFTP Ports to configure the ports in the SFTP server.
When you scale up, use different SFTP/FTP ports for the new server so that they do not conflict with the existing server on the same machine.
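Before choosing non-default SFTP/FTP ports for the new server, you can verify that a candidate port is not already taken on the machine. This is a minimal, hypothetical sketch, not part of the MFT tooling.

```python
# Hypothetical port check (not part of MFT): attempt a local bind to see
# whether anything is already listening on the candidate port.
import socket

def port_is_free(port, host="127.0.0.1"):
    """Best-effort, point-in-time check: True if host:port can be bound now."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False
```

This is only a point-in-time check; the authoritative port configuration is the one described in Configuring the SFTP Ports.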
- Update the web tier configuration to include this new server:
-
If you are using OTD, login to Enterprise Manager and update the corresponding origin pool as explained in Creating the Required Origin Server Pools to add the new server to the pool.
-
If you are using OHS, there is no need to add the new server to OHS. By default, the Dynamic Server List is used, which means that the list of servers in the cluster is automatically updated when a new node becomes part of the cluster, so adding it to the list is not mandatory. The WebLogicCluster directive needs only a sufficient number of redundant server:port combinations to guarantee initial contact in case of a partial outage. If there are expected scenarios in which Oracle HTTP Server is restarted and only the new server is up, update the WebLogicCluster directive to include the new server.
For example:
<Location /osb>
  WLSRequest ON
  WebLogicCluster SOAHOST1:8011,SOAHOST2:8012,SOAHOST3:8013
  WLProxySSL ON
  WLProxySSLPassThrough ON
</Location>
-
- Start the new managed server from the Oracle WebLogic Server Administration Console.
- Verify that the newly created managed server is running.
Parent topic: Scaling Up the Topology for Dynamic Clusters
Verifying the Scale Up of Dynamic Clusters
- Verify the correct routing to web applications.
For example:
- Access the application on the load balancer: soa.example.com/soa-infra
- Check that there is activity in the new server also: Go to Cluster > Deployments > soa-infra > Monitoring > Workload.
- You can also verify that the web sessions are created in the new server:
-
Go to Cluster > Deployments.
-
Expand soa-infra, and then click the soa-infra web application.
-
Go to Monitoring to check the web sessions in each server.
You can use the sample URLs and the corresponding web application modules in the following table to check whether sessions are created in the new server for the cluster that you are scaling up:
| Cluster to Verify | Sample URL to Test | Web Application Module |
| --- | --- | --- |
| WSM-PM_Cluster | http://soainternal.example.com/wsm-pm | wsm-pm > wsm-pm |
| SOA_Cluster | https://soa.example.com/soa-infra | soa-infra > soa-infra |
| ESS_Cluster | https://soa.example.com/ESSHealthCheck | ESSHealthCheck |
| OSB_Cluster | https://osb.example.com/sbinspection.wsil | Service Bus WSIL |
| MFT_Cluster | https://mft.example.com/mftconsole | mftconsole |
| BAM_Cluster | https://soa.example.com/bam/composer | BamComposer > /bam/composer |
- Verify that JMS messages are being produced to and consumed from the destinations in all three servers.
- Go to JMS Servers.
- Click JMS Server > Monitoring.
- Verify the service migration, as described in Configuring Automatic Service Migration for Dynamic Clusters.
Parent topic: Scaling Up the Topology for Dynamic Clusters
Scaling Down the Topology
When you scale down the topology, you remove the managed servers that were added to the existing hosts.
Parent topic: Scaling Procedures for an Enterprise Deployment
Scaling Down the Topology for Static Clusters
- To scale down the cluster without any JMS data loss, perform the steps described in Managing the JMS Messages in a SOA Server:
-
To drain the messages, see Draining the JMS Messages from a SOA Server.
-
To import the messages into another member of the cluster, see Importing the JMS Messages into a SOA Server.
After you complete the steps, continue with the scale-down procedure.
-
- Check the pending JTA transactions. Before you shut down the server, check whether there are any active JTA transactions on the server that you want to delete. Navigate to the WebLogic Server Administration Console and click Environment > Servers > <server name> > Monitoring > JTA > Transactions.
Note:
If you have used the Shutdown Recovery policy for JTA, the transactions are recovered in another server after you shut down the server.
- Shut down the server by using the When work completes option.
Note:
This operation can take a long time if there are active HTTP sessions or long transactions in the server. For more information about graceful shutdown, see Using Server Life Cycle Commands in Administering Server Startup and Shutdown for Oracle WebLogic Server.
- Use the Oracle WebLogic Server Administration Console to delete the migratable target that is used by the server that you want to delete.
- Click Lock & Edit.
- Go to Domain > Environment > Cluster > Migratable Target.
- Select the migratable target that you want to delete.
- Click Delete.
- Click Yes.
- Click Activate Changes.
- Use the Oracle WebLogic Server Administration Console to delete the new server:
- Click Lock & Edit.
- Go to Domain > Environment > Servers.
- Select the server that you want to delete.
- Click Delete.
- Click Yes.
- Click Activate Changes.
Note:
If the migratable target was not deleted in the previous step, you get the following error message:
The following failures occurred: --MigratableTargetMBean WLS_SOA3_soa-failure-recovery (migratable) does not have a preferred server set. Errors must be corrected before proceeding.
- Use the Oracle WebLogic Server Administration Console to update the subdeployment of each JMS Module that is used by the cluster you are shrinking.
Use the following table to identify the module for each cluster and perform this action for each module:
| Cluster to Scale Down | JMS Module | JMS Server to Delete from the Subdeployment |
| --- | --- | --- |
| WSM-PM_Cluster | Not applicable | Not applicable |
| SOA_Cluster | UMSJMSSystemResource | UMSJMSServer_soa_scaled_3 |
| | SOAJMSModule | SOAJMSServer_soa_scaled_3 |
| | BPMJMSModule | BPMJMSServer_soa_scaled_3 |
| ESS_Cluster | Not applicable | Not applicable |
| OSB_Cluster | UMSJMSSystemResource | UMSJMSServer_osb_scaled_3 |
| | jmsResources (scope Global) | wlsbJMSServer_osb_scaled_3 |
| BAM_Cluster | BamPersistenceJmsSystemModule | BamPersistenceJmsServer_bam_scaled_3 |
| | BamReportCacheJmsSystemModule | BamReportCacheJmsServer_bam_scaled_3 |
| | BamAlertEngineJmsSystemModule | BamAlertEngineJmsServer_bam_scaled_3 |
| | BAMJMSSystemResource | BAMJMSServer_bam_scaled_3 |
| | BamCQServiceJmsSystemModule | Not applicable (no subdeployment) |
| MFT_Cluster | MFTJMSModule | MFTJMSServer_mft_scaled_3 |
- Click Lock & Edit.
- Go to Domain > Services > Messaging > JMS Modules.
- Click the JMS module.
- Click subdeployment.
- Unselect the JMS server that was created for the deleted server.
- Click Save.
- Click Activate Changes.
- If you are scaling down a BAM cluster, use the Oracle WebLogic Server Administration Console to delete the local queues that were created for the new server:
- Click Lock & Edit.
- Go to WebLogic Console > Services > Messaging > JMS Modules.
- Click BamCQServiceJmsSystemModule.
- Delete the local queues that were created for the new server:
-
BamCQServiceAlertEngineQueue_auto_3
-
BamCQServiceReportCacheQueue_auto_3
-
- Click Activate Changes.
- Use the Oracle WebLogic Server Administration Console to delete the JMS servers:
- Click Lock & Edit.
- Go to Domain > Services > Messaging > JMS Servers.
- Select the JMS Servers that you created for the new server.
- Click Delete.
- Click Yes.
- Click Activate Changes.
- Use the Oracle WebLogic Server Administration Console to delete the JMS persistent stores:
- Click Lock & Edit.
- Go to Domain > Services > Persistent Stores.
- Select the Persistent Stores that you created for the new server.
- Click Delete.
- Click Yes.
- Click Activate Changes.
- Update the Web tier configuration to remove references to the new server.
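The cluster-to-JMS-server mapping in the table above can be captured as a small lookup when scripting this cleanup. This is a hypothetical sketch; the names follow the table, where the `_scaled_3` suffix corresponds to the third server that is being removed.

```python
# Hypothetical lookup (not Oracle tooling) of the JMS servers to remove
# from each subdeployment when scaling down the third server of a cluster,
# as listed in the table above.
JMS_SERVERS_TO_DELETE = {
    "WSM-PM_Cluster": [],  # not applicable
    "SOA_Cluster": ["UMSJMSServer_soa_scaled_3",
                    "SOAJMSServer_soa_scaled_3",
                    "BPMJMSServer_soa_scaled_3"],
    "ESS_Cluster": [],     # not applicable
    "OSB_Cluster": ["UMSJMSServer_osb_scaled_3",
                    "wlsbJMSServer_osb_scaled_3"],
    "BAM_Cluster": ["BamPersistenceJmsServer_bam_scaled_3",
                    "BamReportCacheJmsServer_bam_scaled_3",
                    "BamAlertEngineJmsServer_bam_scaled_3",
                    "BAMJMSServer_bam_scaled_3"],
    "MFT_Cluster": ["MFTJMSServer_mft_scaled_3"],
}

def jms_servers_for(cluster):
    """Return the JMS server names to delete for the given cluster."""
    return JMS_SERVERS_TO_DELETE.get(cluster, [])
```

A scripted cleanup (for example, through WLST) could iterate over `jms_servers_for(cluster)` to drive the subdeployment, JMS server, and persistent store deletions described in the steps above.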
Parent topic: Scaling Down the Topology
Scaling Down the Topology in a Dynamic Cluster
- To scale down the cluster without any JMS data loss, perform the steps described in Managing the JMS Messages in a SOA Server:
-
To drain the messages, see Draining the JMS Messages from a SOA Server.
-
To import the messages into another member of the cluster, see Importing the JMS Messages into a SOA Server.
After you complete the steps, continue with the scale-down procedure.
-
- Check the pending JTA transactions. Before you shut down the server, check whether there are any active JTA transactions on the server that you want to delete. Navigate to the WebLogic Server Administration Console and click Environment > Servers > <server name> > Monitoring > JTA > Transactions.
Note:
If you have used the Shutdown Recovery policy for JTA, the transactions are recovered in another server after you shut down the server.
- Shut down the server by using the When work completes option.
Note:
-
This operation can take a long time if there are active HTTP sessions or long transactions in the server. For more information about graceful shutdown, see Using Server Life Cycle Commands in Administering Server Startup and Shutdown for Oracle WebLogic Server.
-
In dynamic clusters, the JMS servers that run on the server you want to delete, and that use Always as the migration policy, are migrated to another member of the cluster at this point (because their server was just shut down). The next time you restart the member that hosts them, these JMS servers do not start, because their preferred server is no longer in the cluster. However, check whether they receive any new messages during this interim period, because those messages could be lost. To preserve the messages, pause production and export the messages from these JMS servers before you restart any server in the cluster.
-
- Use the Oracle WebLogic Server Administration Console to reduce the dynamic cluster:
- Click Lock & Edit.
- Go to Domain > Environment > Clusters.
- Select the cluster that you want to scale down.
- Go to Configuration > Servers.
- Set the Dynamic Cluster Size back to 2.
Parent topic: Scaling Down the Topology