This chapter describes some operations that you can perform after you have set up the topology. These operations involve migrating, scaling, and backing up your topology.
This chapter contains the following sections:
Section 9.4, "Connecting Two Subnets Used by Different Departments"
Section 9.5, "Scaling Out the Topology - Adding Managed Servers to New Compute Nodes"
Section 9.6, "Scaling Down the Topology: Deleting Managed Servers"
Section 9.8, "Patching Oracle Software and Updating Firmware in Oracle Exalogic Environment"
If you are an Oracle Solaris user, read Section 3.1, "Important Notes for Oracle Solaris Users" before you complete the procedures described in this chapter.
The following are the prerequisites for managing your enterprise deployment:
Preconfigure the database and environment, as described in Section 3, "Network, Storage, and Database Preconfiguration".
Ensure that you have created and configured the Managed Servers in a cluster for ComputeNode1 and ComputeNode2, as described in Section 5, "Configuring Oracle Fusion Middleware".
Ensure that Node Manager is configured on ComputeNode1 and ComputeNode2, as described in Section 5.7, "Configuring Java Node Manager".
Ensure that you have activated the Exalogic-specific features and enhancements in Oracle WebLogic, as described in Chapter 7, "Enabling Exalogic-Specific Enhancements in Oracle WebLogic Server 11g Release 1 (10.3.4)".
Ensure that your Administration Server is up and running. To start the Administration Server, see Section 5.6, "Starting the Administration Server on ComputeNode1".
Server migration is required for applications that have critical data, such as persistent JMS or transaction logs, that needs to be recovered quickly in the event of a failure.
In this enterprise deployment topology, the sample application dizzyworld.ear (see Chapter 8, "Deploying a Sample Web Application to an Oracle WebLogic Cluster") does not leverage JMS or JTA features, so server migration is not required for this sample application. If your applications do require persistent JMS or JTA features, then you may implement server migration.
Managed Servers running on ComputeNode1 are configured to restart on ComputeNode2 when a failure occurs. Similarly, Managed Servers running on ComputeNode2 are configured to restart on ComputeNode1 if a failure occurs. For this configuration, the Managed Servers running on ComputeNode1 and ComputeNode2 listen on specific floating IPs that are failed over by Oracle WebLogic Server Migration. For more information, see Section 3.3.3, "Enterprise Deployment Network Configuration".
Configuring server migration for Oracle Exalogic Managed Servers involves the following steps:
Setting Up a User and Tablespace for the Server Migration Leasing Table
Editing the Node Manager's Properties File
Setting Environment and Superuser Privileges for the wlsifconfig.sh Script
Configuring Server Migration Targets
Testing the Server Migration
The following are the prerequisites for configuring server migration:
Ensure that the Grid Link Data Sources are created for ComputeNode1 and ComputeNode2, as described in Section 7.6, "Configuring Grid Link Data Source for Dept1_Cluster1".
Complete the mandatory patch requirements.
When leveraging WebLogic Server network channels that map to multiple network interfaces with the WebLogic Server migration framework, review the Knowledge Base article (Title: Oracle Exalogic Elastic Cloud 11g R1 - Known Issues, Doc ID: 1268557.1) on My Oracle Support. To access the article:
Enter the My Oracle Support URL (https://support.oracle.com/) in a Web browser.
Click Sign In and enter your My Oracle Support username and password.
In the Search Knowledge Base box, search for Doc ID 1268557.1, which is the document ID of the article related to this issue.
Set up a user and tablespace for the server migration leasing table as follows:
Ensure that the database connectivity for your Oracle Exalogic Machine is established, as described in Section 3.5.2, "Connecting to Oracle Database Over Ethernet". Alternatively, if you are connecting your Oracle Exalogic machine to Oracle Exadata Database Machine via InfiniBand, verify the database connectivity and access.
Create a tablespace named leasing. For example, log on to SQL*Plus as the sysdba user and run the following command:
SQL> create tablespace leasing logging datafile 'DB_HOME/oradata/orcl/leasing.dbf' size 32m autoextend on next 32m maxsize 2048m extent management local;
Note:
Creating a tablespace is optional.
Create a user named leasing and assign to it the leasing tablespace.
SQL> create user leasing identified by welcome1;
SQL> grant create table to leasing;
SQL> grant create session to leasing;
SQL> alter user leasing default tablespace leasing;
SQL> alter user leasing quota unlimited on LEASING;
Log in as the leasing user, and create the leasing table using the leasing.ddl script.
Copy the leasing.ddl file located in the /u01/app/FMW_Product1/Oracle/Middleware/wlserver_10.3/server/db2/oracle/920 directory to your database node.
Connect to the database as the leasing user.
Run the leasing.ddl script in SQL*Plus.
SQL> @copy_location/leasing.ddl;
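Optionally, verify that the leasing table was created. The following check is a sketch that assumes the default table name (ACTIVE) created by the leasing.ddl script and the example leasing credentials shown above:
SQL> connect leasing/welcome1
SQL> describe ACTIVE;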
Ensure that you have created the GridLink Data Source, as described in Section 7.6.2, "Creating a GridLink Data Source on Dept1_Cluster1".
You must complete this task for the Node Managers on both ComputeNode1 and ComputeNode2, where server migration is being configured. The nodemanager.properties file is located in the /u01/Dept_1/admin/el01cn01/nodemanager directory on ComputeNode1 and in the /u01/Dept_1/admin/el01cn02/nodemanager directory on ComputeNode2. In each file, set the following properties:
Note:
Ensure that you edit the nodemanager.properties files on both ComputeNode1 and ComputeNode2.
bond0=10.0.0.1-10.0.0.17,NetMask=255.255.255.224
bond1=10.1.0.1-10.1.0.17,NetMask=255.255.255.224
UseMACBroadcast=true
Verify in the Node Manager output (in the shell where Node Manager is started) that these properties are being used; otherwise, problems may arise during migration. You should see entries similar to the following in the Node Manager output:
bond0=10.0.0.1-10.0.0.17,NetMask=255.255.255.224
bond1=10.1.0.1-10.1.0.17,NetMask=255.255.255.224
UseMACBroadcast=true
For more information, see the "Reviewing nodemanager.properties" section in the Oracle Fusion Middleware Node Manager Administrator's Guide for Oracle WebLogic Server.
Note:
The steps below are not required if the server properties (start properties) have been properly set and the Node Manager can start the servers remotely.
Set the following property in the nodemanager.properties file:
StartScriptEnabled: Set this property to true. This is required for the shiphome to enable the Node Manager to start the Managed Servers.
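For example, the corresponding entry in the nodemanager.properties file is:
StartScriptEnabled=true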
Start the Node Manager on ComputeNode1 and ComputeNode2 by running the startNodeManager.sh script. This script is located in the following directories:
On ComputeNode1: the /u01/Dept_1/admin/el01cn01/nodemanager directory
On ComputeNode2: the /u01/Dept_1/admin/el01cn02/nodemanager directory
Note:
When running Node Manager from a shared storage installation, multiple nodes are started using the same nodemanager.properties file. However, each node may require different NetMask or Interface properties. In this case, specify individual parameters on a per-node basis using environment variables. For example, to use a different interface in ComputeNoden, set the Interface environment variable as follows:
ComputeNoden> export JAVA_OPTIONS=-DInterface=bond0
Then start Node Manager after the variable has been set in the shell.
Set environment and superuser privileges for the wlsifconfig.sh script:
Ensure that your PATH environment variable includes these files:
Table 9-1 Files Required for the PATH Environment Variable
File | Located in this directory |
---|---|
wlsifconfig.sh | DOMAIN_HOME/bin/server_migration |
wlscontrol.sh | WL_HOME/common/bin |
nodemanager.domains | Node Manager home directory (for example, /u01/Dept_1/admin/el01cn01/nodemanager) |
Run the wlsifconfig.sh -listif bond0 script and verify that your network interface and the netmask (for example, 255.255.255.192 for the Dept_1 subnet used in the example configuration) are correct.
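For example, assuming that wlsifconfig.sh resides in the server_migration directory under the domain home used in this guide (a hypothetical location; adjust the path to match your environment), the check might look like this:
ComputeNode1> cd /u01/Dept_1/domains/el01cn01/base_domain/bin/server_migration
ComputeNode1> ./wlsifconfig.sh -listif bond0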
Grant sudo configuration for the wlsifconfig.sh script.
Note:
Ensure that you can run sudo /sbin/ifconfig and sudo /sbin/arping without being prompted for a password.
Configure sudo to work without a password prompt.
For security reasons, sudo should be restricted to the subset of commands required to run the wlsifconfig.sh script. For example, to set the environment and superuser privileges for the wlsifconfig.sh script, complete these steps:
Grant sudo privilege to the WebLogic user (weblogic) with no password restriction, and grant execute privileges on the /sbin/ifconfig and /sbin/arping binaries.
Make sure that the script is executable by the WebLogic user (weblogic). The following is an example of an entry inside /etc/sudoers granting sudo execution privilege for weblogic over ifconfig and arping:
weblogic ALL=NOPASSWD: /sbin/ifconfig,/sbin/arping
Note:
Contact your system administrator for the sudo and system rights as appropriate to this step.
You must configure server migration targets. To do so, complete the following:
You must configure server migration targets for the cluster (Dept1_Cluster1). Configuring Cluster Migration sets the DataSourceForAutomaticMigration property to true. Follow the steps below to configure cluster migration in a cluster:
Log in to the Oracle WebLogic Server Administration Console, using the following URL:
http://ADMINVHN1:7001/console
If you have not already done so, in the Change Center of the Administration Console, click Lock & Edit.
In the left pane of the Console, expand Environment and then select Clusters.
The Summary of Clusters page is displayed.
Click Dept1_Cluster1 for which you want to configure migration in the Name column of the table.
The Settings for Dept1_Cluster1 page is displayed.
Click the Migration tab.
Enter the following details:
For the Candidate Machines For Migratable Servers, select ComputeNode2 under Available, and then click the right arrow.
For Migration Basis, select Database.
For Data Source For Automatic Migration, select gridlink. This is the data source you created in Section 7.6.2, "Creating a GridLink Data Source on Dept1_Cluster1".
Click Save.
Click Activate Changes.
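If you prefer to script these cluster settings instead of using the Console, the following WLST sketch shows an equivalent configuration; the administrator credentials are placeholders, and it assumes the Dept1_Cluster1 cluster and gridlink data source names used above:
connect('weblogic','password','t3://ADMINVHN1:7001')
edit()
startEdit()
cd('/Clusters/Dept1_Cluster1')
cmo.setMigrationBasis('database')
cmo.setDataSourceForAutomaticMigration(getMBean('/JDBCSystemResources/gridlink'))
save()
activate()
disconnect()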
Set the Managed Servers on ComputeNode1 for server migration as follows:
If you have not already done so, in the Change Center of the Administration Console, click Lock & Edit.
In the left pane of the Console, expand Environment and then select Servers.
Select WLS1.
The Settings for WLS1 page is displayed.
Click the Migration tab.
In the Migration Configuration page, enter the following details:
Select Automatic Server Migration Enabled. This enables the Node Manager to start a failed server on the target node automatically.
For Candidate Machines, select ComputeNode2 under Available and click the right arrow.
For JMS Service Candidate Servers, select all of the Managed Servers (Table 5-2) in ComputeNode2 and click the right arrow.
Select Automatic JTA Migration Enabled. This enables the automatic migration of the JTA Transaction Recovery System on this server.
For JTA Candidate Servers, select all of the Managed Servers (Table 5-2) in ComputeNode2 and click the right arrow.
Click Save.
Click Activate Changes.
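Similarly, the Automatic Server Migration Enabled setting for an individual Managed Server can be scripted in a WLST edit session. This is a partial sketch with placeholder credentials; it covers only that one setting, and the Console steps above remain the complete procedure:
connect('weblogic','password','t3://ADMINVHN1:7001')
edit()
startEdit()
cd('/Servers/WLS1')
cmo.setAutoMigrationEnabled(true)
save()
activate()
disconnect()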
Restart the Administration Server and the servers for which server migration has been configured.
To restart the Administration Server, use the procedure in Section 5.8, "Restarting the Administration Server on ComputeNode1."
Tip:
Click Customize this table on the Summary of Servers page, and move Current Machine from the Available window to the Chosen window to view the machine on which each server is currently running. The machine shown differs from the configured machine if the server has been migrated automatically.
You must test the server migration. To verify that Server Migration is working properly, follow these steps:
Notes:
The migratable IP address should not be present on the interface of any of the candidate machines before the migratable server is started.
Ensure that a minimum of two Managed Servers in the cluster are up and running before you test the server migration.
Testing from ComputeNode1:
Stop the ComputeNode1 Managed Server. To do this, run this command:
ComputeNode1> kill -9 <pid>
pid specifies the process ID of the Managed Server (WLS1). You can identify the pid in the node by running this command:
ComputeNode1> ps -ef | grep WLS1
Watch the Node Manager console: you should see a message indicating that WLS1's floating IP has been disabled.
Wait for the Node Manager to try a second restart of WLS1. Node Manager waits for a fence period of 30 seconds before trying this restart.
Once Node Manager restarts the server, stop it again. Now Node Manager should log a message indicating that the server will not be restarted again locally.
Testing from ComputeNode2:
Watch the local Node Manager console. Thirty seconds after the last restart attempt of WLS1 on ComputeNode1, Node Manager on ComputeNode2 should indicate that the floating IP for WLS1 is being brought up and that the server is being restarted on this compute node.
Verifying from the Administration Console
You can also verify server migration from the Administration Console as follows:
Log in to the Administration Console.
In the left pane, click the name of the domain.
Click the Monitoring tab and then the Migration subtab.
The Migration Status table provides information on the status of the migration.
By completing the configuration procedures described in Chapter 5, "Configuring Oracle Fusion Middleware", you have set up and configured the environment for Dept_1, which uses ComputeNode1 and ComputeNode2. Similarly, you can configure the environment for another department, such as Dept_2, which uses ComputeNode3 and ComputeNode4.
If you wish to isolate the application deployment and environment for Dept_1 from that of Dept_2, then you must create separate IP subnets for both Dept_1 and Dept_2 over the default IP over InfiniBand (IPoIB) link. For more information about creating such subnets, see Section 3.3.3.5.1, "Application Isolation by IP Subnetting over IPoIB".
In some scenarios, the Dept_1 application may require communication with the Dept_2 application. To enable the Dept_1 application (deployed on ComputeNode1 and ComputeNode2) to communicate with the Dept_2 application (deployed on ComputeNode3 and ComputeNode4), you must set up IP aliasing for the two subnets to access each other. To set up this IP aliasing, see Section 3.3.3.5.2, "Allowing a Compute Node to Access Two Different Subnets Simultaneously".
When you scale out the topology, you add a new Managed Server to new compute nodes, such as ComputeNode3. In this example procedure, WLS9 is created on ComputeNode3 and added to Dept1_Cluster1.
Before performing the steps in this section, verify that your environment meets the following requirements:
There must be existing compute nodes, such as ComputeNode1 and ComputeNode2, running Managed Servers configured with Oracle WebLogic Server within the topology.
The new compute node, such as ComputeNode3, can access the existing home directories for Oracle WebLogic Server, whose binaries are installed in a separate share on the Sun ZFS Storage 7320 appliance. For more information, see Section 3.4, "Shared Storage and Recommended Project and Share Structure".
In the Exalogic environment, ORACLE_HOME and WL_HOME directories are shared by multiple servers in different compute nodes. Therefore, Oracle recommends that you keep the Oracle Inventory and Middleware home list in those nodes updated for consistency in the installations and application of patches. To update the oraInventory in a compute node and "attach" an installation in the shared file system on Sun ZFS Storage 7320 appliance to the inventory, use the ORACLE_HOME/oui/bin/attachHome.sh script. To update the Middleware home list to add or remove a WL_HOME, edit the <user_home>/bea/beahomelist file. See the steps below.
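As an illustration of this recommendation, the commands on a new compute node might resemble the following sketch, where ORACLE_HOME stands for the OUI-based Oracle home being attached and the Middleware home path is the one used in this example configuration:
ComputeNode3> ORACLE_HOME/oui/bin/attachHome.sh
ComputeNode3> echo "/u01/app/FMW_Product1/Oracle/Middleware" >> ~/bea/beahomelist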
You must add the Managed Servers on ComputeNode3 as follows:
Mounting Existing Oracle Fusion Middleware Home and Domain on ComputeNode3
Propagating Domain Configuration from ComputeNode1 to ComputeNode3 Using pack and unpack Utilities
Setting Environment and Superuser Privileges for the wlsifconfig.sh Script
Configuring Network Channels for Managed Servers on ComputeNode3
On ComputeNode3, mount the existing Oracle Fusion Middleware Home, which should include the Oracle WebLogic Server installation (located at /u01/app/FMW_Product1/Oracle/Middleware) and the domain directory (located at /u01/Dept_1/domains/el01cn03), and ensure that the new compute node (ComputeNode3) has access to this directory, just like the rest of the compute nodes in the domain (ComputeNode1 and ComputeNode2). You must complete the following:
On the command line, run the following commands on ComputeNode3 to create the necessary mount points:
# mkdir -p /u01/common/patches
# mkdir -p /u01/common/general
# mkdir -p /u01/FMW_Product1/Oracle/wlserver_10.3
# mkdir -p /u01/FMW_Product1/webtier_1115
# mkdir -p /u01/Dept_1/domains/el01cn03
# mkdir -p /u01/el01cn03/dumps
# mkdir -p /u01/el01cn03/general
where el01cn03 is the host name assigned to ComputeNode3.
After creating the mount points, you must add entries for the mount points to the /etc/fstab (Linux) or /etc/vfstab (Solaris) file on ComputeNode3.
On ComputeNode3, log in as the root user and add the following entries to the /etc/fstab (Linux) or /etc/vfstab (Solaris) file in a text editor, such as vi:
Oracle Linux
el01sn01-priv:/export/common/general /u01/common/general nfs4 rw,bg,hard,nointr,rsize=131072,wsize=131072
el01sn01-priv:/export/common/patches /u01/common/patches nfs4 rw,bg,hard,nointr,rsize=131072,wsize=131072
el01sn01-priv:/export/el01cn03/dumps /u01/el01cn03/dumps nfs4 rw,bg,hard,nointr,rsize=131072,wsize=131072
el01sn01-priv:/export/el01cn03/general /u01/el01cn03/general nfs4 rw,bg,hard,nointr,rsize=131072,wsize=131072
el01sn01-priv:/export/Dept_1/domains /u01/Dept_1/domains nfs4 rw,bg,hard,nointr,rsize=131072,wsize=131072
el01sn01-priv:/export/Dept_1/jmsjta /u01/Dept_1/jmsjta nfs4 rw,bg,hard,nointr,rsize=135268,wsize=135168
el01sn01-priv:/export/Dept_1/admin /u01/Dept_1/admin nfs4 rw,bg,hard,nointr,rsize=131072,wsize=131072
el01sn01-priv:/export/FMW_Product1/wlserver_1034 /u01/FMW_Product1/wlserver_1034 nfs4 rw,bg,hard,nointr,rsize=131072,wsize=131072
el01sn01-priv:/export/FMW_Product1/webtier_1115 /u01/FMW_Product1/webtier_1115 nfs4 rw,bg,hard,nointr,rsize=131072,wsize=131072
Oracle Solaris
el01sn01-priv:/export/common/general - /u01/common/general nfs - yes rw,bg,hard,nointr,rsize=131072,wsize=131072,vers=4
el01sn01-priv:/export/common/patches - /u01/common/patches nfs - yes rw,bg,hard,nointr,rsize=131072,wsize=131072,vers=4
el01sn01-priv:/export/el01cn03/dumps - /u01/el01cn03/dumps nfs - yes rw,bg,hard,nointr,rsize=131072,wsize=131072,vers=4
el01sn01-priv:/export/el01cn03/general - /u01/el01cn03/general nfs - yes rw,bg,hard,nointr,rsize=131072,wsize=131072,vers=4
el01sn01-priv:/export/Dept_1/domains - /u01/Dept_1/domains nfs - yes rw,bg,hard,nointr,rsize=131072,wsize=131072,vers=4
el01sn01-priv:/export/Dept_1/jmsjta - /u01/Dept_1/jmsjta nfs - yes rw,bg,hard,nointr,rsize=135268,wsize=135168,vers=4
el01sn01-priv:/export/Dept_1/admin - /u01/Dept_1/admin nfs - yes rw,bg,hard,nointr,rsize=131072,wsize=131072,vers=4
el01sn01-priv:/export/FMW_Product1/wlserver_1034 - /u01/FMW_Product1/wlserver_1034 nfs - yes rw,bg,hard,nointr,rsize=131072,wsize=131072,vers=4
el01sn01-priv:/export/FMW_Product1/webtier_1115 - /u01/FMW_Product1/webtier_1115 nfs - yes rw,bg,hard,nointr,rsize=131072,wsize=131072,vers=4
Note:
In the above entries, el01sn01-priv is used as the example host name of the Sun ZFS Storage 7320 appliance. You can also use the IPoIB IP address assigned to the storage appliance.
Save the file and exit.
On the command line, run the following commands as the root user on ComputeNode1 and ComputeNode2 to create the necessary mount points:
# mkdir -p /u01/Dept_1/admin/el01cn03/nodemanager
# mkdir -p /u01/Dept_1/jmsjta/base_domain/Dept1_Cluster1/jms
# mkdir -p /u01/Dept_1/jmsjta/base_domain/Dept1_Cluster1/tlogs
To mount the volumes, complete the following steps:
On ComputeNode3, ensure that the mount entries are added to the /etc/fstab file correctly.
Run the mount -a command on ComputeNode3 to mount the volumes.
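After running mount -a, you can confirm that the shares exported by the storage appliance are mounted, for example:
ComputeNode3> mount | grep el01sn01-priv
ComputeNode3> df -h /u01/Dept_1/domains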
You have created the domain (base_domain) on ComputeNode1. You must propagate the domain configuration to ComputeNode3 as follows:
Run the pack command on ComputeNode1 to create a template pack using the following commands:
ComputeNode1> cd /u01/app/FMW_Product1/Oracle/Middleware/wlserver_10.3/common/bin
ComputeNode1> ./pack.sh -managed=true -domain=/u01/Dept_1/domains/el01cn01/base_domain -template=basedomaintemplate.jar -template_name=basedomain_template
Run the unpack command on ComputeNode3 to unpack the template.
ComputeNode3> cd /u01/app/FMW_Product1/Oracle/Middleware/wlserver_10.3/common/bin
ComputeNode3> ./unpack.sh -domain=/u01/Dept_1/domains/el01cn03/base_domain -template=basedomaintemplate.jar
Complete the procedure described in Section 5.7, "Configuring Java Node Manager" and ensure that you make the following changes:
New Node Manager directory: /u01/Dept_1/admin/el01cn03/nodemanager
Listen Address= 192.168.10.3
Note:
This IP address is the Bond0 IP address of ComputeNode3.
Domain Home= /u01/Dept_1/domains/el01cn03/base_domain
DomainsFile = /u01/Dept_1/admin/el01cn03/nodemanager/nodemanager.domains
LogFile = /u01/Dept_1/admin/el01cn03/nodemanager/nodemanager.log
In addition, complete the following steps on ComputeNode3:
Start WLST as follows:
ComputeNode3> cd /u01/app/FMW_Product1/Oracle/Middleware/wlserver_10.3/common/bin
ComputeNode3> ./wlst.sh
Use the connect command to connect WLST to a WebLogic Server instance, as in the following example:
wls:/offline> connect('username','password','t3://ADMINVHN1:7001')
Once you are in the WLST shell, run nmEnroll using the following syntax:
nmEnroll([domainDir], [nmHome])
For example,
nmEnroll('/u01/Dept_1/domains/el01cn03/base_domain', '/u01/Dept_1/admin/el01cn03/nodemanager')
Running nmEnroll ensures that the correct Node Manager user and password token are supplied to each Managed Server. Once these are available for each Managed Server, you can use nmConnect in a production environment.
Disconnect WLST from the WebLogic Server instance by entering disconnect(), and then exit the WLST shell by entering exit().
Set environment and superuser privileges for the wlsifconfig.sh script:
Ensure that your PATH environment variable includes these files:
Table 9-2 Files Required for the PATH Environment Variable
File | Located in this directory |
---|---|
wlsifconfig.sh | DOMAIN_HOME/bin/server_migration |
wlscontrol.sh | WL_HOME/common/bin |
nodemanager.domains | Node Manager home directory (for example, /u01/Dept_1/admin/el01cn03/nodemanager) |
Run the wlsifconfig.sh -listif bond0 script and verify that your network interface and the netmask (for example, 255.255.255.192 for the Dept_1 subnet used in the example configuration) are correct.
Grant sudo configuration for the wlsifconfig.sh script.
Note:
Ensure that you can run sudo /sbin/ifconfig and sudo /sbin/arping without being prompted for a password.
Configure sudo to work without a password prompt.
For security reasons, sudo should be restricted to the subset of commands required to run the wlsifconfig.sh script. For example, to set the environment and superuser privileges for the wlsifconfig.sh script, complete these steps:
Grant sudo privilege to the WebLogic user (weblogic) with no password restriction, and grant execute privileges on the /sbin/ifconfig and /sbin/arping binaries.
Make sure that the script is executable by the WebLogic user (weblogic). The following is an example of an entry inside /etc/sudoers granting sudo execution privilege for weblogic over ifconfig and arping:
weblogic ALL=NOPASSWD: /sbin/ifconfig,/sbin/arping
Note:
Contact your system administrator for the sudo and system rights as appropriate to this step.
Create a new machine for ComputeNode3, and add the machine to the domain. Complete the following steps:
Log in to the Oracle WebLogic Administration Console.
If you have not already done so, in the Change Center of the Administration Console, click Lock & Edit.
In the left pane of the Console, expand Environment and select Machines.
The Summary of Machines page is displayed.
Click New.
The Create a New Machine page is displayed.
Enter the following details:
Enter ComputeNode3 as the name for the new machine in the Name field.
Select UNIX from the drop-down list in the Machine OS field.
Click Next.
The Node Manager Properties page is displayed.
Enter the following details:
Select Plain from the drop-down list in the Type field.
Listen Address: 192.168.10.3
Note:
This address is the example BOND0 IP address of ComputeNode3.
Listen Port: 5556
Node Manager Home: /u01/Dept_1/admin/el01cn03/nodemanager
Click Finish.
The new machine is displayed in the Machines table.
You can create a Managed Server on ComputeNode3 in an existing domain (base_domain), which is shared by ComputeNode1 and ComputeNode2. Complete the following:
If you have not already done so, in the Change Center of the Administration Console, click Lock & Edit.
In the left pane of the Console, expand Environment, and then Servers.
The Summary of Servers page is displayed.
In the Servers table, click New.
The Create a New Server page is displayed.
Enter the following details:
Enter WLS9 as the name of the server in the Name field.
Listen Address: 10.0.0.9
Note:
This address is the floating IP address assigned to the new Managed Server. This address uses the BOND0 interface. Ensure that the address is configured before you start the new Managed Server.
Listen Port: 7003
Check Yes, make this server a member of an existing cluster, and then select Dept1_Cluster1.
Click Next.
Review the configuration options you have chosen, and click Finish.
Click Activate Changes.
You must associate the Managed Server you created with the new machine (ComputeNode3). Complete the following steps:
If you have not already done so, in the Change Center of the Administration Console, click Lock & Edit.
In the left pane of the console, expand Environment, and then Servers.
The Summary of Servers page is displayed.
Select WLS9.
The Settings for WLS9 page is displayed.
Select Configuration, and then General.
In the Machine field, select ComputeNode3.
Click Save.
To activate these changes, in the Change Center of the Administration Console, click Activate Changes.
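If you want to script the creation of WLS9 and its assignment to the cluster and machine instead of using the Console, the following WLST sketch shows one way to do it; the credentials are placeholders, and it assumes that the ComputeNode3 machine and the Dept1_Cluster1 cluster already exist:
connect('weblogic','password','t3://ADMINVHN1:7001')
edit()
startEdit()
cd('/')
create('WLS9','Server')
cd('/Servers/WLS9')
cmo.setListenAddress('10.0.0.9')
cmo.setListenPort(7003)
cmo.setCluster(getMBean('/Clusters/Dept1_Cluster1'))
cmo.setMachine(getMBean('/Machines/ComputeNode3'))
save()
activate()
disconnect()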
For the Managed Server on ComputeNode3 (WLS9), you can create the following network channels:
Create the HTTP client network channel for the Managed Server on ComputeNode3 by following the instructions in Section 5.12.3.1, "HTTP Client Channel", and enter the required properties as described in Table 9-3.
Create the T3 client network channel for the Managed Server on ComputeNode3 by following the instructions in Section 5.12.3.2, "T3 Client Channel", and enter the required properties as described in Table 9-4.
Table 9-4 Network Channels Properties
Managed Server | Name | Protocol | Listen Address | Listen Port |
---|---|---|---|---|
WLS9 | T3ClientChannel9 | t3 | 10.1.0.9 | 7003 |
Note:
These IPs use the Bond1 interface.
Configure the persistent store for the new server. This should be a location visible from other compute nodes, as recommended in Section 3.4, "Shared Storage and Recommended Project and Share Structure."
From the Administration Console, select the Server_name, and then the Services tab. Under Default Store, in Directory, enter the path to the folder where you want the default persistent store to store its data files.
Start Node Manager on the new compute node (ComputeNode3). To start Node Manager, use the installation in shared storage from the existing compute nodes (ComputeNode1 or ComputeNode2), and start Node Manager by passing the host name of the new node as a parameter as follows:
ComputeNode3> /u01/Dept_1/admin/el01cn03/nodemanager/startNodeManager.sh
Start and test the new Managed Server from the Oracle WebLogic Server Administration Console.
Shut down all the existing Managed Servers in the cluster.
Ensure that the newly created Managed Server WLS9 is running.
You must configure server migration targets. To do so, complete the following:
You must configure server migration targets for the Dept1_Cluster1 cluster. Configuring Cluster Migration sets the DataSourceForAutomaticMigration property to true. Follow the steps below to configure cluster migration in a cluster:
Log in to the Oracle WebLogic Server Administration Console, using the following URL:
http://ADMINVHN1:7001/console
If you have not already done so, in the Change Center of the Administration Console, click Lock & Edit.
In the left pane of the Console, expand Environment and then select Clusters.
The Summary of Clusters page is displayed.
Click Dept1_Cluster1 for which you want to configure migration in the Name column of the table.
The Settings for Dept1_Cluster1 page is displayed.
Click the Migration tab.
Enter the following details:
For the Candidate Machines For Migratable Servers, select ComputeNode2 under Available, and then click the right arrow.
For Migration Basis, select Database.
For Data Source For Automatic Migration, select gridlink. This is the data source you created in Section 7.6.2, "Creating a GridLink Data Source on Dept1_Cluster1".
Click Save.
Click Activate Changes.
Set the Managed Servers on ComputeNode3 for server migration as follows:
If you have not already done so, in the Change Center of the Administration Console, click Lock & Edit.
In the left pane of the Console, expand Environment and then select Servers.
Select WLS9.
The Settings for WLS9 page is displayed.
Click the Migration tab.
In the Migration Configuration page, enter the following details:
Select Automatic Server Migration Enabled. This enables the Node Manager to start a failed server on the target node automatically.
For Candidate Machines, select ComputeNode3 under Available and click the right arrow.
For JMS Service Candidate Servers, select all of the Managed Servers (Table 5-2) in ComputeNode2 and click the right arrow.
Select Automatic JTA Migration Enabled. This enables the automatic migration of the JTA Transaction Recovery System on this server.
For JTA Candidate Servers, select all of the Managed Servers (Table 5-2) in ComputeNode2 and click the right arrow.
Click Save.
Click Activate Changes.
Restart the Administration Server and the servers for which server migration has been configured.
To restart the Administration Server, use the procedure in Section 5.8, "Restarting the Administration Server on ComputeNode1."
Tip:
Click Customize this table on the Summary of Servers page, and move Current Machine from the Available window to the Chosen window to view the machine on which each server is currently running. The machine shown differs from the configured machine if the server has been migrated automatically.
Test server migration for this new server. Follow these steps from the node where you added the new server:
Abruptly stop the WLS9 Managed Server by running kill -9 <pid> on the PID of the Managed Server. You can identify the PID of the node by running ps -ef | grep WLS9.
In the Node Manager Console, you should see a message indicating that WLS9's floating IP has been disabled.
Wait for the Node Manager to try a second restart of WLS9. Node Manager waits for a fence period of 30 seconds before trying this restart.
Once Node Manager restarts the server, stop it again. Now Node Manager should log a message indicating that the server will not be restarted again locally.
To scale down the topology by deleting the new Managed Servers on ComputeNode3, complete the following steps:
When you delete a Managed Server, WebLogic Server removes its associated configuration data from the domain's configuration file (config.xml). All of the configuration data for the server will be deleted. For example, any network channels that you created for the server are deleted, but applications and EJBs that are deployed on the server will not be deleted.
Note:
Ensure that you have stopped the Managed Server before you delete it.
To delete a Managed Server such as WLS9:
Log in to the Oracle WebLogic Administration Console.
If you have not already done so, click Lock & Edit in the Change Center.
In the left pane of the Console, select Environment, and then Servers.
The Summary of Servers page is displayed.
Select the check box next to WLS9 in the Names column of the table and click Delete.
Confirm your deletion request.
Click Activate Changes.
You must delete the machine (ComputeNode3) by completing the following steps:
Log in to the Oracle WebLogic Administration Console.
If you have not already done so, in the Change Center of the Administration Console, click Lock & Edit.
In the left pane of the Console, expand Environment and select Machines.
The Summary of Machines page is displayed.
Select the check box next to ComputeNode3 and click Delete.
A dialog displays asking you to confirm your deletion request.
Click Yes to delete the machine.
To activate these changes, in the Change Center of the Administration Console, click Activate Changes.
This section describes backup and recovery recommendations for Oracle Exalogic users.
It contains the following topics:
Important Artifacts to Back Up
Table 9-5 lists the WebLogic artifacts that you should back up frequently in the Oracle Exalogic enterprise deployment.
Table 9-5 WebLogic Artifacts to Back Up in the Oracle Exalogic Enterprise Deployment
Type | Location |
---|---|
JMS persistent messages and JTA tlogs | Dept_1/jmsjta share on the Sun ZFS Storage 7320 appliance |
Read-write home directories, such as OS syslogs and crash dump | Compute node-specific shares (such as dumps and general) on the Sun ZFS Storage 7320 appliance |
Read-only home directories, such as ORACLE_HOME, OS patches, and so on. | User-defined share on the Sun ZFS Storage 7320 appliance |
Domain Home directories of all Exalogic compute nodes. Each compute node has its own Domain Home directory. Backing up the configuration data is important to preserve the domain configuration changes. | User-defined share on the Sun ZFS Storage 7320 appliance |
For more information about backup and recovery, refer to the following sections in the Oracle Fusion Middleware Administrator's Guide.
Backing Up Your Environment
Recovering Your Environment
boot.properties File for Restored Managed Servers
If a Managed Server is restored from a backup, you must re-create the boot.properties file located in /u01/Dept_1/domains/el01cn01/base_domain/servers/AdminServer/security for the restored Managed Server on its compute node.
To do this, perform these steps:
Create the above directory if it does not already exist.
In a text editor, create a file called boot.properties in the security directory created in the previous step, and enter the following lines in the file:
username=<admin_user>
password=<password>
Start the Managed Server.
This section provides recommendations and considerations for patching Oracle software and updating firmware in the Oracle Exalogic environment.
It contains the following sections:
If you are an Oracle Linux user, Oracle Linux is pre-installed on each of the Exalogic compute nodes. Any operating system patches reside on the Sun ZFS Storage 7320 appliance, which is used by all compute nodes.
Oracle recommends that you patch the Oracle Linux operating system installed on compute nodes simultaneously. This practice helps you maintain the operating system environment. Therefore, you should patch the operating system at the Exalogic machine level - all compute nodes at once.
If you are updating Oracle Solaris installed on your Exalogic compute nodes, Oracle recommends that you update the Oracle Solaris operating system installed on compute nodes simultaneously. This practice helps you maintain the operating system environment. Therefore, you should update the operating system at the Exalogic machine level - all compute nodes at once.
Updates to Oracle Solaris can be downloaded from the support repository, which is available at the following URL:
http://pkg.oracle.com/solaris/release
This URL is restricted. It can only be accessed by the IPS pkg commands. Ensure that you register and obtain the key and certificate from https://pkg-register.oracle.com before you download any Oracle Solaris 11 Express Support Repository Updates (SRUs).
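The following is a minimal sketch of how the key and certificate obtained during registration might be configured for the IPS pkg commands; the key and certificate paths are placeholders for the files you download during registration:
# pkg set-publisher -k /path/to/your.key.pem -c /path/to/your.certificate.pem -G '*' -g https://pkg.oracle.com/solaris/release/ solaris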
For more information about the support repository and SRUs (support repository updates), see "Support FAQ" in the Oracle Solaris 11 Express Overview and "Support Package Repositories Explained".
Oracle Exalogic Machine uses the Sun ZFS Storage 7320 appliance that allows all Oracle WebLogic instances in the Oracle Exalogic system, including instances running in different Oracle WebLogic Server domains, to share the same Oracle WebLogic Server installation.
Topologies using shared installations across Oracle WebLogic Server domains and physical servers offer some advantages over topologies using dedicated installations per domain or physical server. Shared installation topologies create fewer sets of product binaries to be managed, simplify the mapping of Oracle WebLogic Server instances to the installation being used, and enforce maximum consistency of Oracle WebLogic Server versions and maintenance levels in the Oracle Exalogic system. In some environments, shared installation topologies may result in management efficiencies.
However, in some scenarios, you may require multiple Oracle WebLogic Server installations within the Oracle Exalogic system, each dedicated to specific WebLogic Server domains or to compute nodes. Topologies with multiple dedicated installations provide more management flexibility, particularly when maintenance considerations are important.
Applications running in different WebLogic Server domains may have different maintenance requirements. The frequency of their updates may vary, and the update requirements may affect different functional areas of the WebLogic Server product, resulting in diverse patch requirements. They may also host applications from different departments or business units within the same organization, which require that their applications and systems, including the Oracle WebLogic Server products being used in those applications, are isolated from other applications to minimize cross-application dependencies. Therefore, Oracle recommends that you evaluate your specific WebLogic Server maintenance requirements when determining the installation topology that will be used within the Oracle Exalogic system.
One-off patches are provided to address specific functional issues within the WebLogic Server product. One-off patches are generally applied only when required to resolve problems observed in the user's environment. While WebLogic Server patches can be applied on a per-domain basis (or on a more fine-grained basis), Oracle recommends that one-off patches be applied on an installation-wide basis. One-off patches applied to a WebLogic Server installation using this recommended practice affect all domains and servers sharing that installation.
Maintenance releases are applied on an installation-wide basis, and once applied, will affect all domains and servers sharing that installation. Oracle recommends that you create a unique Oracle WebLogic Server installation for each set of domains and compute nodes that must be maintained independently of their peers in the Oracle Exalogic system.
You should evaluate your specific requirements for maintaining domains and compute nodes within the Oracle Exalogic system and how to group (or isolate) domains and compute nodes from a maintenance perspective.
For example, you can group domains and compute nodes based on the following:
Departments or business units they support
Required service levels
Current and future requirements for isolating domains
Historical practice
After you arrive at a logical group of domains and compute nodes, you can set up an Oracle WebLogic Server installation for each group of domains and compute nodes that must be maintained independently.
To patch an Oracle WebLogic Server installation, you must use Smart Update. For more information, see Oracle Smart Update Installing Patches and Maintenance Packs.
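Smart Update also provides a command-line interface, bsu.sh, in the utils/bsu directory of the Middleware home. The following invocation is a sketch with a placeholder patch ID; the Smart Update documentation referenced above is the authoritative source for the exact options:
ComputeNode1> cd /u01/app/FMW_Product1/Oracle/Middleware/utils/bsu
ComputeNode1> ./bsu.sh -install -patch_download_dir=/u01/common/patches -patchlist=PATCH_ID -prod_dir=/u01/app/FMW_Product1/Oracle/Middleware/wlserver_10.3 -verbose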
Oracle recommends that you patch software or update firmware for the storage appliance, switches, and ILOM in the Exalogic environment on a system-wide basis.