2 Preparing for Zero Downtime Patching

Before configuring a patching workflow, ensure that you perform the required preliminary steps such as installing and patching a new Oracle home, installing a new Java version, or installing updated applications on each node. There are also known restrictions to consider before preparing for and creating a ZDT patching workflow.

ZDT Patching Restrictions

For the rollout orchestration to be successful, you must keep in mind certain restrictions before you configure a patching workflow.

Prior to preparing for and creating a ZDT patching workflow, consider the following restrictions:

  • The Managed Servers that are included in the workflow must be part of a cluster, and the cluster must span two or more nodes.

  • If you want to roll out an update to the Managed Servers without targeting and updating the Administration Server, then ensure that the Administration Server is on a different node than any of the Managed Servers being updated.

  • If you are updating to a patched Oracle home, the current Oracle home must be installed locally on each node that will be included in the workflow. Although it is not required, Oracle also recommends that the Oracle home be in the same location on each node.

  • When you are rolling out a new Oracle home using WLST commands, you must specify the path to the JAR archive that contains the Oracle home to roll out. Specifying a local directory is not supported when rolling out a new Oracle home; a local directory path is valid only when rolling back, in which case it must point to the backup Oracle home directory that was created by the previous rollout you want to revert to. (A brief WLST sketch follows this list.)

  • If Managed Servers on a node belong to different clusters and those clusters share the same Oracle home, then including one of those clusters in a workflow requires that you also include the other. For example, if Node 1 has Managed Server 1 in Cluster 1 and Managed Server 2 in Cluster 2, and both clusters share the same Oracle home, then a workflow that includes Cluster 1 must also include Cluster 2. This applies to Java home, Oracle home, and application update rollouts.

  • The domain directory must reside outside of the Oracle home directory.

  • (Windows only) When you use the WebLogic Scripting Tool (WLST) to initiate a rollout of a new Oracle home, you cannot run WLST from any Oracle home that will be updated as part of the workflow. Instead, use one of the following options:

    • Run WLST from an Oracle home on a node that will not be included in the workflow. This Oracle home must be the same version as the Oracle home that is being updated on other nodes.

    • Run WLST from another Oracle home that is not part of the domain being updated. This Oracle home must be the same version as the Oracle home that is being updated. It can reside on any node, including the Administration Server node for the domain being updated.

    • Use the WebLogic Server Administration Console to initiate the workflow.

  • (Windows only) Windows file locks can cause ZDT rollout operations to fail. Resolve the following common file handle lock issues before executing a rollout on Windows:

    • When you deploy an application by using the Administration Console, the Administration Server may hold a lock on the application source file. If this lock is not released, it could prevent subsequent application rollouts from functioning properly. To release the lock, you must log out of the Administration Console anytime after deploying the application and before initiating the rollout.

    • Using the WLST client on the Administration Server will cause the Oracle home directory to be locked. This will cause any rollout on that node, including a domain rollout, to fail. To avoid this, use a WLST client installed on a node that is not targeted by the rollout, or initiate the rollout using the Administration Console.

    • Opening command terminals or applications residing in any directory under Oracle home may cause a file lock. As a result, you will be unable to update that particular Oracle home.

    • Any command terminal or application that references the application source file or a JAR file may cause a file lock, making it impossible to update that particular application.
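
For reference, the following is a minimal WLST sketch of the Oracle home rollout command referred to in the restrictions above. It assumes the command takes the rollout target, the archive to roll out, a backup location, and a rollback flag, in that order; the cluster name, paths, and connection details are placeholders, and the complete syntax and options are described in Configuring and Monitoring Workflows.

    # Connect to the Administration Server, then start an Oracle home rollout
    # using the JAR archive that was distributed to each node (placeholder values).
    connect('weblogic', 'password', 't3://adminhost:7001')
    progress = rolloutOracleHome('Cluster1', '/u01/images/OH-patch1.jar', '/u01/oraclehomes/home_backup', 'FALSE')
    # The returned progress object can be used to monitor the workflow.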

Preparing to Migrate Singleton Services

ZDT rollouts provide support to migrate singleton services, such as JMS and JTA, using the service migration feature of WebLogic Server. For better control of service migration during a rollout, you can also use the JSON file-based migration option that ZDT supports.

All ZDT rollouts require a restart of the servers that are included in the rollout. One feature of the rollout is detection and handling of singleton services, such as Java Transaction API (JTA) and Java Message Service (JMS). To keep these singleton services highly available during the rollout operation, ZDT patching takes advantage of the service migration mechanisms supported by WebLogic Server. For singleton services in your environment, service migration can be configured in either of the following ways:

  • For migrating a singleton service that is configured using migratable targets, the service migration is configured as described in Service Migration in Administering Clusters for Oracle WebLogic Server. If a service is configured using migratable targets and the migration policy is set to exactly-once, then the service automatically migrates during the graceful shutdown of a server. If, however, the migration policy for a service is manual or failure-recovery, then you must take steps to ensure that the service is migrated safely during server shutdown. To achieve this, you must define the migration properties in the JSON file as described in Creating a JSON File for Migrating Singleton Services.

    You must bear in mind the following restrictions when migrating singleton services that are configured using migratable targets:

    • The data store for JMS servers must reside at a shared location to be used by the members of the cluster, without which the user might experience loss of messages. For more information, see Using Shared Storage in Fusion Middleware High Availability Guide.

    • The ClusterMBean must be configured by using the setServiceActivationRequestResponseTimeout method, and its value must be large enough to accommodate the time the migration takes to succeed (see the WLST sketch following this list).

    • The JNDI NameNotFoundException is returned during lookup for JMS connection factories and destinations. This is a known limitation. For information about this limitation and its workaround, see note 1556832.1 at My Oracle Support.

    • As services migrate during the rollout, JNDI lookups for JMS connection factories and destinations fail. As in other cases of server failure, JMS applications attempt to reconnect to another available server for a non-deterministic period until the migration succeeds. For more information about this behavior, see Recovering from a Server Failure in Developing JMS Applications for Oracle WebLogic Server.

  • For migrating a singleton service that is configured using the JMS cluster configuration, the service migration is configured (depending on your cluster type) as described in Simplified JMS Cluster and High Availability Configuration in Administering JMS Resources for Oracle WebLogic Server. If a service is configured using the JMS Cluster configuration, then the migration-policy must be set to Always to enable the automatic migration of services during the graceful shutdown of a server. If the migration-policy is On-Failure or Off, then you must take steps to ensure that the service is migrated safely during server shutdown. You must also ensure that the automatic restart-in-place option is explicitly disabled when using this simplified HA service migration model.
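
The ClusterMBean timeout mentioned in the restrictions above can be set with WLST. The following is a minimal sketch, assuming a cluster named Cluster1 and a placeholder value of 300; the connection details are placeholders, and the value you choose must allow enough time for the migration to complete (see the ClusterMBean reference for the unit and default).

    connect('weblogic', 'password', 't3://adminhost:7001')
    edit()
    startEdit()
    # Placeholder cluster name and timeout value.
    cd('/Clusters/Cluster1')
    cmo.setServiceActivationRequestResponseTimeout(300)
    save()
    activate()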


Note:

ZDT rollout allows you to specify whether a singleton service should be migrated before its hosting server is shut down during patching. However, you cannot specify migration to a server on the same machine, because all servers on a machine are shut down during a rollout, which would cause unavoidable downtime for users. Always specify migration of services to a server on a different machine; otherwise, the rollout might fail.

Service migration involves shutting down one or more singleton services on the first server being rolled out and making the service available on the second server while the rollout is in progress. Upon successful completion of the rollout, the services are migrated back to the newly patched first server. Because this process restarts singleton services, expect a brief period during which a service has been shut down on the first server but has not yet fully started on the second server; during that window the service is unavailable and applications may experience a brief outage. The length of this period depends on factors such as hardware (machine and network) performance, cluster size, server startup time, and, for JMS, the persistent message backlog.

Creating a JSON File for Migrating Singleton Services

To ensure that the singleton service is migrated safely during server shutdown, you must perform the following tasks:

  • Create a JSON file to define migration properties for such services, as described in this section.

  • Configure the rollout to use the JSON file as described in Configuring and Monitoring Workflows.

The JSON file must start with the following line:

{"migrations":[

Each service migration that you need to perform is defined using the parameters described in the following table.

Parameter Description

source

The name of the source server from which the service is to be migrated. This parameter is required.

destination

For migrationType of jms, jta, or all, the name of the destination server to which the service is to be migrated.

For migrationType of server, the name of another machine (node) in the domain on which Node Manager is running.

This parameter is required if the migrationType is jms, jta, server, or all.

migrationType

The type of migration, which can be one of the following types:

  • jms — Migrate all JMS migratable targets from the source server to the destination server.

  • jta — Migrate all JTA services from the source server to the destination server.

  • server — Invoke Whole Server Migration to perform a server migration. The destination must be a machine (node) on which Node Manager is running.

  • all — Migrate all services (for example, JTA and JMS) from the source server to the destination server.

  • none — Disable service migration from the source server. If you specify this type, failback and destination are not needed.

failback

If set to true, a failback operation is performed. Failback restores a service to its original hosting server, the server on which it was running before the rollout.

The default value is false (no failback).

Note: A JTA service automatically fails back when it is invoked for migration. Therefore, do not use the failback option for JTA services, as it does not apply to them; the rollout fails if you specify the failback option for a JTA migration.

The following sample JSON file shows how to define various migration scenarios.

    {"migrations":[           

# Migrate all JMS migratable targets on server1 to server2. Perform a failback
    {
    "source":"server1",                
    "destination":"server2",
    "migrationType":"jms",
    "failback":"true"
    },

# Migrate only JTA services from server1 to server3. Note that JTA migration
# does not support the failback option, as it is not needed.
    {
    "source":"server1",
    "destination":"server3",
    "migrationType":"jta"
    },

# Disable all migrations from server2
    {
    "source":"server2",
    "migrationType":"none" 
    },

# Migrate all services (for example, JTA and JMS) from server3 to server1 with
# no failback
    {
    "source":"server3",
    "destination":"server1",
    "migrationType":"all"
    },
 
# Use Whole Server Migration to migrate server4 to the node named machine5 with
# no failback
    {
    "source":"server4",
    "destination":"machine5",
    "migrationType":"server"
    }
 
    ]}
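
Because an invalid migration file is typically not detected until the rollout runs, it can help to sanity-check the file beforehand. The following optional Python sketch is not part of the product tooling; it assumes the file is strict JSON (without the explanatory # comment lines shown in the sample above), and the file path is a placeholder.

    import json

    # Placeholder path to the migration properties file.
    with open('/u01/zdt/migrations.json') as f:
        doc = json.load(f)

    for entry in doc['migrations']:
        mtype = entry.get('migrationType', 'none')
        if 'source' not in entry:
            print('Missing required "source" in entry: %s' % entry)
        if mtype in ('jms', 'jta', 'server', 'all') and 'destination' not in entry:
            print('Missing "destination" for migrationType %s: %s' % (mtype, entry))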

Preparing to Roll Out a Patched Oracle Home

Before rolling out a patched Oracle home to your Managed Servers, you must create an Oracle home archive and distribute it to each node. Use OPatchAuto to clone your existing Oracle home or manually create a second Oracle home and use the OPatch utility to apply patches to it.

There are two ways to prepare for rolling out a patched Oracle home to your Managed Servers:

  • Use OPatchAuto to clone and patch your existing Oracle home and to distribute the resulting archive, as described in Creating a Patched Oracle Home Archive Using OPatchAuto and Distributing the Patched Archive to Each Node Using OPatchAuto.

  • Manually create a second Oracle home, apply patches to it using the OPatch utility, and then create and distribute an archive of it, as described in Creating a Second Oracle Home and the sections that follow.

In both cases, the preparation process does not require you to shut down any of your Managed Servers, so there is no effect on the availability of your applications.

Note:

If your domain includes Oracle Fusion Middleware products other than Oracle WebLogic Server (such as Oracle SOA Suite or Oracle WebCenter) and you have patched those products in your Oracle home, ensure that the patched versions are compatible with ZDT patching if you want to preserve currently active sessions during the rollout. For example, the applied patches should make only limited changes to session shape and should be backward-compatible with other Oracle Fusion Middleware products running in the domain.

Creating a Patched Oracle Home Archive Using OPatchAuto

This section describes how to create a clone of your existing Oracle home, patch it, and create an archive of the patched Oracle home using the OPatchAuto tool. Before you can apply any patches, you must first download them to your patch_home directory using OPatch.

To create a patched Oracle home archive, enter the following commands. You must run the opatchauto apply command from the ORACLE_HOME from which you want to create the image. This command creates a clone of your unpatched Oracle home, applies the patches in the specified patch_home directory, and then creates the patched archive.

cd ORACLE_HOME/OPatch/auto/core/bin
opatchauto.sh apply patch_home -create-image -image-location path -oop

The following table describes the parameters in the opatchauto apply command:

Parameter Description

patch_home

The OPatch $PATCH_HOME directory where the patches you want to apply are stored.

-create-image

Indicates that you want to create an image of the Oracle home directory. The image will include the patches in patch_home.

-image-location path

Specify the full path and file name of the image JAR file to create. For example:

-image-location /u01/images/OH-patch1.jar

-oop

Indicates that this is an out-of-place patching archive.
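
For example, a complete invocation of the command shown above might look like the following; the Oracle home, patch directory, and image path are placeholders for your environment.

    cd /u01/oraclehomes/wls1221/OPatch/auto/core/bin
    ./opatchauto.sh apply /u01/patches -create-image -image-location /u01/images/OH-patch1.jar -oop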

Distributing the Patched Archive to Each Node Using OPatchAuto

After you create a patched archive, use OPatchAuto to distribute the archive to each node that will be included in the Oracle home patching workflow.

To distribute the archive, use the following commands:

cd ORACLE_HOME/OPatch/auto/core/bin
opatchauto.sh apply -plan wls-zdt-push-image -image-location path 
-wls-zdt-host adminserver:port -wls-zdt-target target 
-wls-zdt-remote-image path -wallet path -walletPassword password

The following table describes the parameters in the opatchauto apply command:

Parameter Description

-plan

Indicates the type of operation to be performed by opatchauto apply. For distributing a patched Oracle home for ZDT, always specify wls-zdt-push-image as the value for this parameter.

-image-location path

Specify the full path and file name of the image JAR file to distribute. For example:

-image-location /u01/images/OH-patch1.jar

-wls-zdt-host adminserver:port

Specify the Administration Server hostname and port number for the domain to which you are distributing the archive. The archive will be distributed to this node.

-wls-zdt-target target

Specify a cluster or a comma-separated list of clusters that will be included in the rollout. The archive will be distributed to all nodes on which these clusters are configured.

-wls-zdt-remote-image path

The full path to the archive file you want to create on each node to be included in the ZDT rollout. This does not have to be the same file name as the original archive. For example:

-wls-zdt-remote-image /u01/images/rollout-OH-image.jar

-wallet path

The full path to a wallet directory that was created using configWallet.sh or configWallet.cmd. For example:

-wallet $HOME/wallet

-walletPassword password

The password for the specified wallet, if needed. For example:

-walletPassword mypassword
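
Putting these parameters together, a complete invocation might look like the following; the host name, port, cluster name, paths, wallet location, and password are placeholders.

    cd /u01/oraclehomes/wls1221/OPatch/auto/core/bin
    ./opatchauto.sh apply -plan wls-zdt-push-image \
      -image-location /u01/images/OH-patch1.jar \
      -wls-zdt-host adminhost:7001 \
      -wls-zdt-target Cluster1 \
      -wls-zdt-remote-image /u01/images/rollout-OH-image.jar \
      -wallet $HOME/wallet -walletPassword mypassword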

After distributing the patched archive, you are ready to create a workflow that includes patching your Oracle home. See Configuring and Monitoring Workflows.

Note:

If you want to also update your Java version or applications using the same patching workflow, then perform the preparation steps for those upgrades before you create the workflow.

Creating a Second Oracle Home

To manually create a patched Oracle home, you must first create a copy of your existing Oracle home by using the copyBinary and pasteBinary commands. When using these commands, keep in mind that the values you specify for options must not contain spaces. For example, on Windows, you cannot pass the following as a value to the -javaHome option:

C:\Program Files\jdk

Note:

Oracle recommends that you create and patch the second Oracle home on a nonproduction machine so that you can test the patches you apply, but this is not required. However, you must perform the following steps on the node where you will patch the new Oracle home. The Oracle home on that node must be identical to the Oracle home you are using for your production domain.

To create the second Oracle home to which you will apply patches:

  1. Change to the following directory, where ORACLE_HOME is the Oracle home that you want to patch.
    cd ORACLE_HOME/oracle_common/bin
    
  2. Execute the following command, where archive is the full path and file name of the archive file to create, and oracle_home is the full path to your existing Oracle home. Note that JAVA_HOME must be defined as the Java home that was used for your Oracle home installation:

    UNIX

    ./copyBinary.sh -javaHome $JAVA_HOME -archiveLoc archive -sourceOracleHomeLoc oracle_home
    

    Windows

    copyBinary.cmd -javaHome %JAVA_HOME% -archiveLoc archive -sourceOracleHomeLoc oracle_home
    

    For example, the following command creates the Oracle home archive wls1221.jar in network location /net/oraclehomes/ using the Oracle home located at /u01/oraclehomes/wls1221:

    ./copyBinary.sh -javaHome $JAVA_HOME -archiveLoc /net/oraclehomes/wls1221.jar -sourceOracleHomeLoc /u01/oraclehomes/wls1221
    
  3. Execute the following command to create the second Oracle home, where archive is the full path and file name of the archive file you created, and patch_home is the full path to the new Oracle home to which you will apply patches. Note that JAVA_HOME must be defined as the Java home that was used for your original Oracle home installation:

    UNIX

    ./pasteBinary.sh -javaHome $JAVA_HOME -archiveLoc archive -targetOracleHomeLoc patch_home
    

    Windows

    pasteBinary.cmd -javaHome %JAVA_HOME% -archiveLoc archive -targetOracleHomeLoc patch_home
    

    For example, the following command creates the Oracle home wls1221_patched in /u01/oraclehomes/ using the archive /net/oraclehomes/wls1221.jar:

    ./pasteBinary.sh -javaHome $JAVA_HOME -archiveLoc /net/oraclehomes/wls1221.jar -targetOracleHomeLoc /u01/oraclehomes/wls1221_patched

Applying Patches to the Second Oracle Home

To patch the second Oracle home, use the OPatch tool to apply individual patches, bundle patches, security patch updates, or patch set updates to the second, offline Oracle home. Prior to applying a particular patch or group of patches, ensure that all prerequisite patches have already been applied.

For detailed information about how to prepare for and patch an Oracle home using OPatch, see Patching Your Environment Using OPatch in Patching with OPatch.
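
For example, applying a single downloaded patch to the offline second Oracle home might look like the following; the Oracle home path and patch number are placeholders, and Patching with OPatch remains the authoritative reference for the exact options.

    export ORACLE_HOME=/u01/oraclehomes/wls1221_patched
    # Run OPatch from the directory containing the unzipped patch (placeholder patch number).
    cd /u01/patches/24601234
    $ORACLE_HOME/OPatch/opatch apply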

Creating an Archive and Distributing It to Each Node

After you have created the patched Oracle home, use the following steps to create an Oracle home archive and copy it to each node that will be involved in the rollout:

  1. Change to the following directory, where ORACLE_HOME is the patched Oracle home that you created.
    cd ORACLE_HOME/oracle_common/bin
    
  2. Execute the following command, where archive is the full path and file name of the archive file to create, and patched_home is the full path to the patched Oracle home you created. Note that JAVA_HOME must be defined as the Java home that was used for your current Oracle home installation.

    UNIX

    ./copyBinary.sh -javaHome $JAVA_HOME -archiveLoc archive -sourceOracleHomeLoc patched_home
    

    Windows

    copyBinary.cmd -javaHome %JAVA_HOME% -archiveLoc archive -sourceOracleHomeLoc patched_home
    

    For example, the following command creates the Oracle home archive wls_1221.11.jar in network location /net/oraclehomes/ using a patched Oracle home located at /u01/oraclehomes/wls1221_patched:

    ./copyBinary.sh -javaHome $JAVA_HOME -archiveLoc /net/oraclehomes/wls_1221.11.jar -sourceOracleHomeLoc /u01/oraclehomes/wls1221_patched
    
  3. On each node that will be included in the patching workflow, copy the archive file to the parent folder of the Oracle home that you want to replace. For example, if the archive is in network location /net/oraclehomes/wls_1221.11.jar and the Oracle home to be replaced is located in /u01/oraclehomes/wls1221:
    cp /net/oraclehomes/wls_1221.11.jar /u01/oraclehomes/
    

    If you are copying to a large number of nodes, you can use third-party software distribution applications to perform this step.
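
    For instance, a simple shell loop can push the archive to each node over scp; the node names below are placeholders, and any distribution mechanism that places the archive in the expected directory works equally well.

    for node in node1 node2 node3; do
        scp /net/oraclehomes/wls_1221.11.jar ${node}:/u01/oraclehomes/
    done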

After completing these steps, you are ready to create a workflow that includes patching your Oracle home. See Configuring and Monitoring Workflows.

Note:

If you want to also update your Java version or applications using the same patching workflow, then perform the preparation steps for those upgrades before you create the workflow.

Preparing to Upgrade to a New Java Version

Before upgrading to a new Java version, you must copy the new Java version to each node that you want to include in the upgrade, and certain conditions must be met before you install it.

Preparation for upgrading to a new version of Java does not require you to shut down Managed Servers, so there will be no interruption to application availability.

To upgrade to a new version of Java:

  1. Prior to installing the new Java version, ensure that Node Manager and the Managed Servers are running on all nodes on which you plan to install the new version. This prevents the Java installer from changing the existing Java home path. However, you do not need to have the Node Manager running on the node on which the Administration Server is running.
  2. On each node to be included in the upgrade, install the new Java version to the same path. The full path to the new Java version must be the same on each node for the upgrade to be successful.
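
For example, on each node you might unpack the same JDK build into an identical directory; the installer archive name and target path below are placeholders.

    mkdir -p /u01/jdks
    tar -xzf /net/installers/jdk-8u121-linux-x64.tar.gz -C /u01/jdks
    # Resulting Java home, identical on every node, for example: /u01/jdks/jdk1.8.0_121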

After copying the new Java version to each node, you are ready to create a workflow that includes upgrading to a new Java home. See Configuring and Monitoring Workflows.

Preparing to Update to New Application Versions

Before rolling out an application update, you must distribute the new application version to all affected nodes; where you place it depends on the staging mode that was used when the application was deployed. You must also create a JSON file that specifies the properties of the applications to be updated.

This section describes how to prepare for updating to new application versions using a ZDT workflow. It contains the following sections:

  • The Effects of Staging Modes

  • Creating an Application Update JSON File

The Effects of Staging Modes

Applications deployed across Managed Servers, partitions, or resource groups can be deployed using one of three staging modes: stage mode, no-stage mode, and external-stage mode. The selected mode indicates how the application will be distributed and kept up-to-date.

How you prepare for an application update workflow depends on the mode you used when you staged the application.

Staging Mode Required Preparation and Result

Stage

Place a copy of the updated application directory on the domain's Administration Server.

Result: The workflow will replace the original application directory on the Administration Server and WebLogic Server will copy it to each Managed Server.

No-stage

Place a copy of the updated application directory on each node that will be affected. This directory must be in the same location on each node.

Result: The workflow will update each node in turn by replacing the existing application directory with the updated application directory, and will move the original application directory to the specified backup location.

External stage

Place a copy of the updated application directory on each node that will be affected. This directory must be in the same location on each node.

Result: The workflow will detect that the application is an external-stage application, figure out the correct path for the stage directory for each Managed Server on the node, copy the updated application to that location, and move the original application to the specified backup location.

For detailed information about the various staging modes, see Staging Mode Descriptions and Best Practices in Deploying Applications to Oracle WebLogic Server.

Creating an Application Update JSON File

You can update one or more applications in your domain, partition, or resource groups with a single workflow. Application updates are accomplished by creating a JSON file that, for each application, defines:

  • The application name (applicationName)

  • The path and file name for the updated application archive (patchedLocation)

  • The path and file name to which you want to back up the original application archive (backupLocation).

  • The partition name. This is applicable only if you are updating an application deployed to a partition.

  • The resource group template name. This is applicable only if you are updating an application deployed to a resource group.

Note:

Oracle recommends that you avoid using backslashes (the Windows path separator) when specifying paths in the JSON file, because the paths are interpreted by Java and a backslash may be treated as the start of an escape sequence, producing a different character than intended.
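
On Windows, one way to avoid backslashes is to write the paths with forward slashes, which Java accepts in file paths. For example (placeholder paths):

    "patchedLocation":"C:/applications/MyAppv2.war",
    "backupLocation":"C:/applications/MyAppv1.war"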

When configuring the workflow either using WLST or the WebLogic Server Administration Console, you specify the file name of the JSON file to use for the update.

The following example shows the structure of a JSON file that is intended to update two applications, MyApp and AnotherApp, to a new version. You can use a single JSON file to update as many applications as necessary.

{"applications":[
{
"applicationName":"MyApp",
"patchedLocation":"/u01/applications/MyAppv2.war",
"backupLocation": "/u01/applications/MyAppv1.war"
},
{
"applicationName":"AnotherApp",
"patchedLocation":"/u01/applications/AnotherAppv2.war",
"backupLocation": "/u01/applications/AnotherAppv1.war"
}
]}

After copying the updated application to all required locations and creating the JSON file, you are ready to create a workflow that includes application updates. See Configuring and Monitoring Workflows.