20 Common Configuration and Management Tasks for an Enterprise Deployment

This section details the configuration and management tasks that you may need to perform on the enterprise deployment environment.

Configuration and Management Tasks for All Enterprise Deployments

These are some of the typical configuration and management tasks that you are likely to need to perform on an Oracle Fusion Middleware enterprise deployment.

Verifying Appropriate Sizing and Configuration for the WLSSchemaDataSource

WLSSchemaDataSource is the common datasource that is reserved for use by the FMW components for JMS JDBC stores, JTA JDBC stores, and leasing services. WLSSchemaDataSource is used to avoid contention in critical WLS infrastructure services and to guard against deadlocks.

To reduce the WLSSchemaDataSource connection usage, you can change the JMS JDBC and TLOG JDBC stores connection caching policy from Default to Minimal by using the respective connection caching policy settings. When you need to reduce connections in the back-end database system, Oracle recommends that you set the caching policy to Minimal. Avoid using the caching policy None because it can degrade performance. For detailed tuning advice about connections that are used by JDBC stores, see Configuring a JDBC Store Connection Caching Policy in Administering the WebLogic Persistent Store.

The default WLSSchemaDataSource connection pool size is 75 (the size doubles in the case of a GridLink DataSource). You can tune this size to a higher value depending on the size of the different FMW clusters and the candidates that are configured for migration. For example, consider a typical SOA EDG deployment with the default number of worker threads per store. If more than 25 JDBC stores or TLOG-in-DB instances or both can fail over to the same WebLogic Server instance, and the connection caching policy is not changed from Default to Minimal, connection contention issues could arise. In these cases, you must increase the default WLSSchemaDataSource pool size (maximum capacity): each JMS store uses a minimum of two connections, and the leasing and JTA services also compete for connections in the pool.
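If you prefer to script this change rather than use the Administration Console, the following sketch shows how the pool size could be raised with WLST. The administration URL, credentials, and target capacity are example values only; adjust them to your environment.

cat > /tmp/tune_wlsschema_ds.py <<'EOF'
connect('weblogic_soa', 'password', 't3://ADMINVHN:7001')
edit()
startEdit()
# The pool parameters live under the JDBC system resource of the same name.
cd('/JDBCSystemResources/WLSSchemaDataSource/JDBCResource/WLSSchemaDataSource/JDBCConnectionPoolParams/WLSSchemaDataSource')
print 'Current MaxCapacity:', cmo.getMaxCapacity()
cmo.setMaxCapacity(150)   # example value; size it to your store count
save()
activate()
disconnect()
EOF
ORACLE_HOME/oracle_common/common/bin/wlst.sh /tmp/tune_wlsschema_ds.py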

Verifying Manual Failover of the Administration Server

If a host computer fails, you can fail over the Administration Server to another host. The following sections detail the steps to verify failover and failback of the Administration Server between SOAHOST1 and SOAHOST2.

Assumptions:

  • The Administration Server is configured to listen on ADMINVHN, and not on localhost or on any other host’s address.

    For more information about the ADMINVHN virtual IP address, see Reserving the Required IP Addresses for an Enterprise Deployment.

  • These procedures assume that the Administration Server domain home (ASERVER_HOME) has been mounted on both host computers. This ensures that the Administration Server domain configuration files and the persistent stores are saved on the shared storage device.

  • The Administration Server is failed over from SOAHOST1 to SOAHOST2, and the two nodes have these IPs:

    • SOAHOST1: 100.200.140.165

    • SOAHOST2: 100.200.140.205

    • ADMINVHN: 100.200.140.206. This is the virtual IP address where the Administration Server listens. It is assigned to a virtual sub-interface (for example, eth0:1) so that it can be brought up on either SOAHOST1 or SOAHOST2.

  • Oracle WebLogic Server and Oracle Fusion Middleware components have been installed on SOAHOST2 as described in the specific configuration chapters in this guide.

    Specifically, both host computers use the exact same path to reference the binary files in the Oracle home.

The following topics provide details on how to perform a test of the Administration Server failover procedure.
Failing Over the Administration Server When Using a Per Host Node Manager

The following procedure shows how to fail over the Administration Server to a different node (SOAHOST2). Note that even after failover, the Administration Server will still use the same Oracle WebLogic Server machine (which is a logical machine, not a physical machine).

This procedure assumes that you have configured a per host Node Manager for the enterprise topology, as described in Creating a Per Host Node Manager Configuration. For more information, see About the Node Manager Configuration in a Typical Enterprise Deployment.

To fail over the Administration Server to a different host:

  1. Stop the Administration Server on SOAHOST1.

  2. Stop the Node Manager on SOAHOST1.

    You can use the script stopNodeManager.sh that was created in NM_HOME.

  3. Migrate the ADMINVHN virtual IP address to the second host:

    1. Run the following command as root on SOAHOST1 (where X is the current interface used by ADMINVHN) to check the virtual IP address and its CIDR netmask:

      ip addr show dev ethX

      For example:

      ip addr show dev eth0
    2. Run the following command as root on SOAHOST1 (where X is the current interface used by ADMINVHN):

      ip addr del ADMINVHN/CIDR dev ethX
      

      For example:

      ip addr del 100.200.140.206/24 dev eth0
    3. Run the following command as root on SOAHOST2:

      ip addr add ADMINVHN/CIDR dev ethX label ethX:Y 
      

      For example:

      ip addr add 100.200.140.206/24 dev eth0 label eth0:1 

      Note:

      Ensure that the CIDR and interface to be used match the available network configuration in SOAHOST2.

  4. Update the routing tables by using arping, for example:

    arping -b -A -c 3 -I eth0 100.200.140.206
  5. From SOAHOST1, change directory to the Node Manager home directory:

    cd NM_HOME
  6. Edit the nodemanager.domains file and remove the reference to ASERVER_HOME.

    The resulting entry in the SOAHOST1 nodemanager.domains file should appear as follows:

    soaedg_domain=MSERVER_HOME;
  7. From SOAHOST2, change directory to the Node Manager home directory:

    cd NM_HOME
  8. Edit the nodemanager.domains file and add the reference to ASERVER_HOME.

    The resulting entry in the SOAHOST2 nodemanager.domains file should appear as follows:

    soaedg_domain=MSERVER_HOME;ASERVER_HOME
  9. Start the Node Manager on SOAHOST1 and restart the Node Manager on SOAHOST2.

  10. Start the Administration Server on SOAHOST2.

  11. Test that you can access the Administration Server on SOAHOST2 as follows:

    1. Ensure that you can access the Oracle WebLogic Server Administration Console using the following URL:

      http://ADMINVHN:7001/console
      
    2. Check that you can access and verify the status of components in Fusion Middleware Control using the following URL:

      http://ADMINVHN:7001/em
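You can also script this verification with curl from any host that resolves ADMINVHN. This is a sketch that uses the example host name and port from this chapter; an HTTP 200 response (or a 302 redirect to the login page) indicates that the server is responding.

for uri in console em; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "http://ADMINVHN:7001/${uri}")
  echo "/${uri} -> HTTP ${code}"
done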
Validating Access to the Administration Server on SOAHOST2 Through Oracle HTTP Server

If you have configured the web tier to access the Administration Server, it is important to verify that you can still access the Administration Server through the standard administration URLs after you perform a manual failover.

From the load balancer, access the following URLs to ensure that you can access the Administration Server when it is running on SOAHOST2:

  • http://admin.example.com/console

    This URL should display the WebLogic Server Administration Console.

  • http://admin.example.com/em

    This URL should display Oracle Enterprise Manager Fusion Middleware Control.

Failing the Administration Server Back to SOAHOST1 When Using a Per Host Node Manager

After you have tested a manual Administration Server failover, and after you have validated that you can access the administration URLs after the failover, you can then migrate the Administration Server back to its original host.

This procedure assumes that you have configured a per host Node Manager for the enterprise topology, as described in Creating a Per Host Node Manager Configuration. For more information, see About the Node Manager Configuration in a Typical Enterprise Deployment.

  1. Stop the Administration Server on SOAHOST2.
  2. Stop the Node Manager on SOAHOST2.
  3. Run the following command as root on SOAHOST2:
    ip addr del ADMINVHN/CIDR dev ethX
    
    For example:
    ip addr del 100.200.140.206/24 dev eth0
  4. Run the following command as root on SOAHOST1:
    ip addr add ADMINVHN/CIDR dev ethX label ethX:Y  
    For example:
    ip addr add 100.200.140.206/24 dev eth0 label eth0:1

    Note:

    Ensure that the CIDR and interface to be used match the available network configuration in SOAHOST1.

  5. Update the routing tables by using arping on SOAHOST1:
    arping -b -A -c 3 -I eth0 100.200.140.206
    
  6. From SOAHOST2, change directory to the Node Manager home directory:
    cd NM_HOME
  7. Edit the nodemanager.domains file and remove the reference to ASERVER_HOME.
  8. From SOAHOST1, change directory to the Node Manager home directory:
    cd NM_HOME
  9. Edit the nodemanager.domains file and add the reference to ASERVER_HOME.
  10. Start the Node Manager on SOAHOST2 and restart the Node Manager on SOAHOST1.
  11. Start the Administration Server on SOAHOST1.
  12. Test that you can access the Oracle WebLogic Server Administration Console by using the following URL:
    http://ADMINVHN:7001/console
    
  13. Check that you can access and verify the status of components in the Oracle Enterprise Manager by using the following URL:
    http://ADMINVHN:7001/em

Configuring Listen Addresses in Dynamic Cluster Server Templates

The default configuration for dynamic managed servers in dynamic clusters is to listen on all available network interfaces. In most cases, this is undesirable.

In preparation for disaster recovery, Oracle recommends that you use host name aliases that can be mapped to different IPs in different data centers (for example, SOAHOST1, SOAHOST2) to set each server's listen address to a specific network interface. With dynamic clusters, individual servers cannot be configured directly; there is only one listen address configuration in the cluster's server template. To set the listen address properly for each dynamic server in the cluster, you must use a calculated macro.

WebLogic Server provides the ${id} macro, which corresponds to the index number of the dynamic server in the cluster. The index starts at one (1) and increments up to the current managed server count for the cluster. This sequentially numbered server ID macro can be combined with the recommended host naming pattern so that the listen address calculated for each dynamic server points to a specific network interface.

This approach is recommended for enterprise deployment environments where there is only one managed server per host per cluster and the cluster is expected to scale out horizontally only.

To configure the server-template Listen Address using the ${id} macro:

  1. Verify that the required SOAHOSTn entries in /etc/hosts are configured to the appropriate IP addresses for the intended machines.
    For example:
    10.229.188.205 host1.example.com host1 SOAHOST1
    10.229.188.206 host2.example.com host2 SOAHOST2
    10.229.188.207 host3.example.com host3 WEBHOST1
    10.229.188.208 host4.example.com host4 WEBHOST2

    For information about the requirements for name resolution, see Verifying IP Addresses and Host Names in DNS or Hosts File.

  2. Browse to the Oracle WebLogic Server Administration console, and sign in with your administrative credentials.
    http://adminvhn:7001/console
  3. Lock & Edit the domain.
  4. Navigate to Clusters > Server Templates, and select the server template to be modified.
  5. Set the Listen Address value to the appropriate abstracted listener host name, entering the ${id} variable exactly as written. (A scripted WLST alternative is shown after this procedure.)

    For example:

    wsmpm-server-template Listen Address = SOAHOST${id}

    Figure 20-1 Image Showing the Listen Address Value Set to SOAHOST${id}
  6. Click Save.
  7. Repeat from step 4 if additional server templates need to be modified.
  8. Click Activate Changes.
  9. Restart the servers that use the template for the changes to take effect.
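If you prefer to script the template change, the following WLST sketch performs an equivalent edit. The credentials, URL, and template name are the example values used in this chapter; note that the heredoc delimiter is quoted so that the shell does not expand ${id}.

cat > /tmp/set_template_listen.py <<'EOF'
connect('weblogic_soa', 'password', 't3://ADMINVHN:7001')
edit()
startEdit()
# The ${id} macro is stored literally and resolved by each dynamic server.
cd('/ServerTemplates/wsmpm-server-template')
cmo.setListenAddress('SOAHOST${id}')
save()
activate()
disconnect()
EOF
ORACLE_HOME/oracle_common/common/bin/wlst.sh /tmp/set_template_listen.py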
Configuring Server Template Listen Addresses Using the Machine Name

If your host naming or aliasing convention does not follow a sequential numbering pattern that starts at 1 and correlates to the internal ID number of each dynamic server, or if you want the cluster to scale up with multiple managed servers per host per cluster, then an alternative configuration may be preferred. In this case, you can use the ${machineName} macro value to specify the listen address instead of using a host name prefix and server ID macro pattern. The ${machineName} macro uses the display name of the machine that is dynamically assigned to the server, and requires that the machine name be resolvable to an IP address.

To configure the server-template Listen Address with the ${machineName} macro:

  1. Browse to the Oracle WebLogic Server Administration console, and sign in with your administrative credentials:

    http://adminvhn:7001/console
  2. Navigate to Machines to review the list of machine names.

  3. Validate that these names are resolvable as network addresses by using OS commands such as ping (see the sketch after this procedure).

  4. Lock & Edit the domain.

  5. Navigate to Clusters and then Server Templates, and select the server-template that you want to modify.

  6. Set the Listen Address value to ${machineName} as written here. Do not substitute any other value.

  7. Click Save.

  8. Repeat from step 5 if you want to modify additional server-templates.

  9. Click Activate Changes.

  10. Restart the servers that use the modified server-template for the changes to take effect.
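As a quick validation of the machine names (step 3), a loop such as the following can confirm that each name resolves and responds. The host aliases below are the example names used in this guide.

for host in SOAHOST1 SOAHOST2 WEBHOST1 WEBHOST2; do
  getent hosts "${host}" || echo "${host}: no DNS or /etc/hosts entry"
  ping -c 1 -W 2 "${host}" > /dev/null && echo "${host}: reachable"
done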

Modifying the Upload and Stage Directories to an Absolute Path in an Enterprise Deployment

After you configure the domain and unpack it to the Managed Server domain directories on all the hosts, verify and update the upload and stage directories for the Managed Servers in the new clusters. Also update the upload directory for the Administration Server to use the same absolute path instead of a relative one; otherwise, deployment issues can occur. If you implement dynamic clusters, verify and update the configuration of the server template assigned to each newly added cluster; otherwise, verify and update every statically defined Managed Server for the newly added clusters.

This step is necessary to avoid potential issues when you perform remote deployments and for deployments that require the stage mode.

To update the directory paths for the Deployment Stage and Upload locations, complete the following steps:

  1. Log in to the Oracle WebLogic Server Administration Console.

  2. In the left navigation tree, expand Domain, and then Environment.

  3. Click Lock & Edit.

  4. Navigate to and edit the appropriate objects for your cluster type.

    1. For Static Clusters, navigate to Servers and click the name of the Managed Server you want to edit.

    2. For Dynamic Clusters, navigate to Clusters > Server Templates, and click on the name of the server template to be edited.

  5. For each new Managed Server or Server Template to be edited:
    1. Click the Configuration tab, and then click the Deployment tab.

    2. Verify that the Staging Directory Name is set to the following:

      MSERVER_HOME/servers/server_or_template_name/stage
      

      Replace MSERVER_HOME with the full path for the MSERVER_HOME directory.

      If you use static clusters, update with the correct name of the Managed Server that you are editing.

      If you use dynamic clusters, leave the template name intact. For example: /u02/oracle/config/domains/soaedg_domain/servers/XYZ-server-template/stage

    3. Update the Upload Directory Name to the following value:

      ASERVER_HOME/servers/AdminServer/upload
      

      Replace ASERVER_HOME with the directory path for the ASERVER_HOME directory.

    4. Click Save.

    5. Return to the Summary of Servers or Summary of Server Templates screen as applicable.

  6. Repeat the previous steps for each of the new managed servers or dynamic cluster server templates.

  7. Navigate to and update the Upload Directory Name value for the AdminServer:

    1. Navigate to Servers, and select the AdminServer.

    2. Click the Configuration tab, and then click the Deployment Tab.

    3. Verify that the Staging Directory Name is set to the following absolute path:

      ASERVER_HOME/servers/AdminServer/stage

    4. Update the Upload Directory Name to the following absolute path:

      ASERVER_HOME/servers/AdminServer/upload

      Replace ASERVER_HOME with the directory path for the ASERVER_HOME directory.

    5. Click Save.

  8. When you have modified all the appropriate objects, click Activate Changes.

  9. Restart all Managed Servers for the changes to take effect. If you are following the EDG steps in order and do not plan to make any deployments immediately, you can wait until the next restart.

Note:

If you continue directly with further domain configurations, a restart to enable the stage and upload directory changes is not strictly necessary at this time.
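For a scripted alternative, a WLST sketch such as the following applies the same absolute paths. The domain, server, and template names are the example values from this chapter; substitute your own MSERVER_HOME and ASERVER_HOME paths.

cat > /tmp/set_stage_upload.py <<'EOF'
connect('weblogic_soa', 'password', 't3://ADMINVHN:7001')
edit()
startEdit()
# Dynamic cluster example; for static clusters, cd to /Servers/<server_name>.
cd('/ServerTemplates/XYZ-server-template')
cmo.setStagingDirectoryName('/u02/oracle/config/domains/soaedg_domain/servers/XYZ-server-template/stage')
cmo.setUploadDirectoryName('/u01/oracle/config/domains/soaedg_domain/servers/AdminServer/upload')
# The Administration Server itself:
cd('/Servers/AdminServer')
cmo.setStagingDirectoryName('/u01/oracle/config/domains/soaedg_domain/servers/AdminServer/stage')
cmo.setUploadDirectoryName('/u01/oracle/config/domains/soaedg_domain/servers/AdminServer/upload')
save()
activate()
disconnect()
EOF
ORACLE_HOME/oracle_common/common/bin/wlst.sh /tmp/set_stage_upload.py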

Setting the Front End Host and Port for a WebLogic Cluster

You must set the front-end HTTP host and port for the Oracle WebLogic Server cluster that hosts the Oracle SOA Suite servers. You can specify these values in the Configuration Wizard while you are specifying the properties of the domain. However, when you add a SOA Cluster as part of an Oracle SOA Suite enterprise deployment, Oracle recommends that you perform this task after you verify the SOA Managed Servers.

To set the frontend host and port from the WebLogic Server Administration Console:

  1. Log in to the WebLogic Server Administration Console.
  2. In the Change Center, click Lock & Edit.
  3. In the Domain Structure panel, expand Environment, and click Clusters.
  4. On the Clusters page, click the cluster that you want to modify, and then select the HTTP tab.
  5. Use the information in Table 20-1 to add the required frontend hostname and port to each cluster.

    Table 20-1 The Frontend Hostname and Port for Each Cluster

    Name             Cluster Address    Frontend Host              Frontend HTTP Port    Frontend HTTPS Port
    SOA_Cluster      Leave it empty     soa.example.com            80                    443
    WSM-PM_Cluster   Leave it empty     soainternal.example.com    80                    Leave it empty
    OSB_Cluster      Leave it empty     osb.example.com            80                    443
    ESS_Cluster      Leave it empty     soa.example.com            80                    443
    BAM_Cluster      Leave it empty     soa.example.com            80                    443
    MFT_Cluster      Leave it empty     mft.example.com            80                    443

  6. Click Save.
  7. Click Activate Changes.
  8. Restart the managed servers of the cluster.
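If you want to script these settings, the following WLST sketch sets the front-end values for one cluster; repeat the cd and set calls for each row of Table 20-1. The credentials and names are the example values used in this chapter.

cat > /tmp/set_frontend.py <<'EOF'
connect('weblogic_soa', 'password', 't3://ADMINVHN:7001')
edit()
startEdit()
cd('/Clusters/SOA_Cluster')
cmo.setFrontendHost('soa.example.com')
cmo.setFrontendHTTPPort(80)
cmo.setFrontendHTTPSPort(443)
save()
activate()
disconnect()
EOF
ORACLE_HOME/oracle_common/common/bin/wlst.sh /tmp/set_frontend.py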

Enabling SSL Communication Between the Middle Tier and the Hardware Load Balancer

It is important to understand how to enable SSL communication between the middle tier and the hardware load balancer.

Note:

The following steps are applicable if the hardware load balancer is configured with SSL and the front-end address of the system has been secured accordingly.

When is SSL Communication Between the Middle Tier and Load Balancer Necessary?

In an enterprise deployment, there are scenarios where the software running on the middle tier must access the front-end SSL address of the hardware load balancer. In these scenarios, an appropriate SSL handshake must take place between the load balancer and the invoking servers. This handshake is not possible unless the Administration Server and Managed Servers on the middle tier are started by using the appropriate SSL configuration.

For example, in an Oracle SOA Suite enterprise deployment, the following examples apply:

  • Oracle Business Process Management and SOA Composer require access to the front-end load balancer URL when they attempt to retrieve role and security information through specific web service invocations. Some of these invocations require not only that the LBR certificate is added to the WebLogic Server trust store but also that the appropriate identity key certificates are created for the SOA servers' listen addresses.

  • Oracle Service Bus performs invocations to endpoints exposed in the Load Balancer SSL virtual servers.

  • Oracle SOA Suite composite applications and services often generate callbacks that need to perform invocations by using the SSL address exposed in the load balancer.

  • Finally, when you test a SOA Web services endpoint in Oracle Enterprise Manager Fusion Middleware Control, the Fusion Middleware Control software that is running on the Administration Server must access the load balancer front-end to validate the endpoint.

Generating Self-Signed Certificates Using the utils.CertGen Utility

This section describes the procedure to create self-signed certificates on SOAHOST1. Create certificates for every app-tier host by using the network name or alias of each host.

The directory where keystores and trust keystores are maintained must be on shared storage that is accessible from all nodes so that when the servers fail over (manually or with server migration), the appropriate certificates can be accessed from the failover node. Oracle recommends that you use central or shared stores for the certificates used for different purposes (for example, SSL set up for HTTP invocations). See the information on filesystem specifications for the KEYSTORE_HOME location provided in About the Recommended Directory Structure for an Enterprise Deployment.

For information on using trust CA certificates instead, see the information about configuring identity and trust in Administering Security for Oracle WebLogic Server.

About Passwords

The passwords used in this guide are used only as examples. Use secure passwords in a production environment. For example, use passwords that include both uppercase and lowercase characters as well as numbers.

To create self-signed certificates:

  1. Temporarily, set up your environment by running the following script:
    . WL_HOME/server/bin/setWLSEnv.sh

    Note that there is a dot (.) and a space ( ) preceding the script name in order to source the shell script in the current shell.

  2. Verify that the CLASSPATH environment variable is set:
    echo $CLASSPATH
    
  3. Verify that the shared configuration directory folder has been created and mounted to shared storage correctly, as described in Preparing the File System for an Enterprise Deployment.

    For example, use the following command to verify that the shared configuration directory is available to each host:

    df -h | grep -B1 SHARED_CONFIG_DIR

    Replace SHARED_CONFIG_DIR with the actual path to your shared configuration directory.

    You can also do a listing of the directory to ensure that it is available to the host:

    ls -al SHARED_CONFIG_DIR
  4. Create the keystore home folder structure if it does not already exist.

    For example:

    cd SHARED_CONFIG_DIR
    mkdir keystores
    chown oracle:oinstall keystores
    chmod 750 keystores
    export KEYSTORE_HOME=SHARED_CONFIG_DIR/keystores
  5. Change directory to the keystore home:
    cd KEYSTORE_HOME
  6. Run the utils.CertGen tool to create the certificates for hostnames or aliases used by the managed servers and node managers, one per host.

    Note:

    You must run the utils.CertGen tool to create certificates for all the other hosts that run the Managed Servers.

    Syntax:

    java utils.CertGen key_passphrase cert_file_name key_file_name [export | domestic] [hostname]

    Examples:

    java utils.CertGen password ADMINVHN.example.com_cert \
          ADMINVHN.example.com_key domestic ADMINVHN.example.com
    
    java utils.CertGen password SOAHOST1.example.com_cert \
          SOAHOST1.example.com_key domestic SOAHOST1.example.com
    
  7. Repeat the above step for all the remaining hosts used in the system.
  8. For dynamic clusters, in addition to the certificates for ADMINVHN and each host, you should also generate a certificate that matches a wildcard URL.

    For example:

    java utils.CertGen password WILDCARD.example.com_cert \ 
    WILDCARD.example.com_key domestic \*.example.com 
    
Creating an Identity Keystore Using the utils.ImportPrivateKey Utility

This section describes how to create an Identity Keystore on SOAHOST1.example.com.

In previous sections, you created certificates and keys that reside on shared storage. In this section, the certificates and private keys created earlier for all hosts and ADMINVHN are imported into a new Identity Store. Make sure that you use a different alias for each certificate and key pair that you import.

Note:

The Identity Store is created (if none exists) when you import a certificate and the corresponding key into the Identity Store by using the utils.ImportPrivateKey utility.

  1. Import the certificate and private key for ADMINVHN and SOAHOST1 into the Identity Store. Make sure that you use a different alias for each certificate and key pair that you import.

    Syntax:

    java utils.ImportPrivateKey
          -certfile cert_file
          -keyfile private_key_file
          [-keyfilepass private_key_password]
          -keystore keystore
          -storepass storepass
          [-storetype storetype]
          -alias alias 
          [-keypass keypass]

    Note:

    The default keystore_type is jks.

    Examples:

    java utils.ImportPrivateKey \ 
         -certfile KEYSTORE_HOME/ADMINVHN.example.com_cert.pem \
         -keyfile KEYSTORE_HOME/ADMINVHN.example.com_key.pem \
         -keyfilepass password \
         -keystore appIdentityKeyStore.jks \ 
         -storepass password \
         -alias ADMINVHN \
         -keypass password
    
    java utils.ImportPrivateKey \ 
         -certfile KEYSTORE_HOME/SOAHOST1.example.com_cert.pem \
         -keyfile KEYSTORE_HOME/SOAHOST1.example.com_key.pem \
         -keyfilepass password \
         -keystore appIdentityKeyStore.jks \
         -storepass password \ 
         -alias SOAHOST1 \
         -keypass password
  2. Repeat the utils.ImportPrivateKey command for each of the remaining host-specific certificate and key pairs (for example, for SOAHOST2).

    Note:

    Make sure to use a unique alias for each certificate and key pair imported.

  3. For dynamic clusters, import the wildcard certificate and private key pair by using the custom alias WILDCARD.

    Example:

    ${JAVA_HOME}/bin/java utils.ImportPrivateKey \ 
    -certfile ${KEYSTORE_HOME}/WILDCARD.example.com_cert.pem \ 
    -keyfile ${KEYSTORE_HOME}/WILDCARD.example.com_key.pem \ 
    -keyfilepass password \ 
    -keystore ${KEYSTORE_HOME}/appIdentityKeyStore.jks \ 
    -storepass password \
    -alias WILDCARD \ 
    -keypass password
Creating a Trust Keystore Using the Keytool Utility

To create the Trust Keystore on SOAHOST1.example.com:

  1. Copy the standard Java keystore to create the new trust keystore, since it already contains most of the root CA certificates needed.

    Oracle does not recommend modifying the standard Java trust key store directly. Copy the standard Java keystore CA certificates located under the WL_HOME/server/lib directory to the same directory as the certificates. For example:

    cp WL_HOME/server/lib/cacerts KEYSTORE_HOME/appTrustKeyStore.jks
    
  2. Use the keytool utility to change the default password.

    The default password for the standard Java keystore is changeit. Oracle recommends that you always change the default password, as follows:

    keytool -storepasswd -new NewPassword -keystore TrustKeyStore -storepass Original_Password
    

    For example:

    keytool -storepasswd -new password -keystore appTrustKeyStore.jks -storepass changeit
    
  3. Import the CA certificate into the appTrustKeyStore by using the keytool utility.

    The CA certificate, CertGenCA.der, is used to sign all certificates generated by the utils.CertGen tool and is located in the WL_HOME/server/lib directory.

    Use the following syntax to import the certificate:

    keytool -import -v -noprompt -trustcacerts -alias AliasName -file CAFileLocation -keystore KeyStoreLocation -storepass KeyStore_Password
    

    For example:

    keytool -import -v -noprompt -trustcacerts -alias clientCACert -file WL_HOME/server/lib/CertGenCA.der -keystore appTrustKeyStore.jks -storepass password
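To confirm the import, you can list the new entry; the alias and password here are the example values from the preceding step. An entry type of trustedCertEntry indicates a successful import.

keytool -list -keystore appTrustKeyStore.jks -storepass password -alias clientCACert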
    
Importing the Load Balancer Certificate into the Truststore

For the SSL handshake to succeed, the load balancer's certificate must be added to the WebLogic Server trust store. To add a load balancer's certificate:

  1. Access the site on SSL with a browser (this adds the server's certificate to the browser's repository).
  2. Obtain the certificate from the load balancer. You can obtain the load balancer certificate by using a browser such as Firefox: from the browser's certificate management tool, export the certificate to a file on the server's file system (with a file name such as soa.example.com.crt). Alternatively, you can obtain the certificate by using the openssl command. The syntax of the command is as follows:
    openssl s_client -connect LOADBALANCER -showcerts </dev/null 2>/dev/null|openssl x509 -outform PEM > KEYSTORE_HOME/LOADBALANCER.pem

    For example:

    openssl s_client -connect soa.example.com:443 -showcerts </dev/null 2>/dev/null|openssl x509 -outform PEM > KEYSTORE_HOME/soa.example.com.crt

  3. Use the keytool to import the load balancer's certificate into the truststore:

    For example:

    keytool -import -file /oracle/certificates/soa.example.com.crt -v -keystore appTrustKeyStore.jks -alias aliasSOA -storepass password
    keytool -import -file /oracle/certificates/osb.example.com.crt -v -keystore appTrustKeyStore.jks -alias aliasOSB -storepass password
    
  4. Repeat this procedure for each SSL load balancer virtual host in your deployment.

Note:

The need to add the load balancer certificate to the WLS server truststore applies only to self-signed certificates. If the load balancer certificate is issued by a third-party CA, you have to import the public certificates of the root and the intermediate CAs into the truststore.

Adding the Updated Trust Store to the Oracle WebLogic Server Start Scripts
The setDomainEnv.sh script is provided by Oracle WebLogic Server and is used to start the Administration Server and the Managed Servers in the domain. To ensure that each server accesses the updated trust store, edit the setDomainEnv.sh script in each of the domain home directories in the enterprise deployment.
  1. Log in to SOAHOST1 and open the following file with a text editor:
    ASERVER_HOME/bin/setDomainEnv.sh
    
  2. Replace the reference to the existing DemoTrust.jks entry with the following entry:

    Note:

    All the values for EXTRA_JAVA_PROPERTIES must be on one line in the file, followed by the export command on a new line.

    EXTRA_JAVA_PROPERTIES="-Djavax.net.ssl.trustStore=/u01/oracle/config/keystores/appTrustKeyStore.jks ${EXTRA_JAVA_PROPERTIES} ......." 
    export EXTRA_JAVA_PROPERTIES
  3. Make the same change to the setDomainEnv.sh file in the MSERVER_HOME/bin directory on SOAHOST1 and SOAHOST2.

    Note:

    The setDomainEnv.sh file cannot be copied between ASERVER_HOME/bin and MSERVER_HOME/bin because there are differences in the files for these two domain home locations. The MSERVER_HOME/bin/setDomainEnv.sh file can be copied between hosts.

    WebLogic Server automatically overwrites the setDomainEnv.sh file after each domain extension. Some patches may also replace this file. Verify your customizations to setDomainEnv.sh after each of these types of maintenance operations.
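A quick way to confirm that the custom trust store reference survived a domain extension or patch is to search both domain homes for it, for example:

grep -n "appTrustKeyStore.jks" ASERVER_HOME/bin/setDomainEnv.sh MSERVER_HOME/bin/setDomainEnv.sh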

Configuring OTD Node Manager to Use the Custom Keystores
The Node Managers of Oracle Traffic Director instances in the web tier use SSL to communicate with the Administration Server in the application tier. The WEBHOSTs are located in the DMZ, and Oracle recommends that you use the SSL protocol in the DMZ for security reasons. To configure these Node Managers to use the custom keystores, add the following lines to the end of the nodemanager.properties file located in the WEB_DOMAIN_HOME/nodemanager directory on each web tier node:
KeyStores=CustomIdentityAndCustomTrust
CustomIdentityKeyStoreFileName=Identity KeyStore
CustomIdentityKeyStorePassPhrase=Identity KeyStore Passwd
CustomIdentityAlias=Identity Key Store Alias
CustomIdentityPrivateKeyPassPhrase=Private Key used when creating Certificate

Ensure that you use the correct value for CustomIdentityAlias for the Node Manager's listen address. In WEBHOST1, use the alias WEBHOST1 and in WEBHOST2, use the alias WEBHOST2, as described in Creating an Identity Keystore Using the utils.ImportPrivateKey Utility.

Example for WEBHOST1:

KeyStores=CustomIdentityAndCustomTrust
CustomIdentityKeyStoreFileName=WEB_KEYSTORE_HOME/appIdentityKeyStore.jks
CustomIdentityKeyStorePassPhrase=password
CustomIdentityAlias=WEBHOST1
CustomIdentityPrivateKeyPassPhrase=password

In this example, WEB_KEYSTORE_HOME is a local folder on the WEBHOSTs, as described in Table 7-3. Ensure that you copy appIdentityKeyStore.jks from the application tier to the WEB_KEYSTORE_HOME location of each web tier node. For more security, you can use an appIdentityKeyStore.jks that includes only the web host keys.

You must restart the Node Manager for the changes to take effect. The passphrase entries in the nodemanager.properties file are encrypted when you start Node Manager, as described in Starting the Node Manager on WEBHOST1 and WEBHOST2. For security reasons, minimize the time that the entries in the nodemanager.properties file are left unencrypted. After you edit the file, restart Node Manager immediately so that the entries are encrypted.

Configuring WebLogic Servers to Use the Custom Keystores
Configure the WebLogic Servers to use the custom keystores by using the Oracle WebLogic Server Administration Console. Complete this procedure for the Administration Server and the Managed Servers that require access to the front-end LBR on SSL.

To configure the identity and trust keystores:

  1. Log in to the Administration Console, and click Lock & Edit.
  2. Navigate based on the Managed Server type:
    For configured Managed Servers:
    1. In the Domain Structure pane, expand Environment and select Servers.
    2. Click the name of the server for which you want to configure the identity and trust keystores.
    For dynamic Managed Servers:
    1. In the Domain Structure pane, expand Environment, then Clusters, and then select Server Templates.
    2. Click the name of the appropriate server template for which you want to configure the identity and trust keystores.
  3. Select Configuration, and then Keystores.
  4. In the Keystores field, click Change, select Custom Identity and Custom Trust as the method for storing and managing private keys, digital certificate pairs, and trusted CA certificates, and then click Save.
  5. In the Identity section, define attributes for the identity keystore.
    • Custom Identity Keystore: Enter the fully qualified path to the identity keystore:

      KEYSTORE_HOME/appIdentityKeyStore.jks 
      
    • Custom Identity Keystore Type: Leave this field blank; it defaults to JKS.

    • Custom Identity Keystore Passphrase: Enter the password (Keystore_Password) that you provided in Creating an Identity Keystore Using the utils.ImportPrivateKey Utility.

      This attribute may be optional or required depending on the type of keystore. All keystores require the passphrase in order to write to the keystore. However, some keystores do not require the passphrase to read from the keystore. WebLogic Server reads only from the keystore, so whether or not you define this property depends on the requirements of the keystore.

  6. In the Trust section, define properties for the trust keystore:
    • Custom Trust Keystore: Enter the fully qualified path to the trust keystore:

      KEYSTORE_HOME/appTrustKeyStore.jks 
      
    • Custom Trust Keystore Type: Leave this field blank; it defaults to JKS.

    • Custom Trust Keystore Passphrase: The password you provided as the New_Password value in Creating a Trust Keystore Using the Keytool Utility.

      As mentioned in the previous step, this attribute may be optional or required depending on the type of keystore.

  7. Click Save.
  8. To activate these changes, in the Change Center of the Administration Console, click Activate Changes.
  9. Click Lock & Edit.
  10. Select Configuration, then SSL.
  11. Update the SSL Identity details as follows:
    1. In the Private Key Alias field, enter the alias value for the appropriate private key.
      • With a Static Cluster: Enter the alias that corresponds to the host the managed server listens on.

      • With a Dynamic Cluster: Enter the wildcard alias so that the certificate matches the listen address of any dynamic managed server.

    2. In the Private Key Passphrase and the Confirm Private Key Passphrase fields, enter the password for the keystore that you created in Creating an Identity Keystore Using the utils.ImportPrivateKey Utility.
  12. Click Save.
  13. If you are updating a server template SSL configuration for a dynamic cluster, perform these additional tasks:
    1. Click the Advanced link at the bottom of the SSL view.
    2. Select the Custom Hostname Verifier option from the HostName Verification menu.
    3. Set the Custom Hostname Verifier value to: weblogic.security.utils.SSLWLSWildcardHostnameVerifier.
    4. Click Save.
  14. Click Activate Changes in the Administration Console's Change Center to make the changes take effect.
  15. Restart the Administration Server.
  16. Restart the Managed Servers where the keystore has been updated.

    Note:

    The fact that servers can be restarted by using the Administration Console and Node Manager is a good verification that the communication between Node Manager, Administration Server, and the managed servers is correct.

  17. If you use Oracle Traffic Director, restart OTD instances where the node manager keystore was updated.
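The console steps above can also be scripted. The following WLST sketch configures the custom stores and the private key alias for the Administration Server; the paths, alias, and passwords are the example values from this chapter, and the same pattern applies to managed servers (/Servers/<name>) and server templates (/ServerTemplates/<name>).

cat > /tmp/set_keystores.py <<'EOF'
connect('weblogic_soa', 'password', 't3://ADMINVHN:7001')
edit()
startEdit()
cd('/Servers/AdminServer')
cmo.setKeyStores('CustomIdentityAndCustomTrust')
cmo.setCustomIdentityKeyStoreFileName('KEYSTORE_HOME/appIdentityKeyStore.jks')
cmo.setCustomIdentityKeyStorePassPhrase('password')
cmo.setCustomTrustKeyStoreFileName('KEYSTORE_HOME/appTrustKeyStore.jks')
cmo.setCustomTrustKeyStorePassPhrase('password')
# The SSL child MBean holds the private key alias for this server.
cd('/Servers/AdminServer/SSL/AdminServer')
cmo.setServerPrivateKeyAlias('ADMINVHN')
cmo.setServerPrivateKeyPassPhrase('password')
save()
activate()
disconnect()
EOF
ORACLE_HOME/oracle_common/common/bin/wlst.sh /tmp/set_keystores.py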
Testing Composites Using SSL Endpoints

After SSL has been enabled, composite endpoints can be verified over SSL from Oracle Enterprise Manager Fusion Middleware Control. To test an SSL endpoint, follow these steps:

  1. Enter the following URL into a browser to display the Fusion Middleware Control login screen:
    http://ADMINVHN:7001/em
    

    In this example:

    • Replace ADMINVHN with the host name that is assigned to the ADMINVHN Virtual IP address in Identifying and Obtaining Software Distributions for an Enterprise Deployment.

    • Port 7001 is the typical port used for the Administration Server console and Fusion Middleware Control. However, you should use the actual URL that was displayed at the end of the Configuration Wizard session when you created the domain.

  2. Log in to Fusion Middleware Control by using the administrative user credentials.
  3. From the tree on the left, expand SOA, then click soa-infra (WLS_SOA1).
  4. Click the Deployed Composites navigation tab link.
  5. Click the composite that you want to test to open its dashboard view.
  6. Click the Test button and select one of the services from drop-down.
  7. In the WSDL or WADL address, replace the base URL (http://SOAHOST1:8001) with the front-end load balancer base URL (https://soa.example.com:443), keeping the URI resource path and query string intact.
  8. Click Parse WSDL or WADL.
  9. Verify that the Endpoint URL shown is SSL, and no errors are returned.
  10. Test the composite. If the response is as expected for the web service, the SSL communication between the Administration Server and the Load Balancer has been configured properly.

Configuring Roles for Administration of an Enterprise Deployment

To manage each product effectively within a single enterprise deployment domain, you must understand which products require specific administration roles or groups, and how to add a product-specific administration role to the enterprise deployment administration group.

Each enterprise deployment consists of multiple products. Some of the products have specific administration users, roles, or groups that are used to control administration access to each product.

However, for an enterprise deployment, which consists of multiple products, you can use a single LDAP-based authorization provider and a single administration user and group to control access to all aspects of the deployment. See Creating a New LDAP Authenticator and Provisioning a New Enterprise Deployment Administrator User and Group.

To be sure that you can manage each product effectively within the single enterprise deployment domain, you must understand which products require specific administration roles or groups, you must know how to add any specific product administration roles to the single, common enterprise deployment administration group, and if necessary, you must know how to add the enterprise deployment administration user to any required product-specific administration groups.

For more information, see the following topics.

Summary of Products with Specific Administration Roles

The following table lists the Fusion Middleware products that have specific administration roles, which must be added to the enterprise deployment administration group (SOA Administrators), which you defined in the LDAP Authorization Provider for the enterprise deployment.

Use the information in the following table and the instructions in Adding a Product-Specific Administration Role to the Enterprise Deployment Administration Group to add the required administration roles to the enterprise deployment Administration group.

Product                        Application Stripe     Administration Role to be Assigned

Oracle Web Services Manager    wsm-pm                 policy.updater
SOA Infrastructure             soa-infra              SOAAdmin
Oracle Service Bus             Service_Bus_Console    MiddlewareAdministrator
Enterprise Scheduler Service   ESSAPP                 ESSAdmin
Oracle B2B                     b2bui                  B2BAdmin
Oracle MFT                     mftapp                 MFTAdmin
Oracle MFT                     mftes                  MFTESAdmin

Summary of Oracle SOA Suite Products with Specific Administration Groups

Table 20-2 lists the Oracle SOA Suite products that need to use specific administration groups.

For each of these components, the common enterprise deployment administration user must be added to the product-specific administration group; otherwise, you cannot manage the product resources by using the enterprise deployment administration user that you created in Provisioning an Enterprise Deployment Administration User and Group.

Use the information in Table 20-2 and the instructions in Adding the Enterprise Deployment Administration User to a Product-Specific Administration Group to add the required administration roles to the enterprise deployment Administration group.

Table 20-2 Oracle SOA Suite Products with a Product-Specific Administration Group

Product                               Product-Specific Administration Group

Oracle Business Activity Monitoring   BAMAdministrator
Oracle Business Process Management    Administrators
Oracle Service Bus Integration        IntegrationAdministrators
MFT                                   OracleSystemGroup

Note:

MFT requires a specific user, namely OracleSystemUser, to be added to the central LDAP. This user must belong to the OracleSystemGroup group. You must add both the user name and the user group to the central LDAP to ensure that MFT job creation and deletion work properly.
Adding a Product-Specific Administration Role to the Enterprise Deployment Administration Group

For products that require a product-specific administration role, use the following procedure to add the role to the enterprise deployment administration group:

  1. Sign in to Fusion Middleware Control by using the administrator's account (for example, weblogic_soa), and navigate to the home page for your application.

    These are the credentials that you created when you initially configured the domain and created the Oracle WebLogic Server Administration user name (typically, weblogic_soa) and password.

  2. From the WebLogic Domain menu, select Security, and then Application Roles.
  3. For each product-specific application role, select the corresponding application stripe from the Application Stripe drop-down menu.
  4. Click the Search Application Roles icon to display all the application roles available in the domain.
  5. Select the row for the application role that you are adding to the enterprise deployment administration group.
  6. Click the Edit icon to edit the role.
  7. Click the Add icon on the Edit Application Role page.
  8. In the Add Principal dialog box, select Group from the Type drop-down menu.
  9. Search for the enterprise deployment administrators group, by entering the group name (for example, SOA Administrators) in the Principal Name Starts With field and clicking the right arrow to start the search.
  10. Select the administrator group in the search results and click OK.
  11. Click OK on the Edit Application Role page.
Adding the Enterprise Deployment Administration User to a Product-Specific Administration Group

For products with a product-specific administration group, use the following procedure to add the enterprise deployment administration user (weblogic_soa) to the group. This allows you to manage the product by using the enterprise deployment administrator user:

  1. Create an LDIF file called product_admin_group.ldif, similar to the following:
    dn: cn=product-specific_group_name, cn=groups, dc=us, dc=oracle, dc=com
    displayname: product-specific_group_display_name
    objectclass: top
    objectclass: groupOfUniqueNames
    objectclass: orclGroup
    uniquemember: cn=weblogic_soa,cn=users,dc=us,dc=oracle,dc=com
    cn: product-specific_group_name
    description: Administrators Group for the Domain
    

    In this example, replace product-specific_group_name with the actual name of the product administrator group, as shown in Table 20-2.

    Replace product-specific_group_display_name with the display name for the group that appears in the management console for the LDAP server and in the Oracle WebLogic Server Administration Console.

  2. Use the ldif file to add the enterprise deployment administrator user to the product-specific administration group.

    For Oracle Unified Directory:

    OUD_INSTANCE_HOME/bin/ldapmodify -a 
                                     -D "cn=Administrator" 
                                     -X 
                                     -p 1389 
                                     -f product_admin_group.ldif
    

    For Oracle Internet Directory:

    OID_ORACLE_HOME/bin/ldapadd -h oid.example.com 
                                -p 389 
                                -D cn="orcladmin" 
                                -w <password> 
                                -c 
                                -v 
                                -f product_admin_group.ldif
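To verify the membership afterwards, an ldapsearch such as the following (shown here with the Oracle Internet Directory client; the flags and base DN are example values) should return the product-specific group:

OID_ORACLE_HOME/bin/ldapsearch -h oid.example.com -p 389 \
    -D "cn=orcladmin" -w <password> \
    -b "cn=groups,dc=us,dc=oracle,dc=com" -s sub \
    "(uniquemember=cn=weblogic_soa,cn=users,dc=us,dc=oracle,dc=com)" cn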

Using Persistent Stores for TLOGs and JMS in an Enterprise Deployment

The persistent store provides a built-in, high-performance storage solution for WebLogic Server subsystems and services that require persistence.

For example, the JMS subsystem stores persistent JMS messages and durable subscribers, and the JTA Transaction Log (TLOG) stores information about the committed transactions that are coordinated by the server but may not have been completed. The persistent store supports persistence to a file-based store or to a JDBC-enabled database. Persistent stores’ high availability is provided by server or service migration. Server or service migration requires that all members of a WebLogic cluster have access to the same transaction and JMS persistent stores (regardless of whether the persistent store is file-based or database-based).

For an enterprise deployment, Oracle recommends using JDBC persistent stores for transaction logs (TLOGs) and JMS.

This section analyzes the benefits of using JDBC versus File persistent stores and explains the procedure for configuring the persistent stores in a supported database. If you want to use File persistent stores instead of JDBC stores, the procedure for configuring them is also explained in this section.

Products and Components that use JMS Persistence Stores and TLOGs

You can determine which installed FMW products and components use persistent stores through the WebLogic Server Administration Console, in the Domain Structure navigation under DomainName > Services > Persistent Stores. The list indicates the name of each store, the store type (FileStore or JDBC), and the target of the store. The stores listed that pertain to MDS are outside the scope of this chapter and should not be considered.
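If you prefer a scripted inventory, the following WLST sketch prints the same information; the credentials and URL are the example values used in this chapter.

cat > /tmp/list_stores.py <<'EOF'
connect('weblogic_soa', 'password', 't3://ADMINVHN:7001')
cd('/')
# Domain-level collections of configured persistent stores.
for store in cmo.getFileStores():
    print 'FileStore:', store.getName(), [t.getName() for t in (store.getTargets() or [])]
for store in cmo.getJDBCStores():
    print 'JDBCStore:', store.getName(), [t.getName() for t in (store.getTargets() or [])]
disconnect()
EOF
ORACLE_HOME/oracle_common/common/bin/wlst.sh /tmp/list_stores.py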

These components (as applicable) use stores by default:
Component/Product   JMS Stores   TLOG Stores

B2B                 Yes          Yes
BAM                 Yes          Yes
BPM                 Yes          Yes
ESS                 No           No
HC                  Yes          Yes
Insight             Yes          Yes
MFT                 Yes          Yes
OSB                 Yes          Yes
SOA                 Yes          Yes
WSM                 No           No

JDBC Persistent Stores vs. File Persistent Stores

Oracle Fusion Middleware supports both database-based and file-based persistent stores for Oracle WebLogic Server transaction logs (TLOGs) and JMS. Before you decide on a persistent store strategy for your environment, consider the advantages and disadvantages of each approach.

Note:

Regardless of which storage method you choose, Oracle recommends that for transaction integrity and consistency, you use the same type of store for both JMS and TLOGs.

About JDBC Persistent Stores for JMS and TLOGs

When you store your TLOGs and JMS data in an Oracle database, you can take advantage of the replication and high availability features of the database. For example, you can use Oracle Data Guard to simplify cross-site synchronization. This is especially important if you are deploying Oracle Fusion Middleware in a disaster recovery configuration.

Storing TLOGs and JMS data in a database also means that you do not have to identify a specific shared storage location for this data. Note, however, that shared storage is still required for other aspects of an enterprise deployment. For example, it is necessary for Administration Server configuration (to support Administration Server failover), for deployment plans, and for adapter artifacts, such as the File and FTP Adapter control and processed files.

If you are storing TLOGs and JMS stores on a shared storage device, then you can protect this data by using the appropriate replication and backup strategy to guarantee zero data loss, and you potentially realize better system performance. However, the file system protection is always inferior to the protection provided by an Oracle Database.

For more information about the potential performance impact of using a database-based TLOGs and JMS store, see Performance Considerations for TLOGs and JMS Persistent Stores.

Performance Considerations for TLOGs and JMS Persistent Stores

One of the primary considerations when you select a storage method for Transaction Logs and JMS persistent stores is the potential impact on performance. This topic provides some guidelines and details to help you determine the performance impact of using JDBC persistent stores for TLOGs and JMS.

Performance Impact of Transaction Logs Versus JMS Stores

For transaction logs, the impact of using a JDBC store is relatively small, because the logs are very transient in nature. Typically, the effect is minimal when compared to other database operations in the system.

On the other hand, JMS database stores can have a higher impact on performance if the application is JMS intensive. For example, the impact of switching from a file-based to database-based persistent store is very low when you use the SOA Fusion Order Demo (a sample application used to test Oracle SOA Suite environments), because the JMS database operations are masked by many other SOA database invocations that are much heavier.

Factors that Affect Performance

There are multiple factors that can affect the performance of a system when it is using JMS DB stores for custom destinations. The main ones are:

  • Custom destinations involved and their type

  • Payloads being persisted

  • Concurrency on the SOA system (producers and consumers for the destinations)

Depending on the effect of each one of the above, different settings can be configured in the following areas to improve performance:

  • The data type used for the JMS table (RAW versus LOBs)

  • Segment definition for the JMS table (partitions at index and table level)

Impact of JMS Topics

If your system uses topics intensively, then as concurrency increases, the performance degradation with an Oracle RAC database increases more than it does for queues. In tests conducted by Oracle with JMS, the average performance degradation for different payload sizes and levels of concurrency was less than 30% for queues; for topics, the impact was more than 40%. Consider the importance of these destinations from the recovery perspective when deciding whether to use database stores.

Impact of Data Type and Payload Size

When you choose to use the RAW or SecureFiles LOB data type for the payloads, consider the size of the payload being persisted. For example, when payload sizes range between 100b and 20k, then the amount of database time required by SecureFiles LOB is slightly higher than for the RAW data type.

More specifically, when the payload size reaches around 4k, SecureFiles tends to require more database time. This is because 4k is where writes move out-of-row. At around 20k payload size, SecureFiles data starts to be more efficient. When payload sizes increase to more than 20k, the database time becomes worse for payloads set to the RAW data type.

One additional advantage of SecureFiles is that the database time incurred stabilizes as payloads increase beyond 500k. In other words, at that point it is not relevant (for SecureFiles) whether the store is writing 500k, 1MB, or 2MB payloads, because the write is asynchronous, and the contention is the same in all cases.

The effect of concurrency (producers and consumers) on the queue’s throughput is similar for both RAW and SecureFiles until the payload sizes reach 50K. For small payloads, the effect on varying concurrency is practically the same, with slightly better scalability for RAW. Scalability is better for SecureFiles when the payloads are above 50k.

Impact of Concurrency, Worker Threads, and Database Partitioning

Concurrency and the worker threads defined for the persistent store can cause contention in the RAC database at the index and global cache level. Using a reverse index when enabling multiple worker threads in one single server, or using multiple Oracle WebLogic Server clusters, can improve performance. However, if the Oracle Database partitioning option is available, then global hash partitioning for indexes should be used instead. This reduces contention on the index and the global cache buffer waits, which in turn improves the response time of the application. Partitioning works well in all cases, some of which do not see significant improvements with a reverse index.

Using JDBC Persistent Stores for TLOGs and JMS in an Enterprise Deployment

This section explains the guidelines to use JDBC persistent stores for transaction logs (TLOGs) and JMS. It also explains the procedures to configure the persistent stores in a supported database.

Recommendations for TLOGs and JMS Datasource Consolidation

To accomplish data source consolidation and connection usage reduction, use a single connection pool for both JMS and TLOGs persistent stores.

Oracle recommends that you reuse the WLSSchemaDataSource as-is for TLOGs and JMS persistent stores under non-high workloads, and consider increasing the WLSSchemaDataSource pool size. Reusing the datasource forces the same schema and tablespaces to be used, so the PREFIX_WLS_RUNTIME schema in the PREFIX_WLS tablespace is used for both TLOGs and JMS messages.

High stress (related to high JMS activity, for example) and contention in the datasource can cause stability and performance problems. For example:
  • High contention in the DataSource can cause persistent stores to fail if no connections are available in the pool to persist JMS messages.

  • High Contention in the DataSource can cause issues in transactions if no connections are available in the pool to update transaction logs.

In these cases, use a separate datasource for TLOGs and a separate datasource for the JMS stores. You can still reuse the PREFIX_WLS_RUNTIME schema, but configure separate custom datasources that point to the same schema to resolve the contention issue.

Roadmap for Configuring a JDBC Persistent Store for TLOGs

The following topics describe how to configure a database-based persistent store for transaction logs.

  1. Creating a User and Tablespace for TLOGs

  2. Creating GridLink Data Sources for TLOGs and JMS Stores

  3. Assigning the TLOGs JDBC Store to the Managed Servers

Note:

Steps 1 and 2 are optional. To accomplish data source consolidation and connection usage reduction, you can reuse the PREFIX_WLS tablespace and the WLSSchemaDataSource as described in Recommendations for TLOGs and JMS Datasource Consolidation.

Roadmap for Configuring a JDBC Persistent Store for JMS

The following topics describe how to configure a database-based persistent store for JMS.

  1. Creating a User and Tablespace for JMS

  2. Creating GridLink Data Sources for TLOGs and JMS Stores

  3. Creating a JDBC JMS Store

  4. Assigning the JMS JDBC store to the JMS Servers

  5. Creating the Required Tables for the JMS JDBC Store

Note:

Steps 1 and 2 are optional. To accomplish data source consolidation and connection usage reduction, you can reuse the PREFIX_WLS tablespace and the WLSSchemaDataSource as described in Recommendations for TLOGs and JMS Datasource Consolidation.

Creating a User and Tablespace for TLOGs

Before you can create a database-based persistent store for transaction logs, you must create a user and tablespace in a supported database.

  1. Create a tablespace called tlogs.

    For example, log in to SQL*Plus as the sysdba user and run the following command:

    SQL> create tablespace tlogs
            logging datafile 'path-to-data-file-or-+asmvolume'
            size 32m autoextend on next 32m maxsize 2048m extent management local;
    
  2. Create a user named TLOGS and assign to it the tlogs tablespace.

    For example:

    SQL> create user TLOGS identified by password;
    
    SQL> grant create table to TLOGS;
    
    SQL> grant create session to TLOGS;
    
    SQL> alter user TLOGS default tablespace tlogs;
    
    SQL> alter user TLOGS quota unlimited on tlogs;

Creating a User and Tablespace for JMS

Before you can create a database-based persistent store for JMS, you must create a user and tablespace in a supported database.

  1. Create a tablespace called jms.

    For example, log in to SQL*Plus as the sysdba user and run the following command:

    SQL> create tablespace jms
            logging datafile 'path-to-data-file-or-+asmvolume'
            size 32m autoextend on next 32m maxsize 2048m extent management local;
    
  2. Create a user named JMS and assign to it the jms tablespace.

    For example:

    SQL> create user JMS identified by password;
    
    SQL> grant create table to JMS;
    
    SQL> grant create session to JMS;
    
    SQL> alter user JMS default tablespace jms;
    
    SQL> alter user JMS quota unlimited on jms;
    
Creating GridLink Data Sources for TLOGs and JMS Stores

Before you can configure database-based persistent stores for JMS and TLOGs, you must create two data sources: one for the TLOGs persistent store and one for the JMS persistent store.

For an enterprise deployment, you should use GridLink data sources for your TLOGs and JMS stores. To create a GridLink data source:

  1. Sign in to the Oracle WebLogic Server Administration Console.
  2. If you have not already done so, in the Change Center, click Lock & Edit.
  3. In the Domain Structure tree, expand Services, then select Data Sources.
  4. On the Summary of Data Sources page, click New and select GridLink Data Source, and enter the following:
    • Enter a logical name for the data source in the Name field.

      For the TLOGs store, enter TLOG; for the JMS store, enter JMS.

    • Enter a JNDI name for the data source.

      For the TLOGs store, enter jdbc/tlogs; for the JMS store, enter jdbc/jms.

    • For the Database Driver, select Oracle's Driver (Thin) for GridLink Connections Versions: Any.

    • Click Next.

  5. In the Transaction Options page, clear the Supports Global Transactions check box, and then click Next.
  6. In the GridLink Data Source Connection Properties Options screen, select Enter individual listener information and click Next.
  7. Enter the following connection properties:
    • Service Name: Enter the service name of the database in lowercase characters. For a GridLink data source, you must enter the Oracle RAC service name. For example:

      soaedg.example.com

    • Host Name and Port: Enter the SCAN address and port for the RAC database, separated by a colon. For example:

      db-scan.example.com:1521
      

      Click Add to add the host name and port to the list box below the field.

      You can identify the SCAN address by querying the remote_listener parameter in the database:

      SQL> show parameter remote_listener;

      NAME                 TYPE        VALUE
      -------------------- ----------- --------------------
      remote_listener      string      db-scan.example.com

      Note:

      For Oracle Database 11g Release 1 (11.1), use the virtual IP and port of each database instance listener, for example:

      dbhost1-vip.example.com (port 1521) 

      and

      dbhost2-vip.example.com (1521)
      
    • Database User Name: Enter the following:

      For the TLOGs store, enter TLOGS; for the JMS persistent store, enter JMS.

    • Password: Enter the password that you used when you created the user in the database.

    • Confirm Password: Enter the password again and click Next.

  8. On the Test GridLink Database Connection page, review the connection parameters and click Test All Listeners.

    Here is an example of a successful connection notification:

    Connection test for jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=db-scan.example.com)
    (PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=soaedg.example.com))) succeeded.
    

    Click Next.

  9. In the ONS Client Configuration page, do the following:
    • Select FAN Enabled to subscribe to and process Oracle FAN events.

    • Enter the SCAN address for the RAC database and the ONS remote port as reported by the database (see the following example), and click Add:

      [orcl@db-scan1 ~]$ srvctl config nodeapps -s
       
      ONS exists: Local port 6100, remote port 6200, EM port 2016
      
    • Click Next.

    Note:

    For Oracle Database 11g Release 1 (11.1), use the hostname and port of each database's ONS service, for example:

    custdbhost1.example.com (port 6200)
    

    and

    custdbhost2.example.com (6200)
    
  10. On the Test ONS Client Configuration page, review the connection parameters and click Test All ONS Nodes.

    Here is an example of a successful connection notification:

    Connection test for db-scan.example.com:6200 succeeded.

    Click Next.

  11. In the Select Targets page, select the cluster that is using the persistent store, and then select All Servers in the cluster.
  12. Click Finish.
  13. To activate these changes, in the Change Center of the Administration Console, click Activate Changes.
  14. Repeat step 4 through step 13 to create the GridLink data source for the JMS stores.
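If you prefer to verify or adjust the GridLink-specific attributes with a script, the following is a minimal WLST (online) sketch equivalent to the FAN and ONS settings above for the TLOG datasource. The datasource name and the ONS address are assumptions taken from the examples in this section.

    # Minimal WLST (online) sketch: verify or set GridLink attributes on the
    # TLOG datasource created above. Names and addresses are placeholders.
    connect('weblogic_admin_user', 'weblogic_admin_password', 't3://ADMINVHN:7001')
    edit()
    startEdit()

    cd('/JDBCSystemResources/TLOG/JDBCResource/TLOG/JDBCOracleParams/TLOG')
    cmo.setFanEnabled(true)                          # subscribe to Oracle FAN events
    cmo.setOnsNodeList('db-scan.example.com:6200')   # may be left empty with auto-ONS on 12c

    save()
    activate()
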
Assigning the TLOGs JDBC Store to the Managed Servers

If you are going to accomplish data source consolidation, you will reuse the <PREFIX>_WLS tablespace and the WLSSchemaDataSource for the TLOG persistent store. Otherwise, ensure that you have created the tablespace and user in the database, and that you have created the datasource, before you assign the TLOG store to each of the required Managed Servers.

  1. Log in to the Oracle WebLogic Server Administration Console.
  2. In the Change Center, click Lock and Edit.
  3. To configure the TLOG of a Managed Server, in the Domain Structure tree:
    1. For static clusters: expand Environment, then Servers, and then click the name of the Managed Server.
    2. For dynamic clusters: expand Environment, then Clusters, and then Server Templates, and click the name of the server template.
  4. Select the Configuration > Services tab.
  5. Under Transaction Log Store, select JDBC from the Type menu.
  6. From the Data Source menu, select WLSSchemaDataSource to accomplish data source consolidation. The <PREFIX>_WLS tablespace will be used for TLOGs.
  7. In the Prefix Name field, specify a prefix name to form a unique JDBC TLOG store name for each configured JDBC TLOG store.
  8. Click Save.
  9. Repeat steps 3 to 7 for each additional managed server or server template.
  10. To activate these changes, in the Change Center of the Administration Console, click Activate Changes.
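
The same assignment can be scripted. The following is a minimal WLST (online) sketch for one managed server; the server name and the prefix are placeholders, and for dynamic clusters you would navigate to the server template under /ServerTemplates instead.

    # Minimal WLST (online) sketch: point a server's TLOG store at the
    # WLSSchemaDataSource. 'WLS_SOA1' and the prefix are placeholders.
    connect('weblogic_admin_user', 'weblogic_admin_password', 't3://ADMINVHN:7001')
    edit()
    startEdit()

    cd('/Servers/WLS_SOA1')
    tlogStore = cmo.getTransactionLogJDBCStore()
    tlogStore.setEnabled(true)
    tlogStore.setDataSource(getMBean('/JDBCSystemResources/WLSSchemaDataSource'))
    tlogStore.setPrefixName('TLOG_WLS_SOA1_')   # must yield a unique store name

    save()
    activate()
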
Creating a JDBC JMS Store

After you create the JMS persistent store user and table space in the database, and after you create the data source for the JMS persistent store, you can then use the Administration Console to create the store.

  1. Log in to the Oracle WebLogic Server Administration Console.
  2. If you have not already done so, in the Change Center, click Lock & Edit.
  3. In the Domain Structure tree, expand Services, then select Persistent Store.
  4. Click New, and then click JDBC Store.
  5. Enter a persistent store name that easily relates it to the JMS server that uses it.

    Note:

    The length of the prefix name must not exceed 30 characters for DB versions that are below 12.2.x.x.x.

  6. To accomplish data source consolidation, select WLSSchemaDataSource. The <PREFIX>_WLS tablespace will be used for the JMS persistent stores.
  7. Target the store to the entity that hosts the JTA services.

    In the static cluster case, with a server that uses service migration, the entity is the migratable target to which the JMS server belongs.

    In the case of a dynamic cluster, target the store to the cluster itself.

    For more information about using dynamic clusters, see Simplified JMS Configuration and High Availability Enhancements in Administering JMS Resources for Oracle WebLogic Server.

  8. Repeat steps 3 through 7 for each additional JMS server in the cluster.
  9. To activate these changes, in the Change Center of the Administration Console, click Activate Changes.
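
For reference, a minimal WLST (online) sketch of the same creation and targeting follows. The store name, prefix, and target are placeholders: with static clusters and service migration the target is the migratable target, and with dynamic clusters it is the cluster itself.

    # Minimal WLST (online) sketch: create a JDBC store and target it.
    # Names below are placeholders.
    connect('weblogic_admin_user', 'weblogic_admin_password', 't3://ADMINVHN:7001')
    edit()
    startEdit()

    store = cmo.createJDBCStore('SOAJMSJDBCStore_1')
    store.setDataSource(getMBean('/JDBCSystemResources/WLSSchemaDataSource'))
    store.setPrefixName('SOAJMS1_')   # keep under 30 characters for pre-12.2 databases
    store.addTarget(getMBean('/MigratableTargets/WLS_SOA1 (migratable)'))

    save()
    activate()
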
Assigning the JMS JDBC store to the JMS Servers

After you create the JMS tablespace and user in the database, create the JMS datasource, and create the JDBC store, then you can assign the JMS persistence store to each of the required JMS Servers.

To assign the JMS persistence store to the JMS servers:
  1. Log in to the Oracle WebLogic Server Administration Console.
  2. In the Change Center, click Lock and Edit.
  3. In the Domain Structure tree, expand Services, then Messaging, and then JMS Servers.
  4. Click the name of the JMS Server that you want to use the persistent store.
  5. From the Persistent Store menu, select the JMS persistent store you created earlier.
  6. Click Save.
  7. Repeat steps 3 to 6 for each of the additional JMS Servers in the cluster.
  8. To activate these changes, in the Change Center of the Administration Console, click Activate Changes.
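
A minimal WLST (online) sketch of the same assignment, with placeholder names:

    # Attach the JDBC store to its JMS server ('SOAJMSServer_1' and the store
    # name are placeholders).
    connect('weblogic_admin_user', 'weblogic_admin_password', 't3://ADMINVHN:7001')
    edit()
    startEdit()

    cd('/JMSServers/SOAJMSServer_1')
    cmo.setPersistentStore(getMBean('/JDBCStores/SOAJMSJDBCStore_1'))

    save()
    activate()
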
Creating the Required Tables for the JMS JDBC Store

The final step in using a JDBC persistent store for JMS is to create the required JDBC store tables. Perform this task before you restart the Managed Servers in the domain.

  1. Review the information in Performance Considerations for TLOGs and JMS Persistent Stores, and decide which table features are appropriate for your environment.

    Three Oracle database schema definitions are provided in this release. The basic definition includes the RAW data type without any partitioning for indexes. The second uses the blob data type, and the third uses the blob data type with SecureFiles.

  2. Create a well-named, domain-specific folder structure for the custom DDL file on shared storage. The ORACLE_RUNTIME shared volume is recommended because it is available to all servers.

    Example:

    mkdir -p ORACLE_RUNTIME/domain_name/ddl
  3. Create a jms_custom.ddl file in the new shared ddl folder, based on your requirements analysis.
    For example, to implement an optimized schema definition that uses both secure files and hash partitioning, create the jms_custom.ddl file with the following content:
    CREATE TABLE $TABLE (
      id     int  not null,
      type   int  not null,
      handle int  not null,
      record blob not null,
    PRIMARY KEY (ID) USING INDEX GLOBAL PARTITION BY HASH (ID) PARTITIONS 8)
    LOB (RECORD) STORE AS SECUREFILE (ENABLE STORAGE IN ROW);

    This example can be compared to the default schema definition for JMS stores, where the RAW data type is used without any partitions for indexes.

    Note that the number of partitions should be a power of two. This ensures that each partition is of similar size. The recommended number of partitions varies depending on the expected table or index growth. You should have your database administrator (DBA) analyze the growth of the tables over time and adjust the tables accordingly. See Partitioning Concepts in Database VLDB and Partitioning Guide.

  4. Use the Administration Console to edit the existing JDBC Store you created earlier; create the table that is used for the JMS data:
    1. Log in to the Oracle WebLogic Server Administration Console.
    2. In the Change Center, click Lock and Edit.
    3. In the Domain Structure tree, expand Services, then Persistent Stores.
    4. Click the persistent store you created earlier.
    5. Under the Advanced options, enter ORACLE_RUNTIME/domain_name/ddl/jms_custom.ddl in the Create Table from DDL File field.
    6. Click Save.
    7. To activate these changes, in the Change Center of the Administration Console, click Activate Changes.
  5. Restart the Managed Servers.
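
Step 4 can also be scripted. The following is a minimal WLST (online) fragment that sets the DDL file on the store; the store name and path are placeholders and assume the folder created in step 2.

    # Minimal WLST (online) sketch: point the JDBC store at the custom DDL file
    # so that the table is created with the optimized definition.
    connect('weblogic_admin_user', 'weblogic_admin_password', 't3://ADMINVHN:7001')
    edit()
    startEdit()

    cd('/JDBCStores/SOAJMSJDBCStore_1')
    cmo.setCreateTableDDLFile('ORACLE_RUNTIME/domain_name/ddl/jms_custom.ddl')

    save()
    activate()
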
Using File Persistent Stores for TLOGs and JMS in an Enterprise Deployment

This section explains the procedures to configure TLOGs and JMS file persistent stores in a shared folder.

Configuring TLOGs File Persistent Store in a Shared Folder

Oracle WebLogic Server uses the transaction logs to recover from system crashes or network failures.

Each Managed Server uses a transaction log that stores information about committed transactions that are coordinated by the server and that may not have been completed.

Oracle WebLogic Server uses this transaction log for recovery from system crashes or network failures. To leverage the migration capability of the Transaction Recovery Service for the Managed Servers within a cluster, store the transaction log in a location accessible to each Managed Server and its backup server.

Note:

To enable migration of the Transaction Recovery Service, specify a location on a persistent storage solution that is available to other servers in the cluster. All Managed Servers in the cluster must be able to access this directory. This directory must also exist before you restart the server.

The recommended location is a dual-ported SCSI disk or a Storage Area Network (SAN). Note that it is important to set the appropriate replication and backup mechanisms at the storage level to guarantee protection in case of a storage failure.

This information applies to file-based transaction logs. You can also configure a database-based persistent store for transaction logs. See Using JDBC Persistent Stores for TLOGs and JMS in an Enterprise Deployment.

Configuring the default store directory should be performed for both static and dynamic clusters, although the procedure differs slightly in each case. Modify each server in a static cluster, or the server template in a dynamic cluster.

Configuring TLOGs File Persistent Store in a Shared Folder with a Static Cluster

To set the location for the default persistence stores for each managed server in a static cluster, complete the following steps:

  1. Log into the Oracle WebLogic Server Administration console:

    ADMINVHN:7001/console
    

    Note:

    If you have already configured the web tier, use http://admin.example.com/console.

  2. In the Change Center section, click Lock & Edit.

  3. For each of the Managed Servers in the cluster:

    1. In the Domain Structure window, expand the Environment node, and then click the Servers node.

      The Summary of Servers page appears.

    2. Click the name of the server (represented as a hyperlink) in the Name column of the table.

      The settings page for the selected server appears and defaults to the Configuration tab.

    3. On the Configuration tab, click the Services tab.

    4. In the Default Store section of the page, enter the path to the folder where the default persistent store stores its data files.

      For the enterprise deployment, use the ORACLE_RUNTIME directory location. This subdirectory serves as the central, shared location for transaction logs for the cluster. See File System and Directory Variables Used in This Guide.

      For example:

      ORACLE_RUNTIME/domain_name/cluster_name/tlogs
      

      In this example, replace ORACLE_RUNTIME with the value of the variable for your environment. Replace domain_name with the name you assigned to the domain. Replace cluster_name with the name of the cluster you just created.

    5. Click Save.

  4. Complete step 3 for all servers in the SOA_Cluster.

    Note:

    If you are configuring a default persistence store for ESS, BAM, or OSB, use ESS_Cluster, BAM_Cluster, and OSB_Cluster respectively, instead of SOA_Cluster.

  5. Click Activate Changes.

Note:

You validate the location and the creation of the transaction logs later in the configuration procedure.

Configuring TLOGs File Persistent Store in a Shared Folder with a Dynamic Cluster

To set the location for the default persistence stores for a dynamic cluster, update the server template:

  1. Log into the Oracle WebLogic Server Administration Console:

    ADMINVHN:7001/console
    

    Note:

    If you have already configured the web tier, use http://admin.example.com/console.

  2. In the Change Center section, click Lock & Edit.

  3. Navigate to the server template for the cluster:

    1. In the Domain Structure window, expand the Environment and Clusters nodes, and then click the Server Templates node.

      The Summary of Server Templates page appears.

    2. Click the name of the server template (represented as a hyperlink) in the Name column of the table.

      The settings page for the selected server template appears and defaults to the Configuration tab.

    3. On the Configuration tab, click the Services tab.

    4. In the Default Store section of the page, enter the path to the folder where the default persistent store stores its data files.

      For the enterprise deployment, use the ORACLE_RUNTIME directory location. This subdirectory serves as the central, shared location for transaction logs for the cluster. See File System and Directory Variables Used in This Guide.

      For example:

      ORACLE_RUNTIME/domain_name/cluster_name/tlogs
      

      In this example, replace ORACLE_RUNTIME with the value of the variable for your environment. Replace domain_name with the name that you assigned to the domain. Replace cluster_name with the name of the cluster you just created.

    5. Click Save.

  4. Click Activate Changes.

Note:

You validate the location and the creation of the transaction logs later in the configuration procedure.
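
For both cluster types, the directory can also be set with a script. The following is a minimal WLST (online) sketch with placeholder names; for a static cluster repeat the server entry for each managed server, and for a dynamic cluster set the directory on the server template only.

    # Minimal WLST (online) sketch: set the default store directory.
    connect('weblogic_admin_user', 'weblogic_admin_password', 't3://ADMINVHN:7001')
    edit()
    startEdit()

    # Static cluster: one entry per managed server ('WLS_SOA1' is a placeholder)
    cd('/Servers/WLS_SOA1/DefaultFileStore/WLS_SOA1')
    cmo.setDirectory('ORACLE_RUNTIME/domain_name/SOA_Cluster/tlogs')

    # Dynamic cluster: the server template instead (template name is a placeholder)
    cd('/ServerTemplates/soa-server-template/DefaultFileStore/soa-server-template')
    cmo.setDirectory('ORACLE_RUNTIME/domain_name/SOA_Cluster/tlogs')

    save()
    activate()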

Validating the Location and Creation of the Transaction Logs

After the WLS_SERVER_TYPE1 and WLS_SERVER_TYPE2 Managed Servers are up and running, verify that the transaction log directory and transaction logs are created as expected, based on the steps that you performed in Configuring TLOGs File Persistent Store in a Shared Folder with a Static Cluster and Configuring TLOGs File Persistent Store in a Shared Folder with a Dynamic Cluster:

ORACLE_RUNTIME/domain_name/OSB_Cluster/tlogs

  • _WLS_WLS_SERVER_TYPE1000000.DAT

  • _WLS_WLS_SERVER_TYPE2000000.DAT

Configuring JMS File Persistent Store in a Shared Folder

If you have already configured and extended your domain, the JMS persistent files are already configured in a shared location. If you need to move any other persistent store file to the shared folder, perform the following steps:

  1. Log in to the Oracle WebLogic Server Administration Console.
  2. Navigate to Domain > Services > Persistent Store and click the name of the persistent store that you want to move to the shared folder.
    The Configuration: General tab is displayed.
  3. Change the directory to ORACLE_RUNTIME/domain_name/soa_cluster/jms.
  4. Click Save.
  5. Click Activate Changes.
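
If you prefer scripting, the following is a minimal WLST (online) sketch of the same change, with a placeholder store name and path:

    # Move an existing file store to the shared folder.
    connect('weblogic_admin_user', 'weblogic_admin_password', 't3://ADMINVHN:7001')
    edit()
    startEdit()

    cd('/FileStores/UMSJMSFileStore_auto_1')   # placeholder store name
    cmo.setDirectory('ORACLE_RUNTIME/domain_name/soa_cluster/jms')

    save()
    activate()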

About JDBC Persistent Stores for Web Services

By default, web services use the WebLogic Server default persistent store for persistence. This store provides a high-performance storage solution for web services.

The default web service persistence store is used by the following advanced features:
  • Reliable Messaging

  • Make Connection

  • SecureConversation

  • Message buffering

You also have the option to use a JDBC persistence store in your WebLogic Server web service, instead of the default store. For information about web service persistence, see Managing Web Service Persistence.

Best Configuration Practices When Using RAC and GridLink Datasources

Oracle recommends that you use GridLink data sources when you use an Oracle RAC database. If you follow the steps described in the Enterprise Deployment guide, the datasources will be configured as GridLink.

GridLink datasources provide dynamic load balancing and failover across the nodes in an Oracle Database cluster, and also receive notifications from the RAC cluster when nodes are added or removed. For more information about GridLink datasources, see Using Active GridLink Data Sources in Administering JDBC Data Sources for Oracle WebLogic Server.

Here is a summary of the best practices when using GridLink to connect to the RAC database:

  • Use a database service (defined with srvctl) that is different from the default database service

    To receive and process notifications from the RAC database, GridLink needs to connect to a database service (defined with srvctl) instead of the default database service. These services monitor the status of resources in the database cluster and generate notifications when the status changes. A database service is used in the Enterprise Deployment guide, created and configured as described in Creating Database Services.

  • Use the long format database connect string in the datasources

    When GridLink datasources are used, the long format database connect string must be used. The Configuration Wizard sets the short format instead of the long format, so you must modify the URL manually later. To update the datasources:

    1. Connect to the WebLogic Server Console and navigate to Domain Structure > Services > Datasources.
    2. Select a datasource, click the Configuration tab, and then click the Connection Pool tab.
    3. Within the JDBC URL, change the URL from jdbc:oracle:thin:@[SCAN_VIP]:[SCAN_PORT]/[SERVICE_NAME] to jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=[SCAN_VIP])(PORT=[SCAN_PORT])))(CONNECT_DATA=(SERVICE_NAME=[SERVICE_NAME])))
      For example:
      jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=ON)(ADDRESS=(PROTOCOL=TCP)(HOST=db-scan-address)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=soaedg.example.com)))
  • Use auto-ons

    If you are using an Oracle 12c database, the ONS list is automatically provided from the database to the driver. You can leave the ONS Nodes list empty in the datasources configuration.

  • Test Connections On Reserve

    Verify that Test Connections On Reserve is enabled in the datasources.

    Even though the GridLink datasources receive FAN events when a RAC instance becomes unavailable, it is a best practice to enable Test Connections On Reserve in the datasource to ensure that the connection returned to the application is good.

  • Seconds to Trust an Idle Pool Connection

    For maximum efficiency of the test, you can also set Seconds to Trust an Idle Pool Connection to 0, so that connections are always verified. Setting this value to zero means that all the connections returned to the application are tested. If this parameter is set to 10, the result of the previous test remains valid for 10 seconds, and if a connection is reused within those 10 seconds, the result is still considered valid.

  • Test Frequency

    Verify that the Test Frequency parameter value in the datasources is not 0. This is the number of seconds a WebLogic Server instance waits between attempts when testing unused connections. The default value of 120 is normally enough.
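
The following is a minimal WLST (online) sketch that applies these three settings to one datasource; 'SOADataSource' is a placeholder, and you would repeat this for each datasource:

    # Apply the recommended connection-test settings to a datasource.
    connect('weblogic_admin_user', 'weblogic_admin_password', 't3://ADMINVHN:7001')
    edit()
    startEdit()

    cd('/JDBCSystemResources/SOADataSource/JDBCResource/SOADataSource'
       '/JDBCConnectionPoolParams/SOADataSource')
    cmo.setTestConnectionsOnReserve(true)
    cmo.setSecondsToTrustAnIdlePoolConnection(0)   # always re-test reserved connections
    cmo.setTestFrequencySeconds(120)               # periodic test of unused connections

    save()
    activate()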

Performing Backups and Recoveries for an Enterprise Deployment

Follow these guidelines to make sure that you back up the necessary directories and configuration data for an Oracle SOA Suite enterprise deployment.

Note:

Some of the static and runtime artifacts listed in this section are hosted from Network Attached Storage (NAS). If possible, back up and recover these volumes from the NAS filer directly rather than from the application servers.

For general information about backing up and recovering Oracle Fusion Middleware products, see Administering Oracle Fusion Middleware.

Table 20-3 lists the static artifacts to back up in a typical Oracle SOA Suite enterprise deployment.

Table 20-3 Static Artifacts to Back Up in the Oracle SOA Suite Enterprise Deployment

Type                                  Host                                    Tier
Database Oracle home                  DBHOST1 and DBHOST2                     Data Tier
Oracle Fusion Middleware Oracle home  WEBHOST1 and WEBHOST2                   Web Tier
Oracle Fusion Middleware Oracle home  SOAHOST1 and SOAHOST2 (or NAS Filer)    Application Tier
Installation-related files            WEBHOST1, WEBHOST2, and shared storage  N/A

Table 20-4 lists the runtime artifacts to back up in a typical Oracle SOA Suite enterprise deployment.

Table 20-4 Run-Time Artifacts to Back Up in the Oracle SOA Suite Enterprise Deployment

Type                                               Host                     Tier
Administration Server domain home (ASERVER_HOME)   SOAHOST1 (or NAS Filer)  Application Tier
Application home (APPLICATION_HOME)                SOAHOST1 (or NAS Filer)  Application Tier
Oracle RAC databases                               DBHOST1 and DBHOST2      Data Tier
Scripts and Customizations                         Per host                 Application Tier
Deployment Plan home (DEPLOY_PLAN_HOME)            SOAHOST1 (or NAS Filer)  Application Tier
OHS/OTD Configuration directory                    WEBHOST1 and WEBHOST2    Web Tier

Online Domain Run-Time Artifacts Backup/Recovery Example

This section describes an example procedure to implement a backup of the domain runtime artifacts. This approach can be used during the EDG configuration process, for example, before extending the domain to add a new component.

This example has the following features:

  • Application tier runtime artifacts are backed up/recovered in this example:

    Artifact                                                    Host                     Tier
    Administration Server domain home (ASERVER_HOME)            SOAHOST1 (or NAS Filer)  Application Tier
    Application home (APPLICATION_HOME)                         SOAHOST1 (or NAS Filer)  Application Tier
    Deployment Plan home (DEPLOY_PLAN_HOME)                     SOAHOST1 (or NAS Filer)  Application Tier
    Runtime artifacts (adapter control files) (ORACLE_RUNTIME)  SOAHOST1 (or NAS Filer)  Application Tier
    Scripts and Customizations                                  Per host                 Application Tier

  • This backup procedure is suitable for cases when a major configuration change is done to the domain (that is, domain extension). If something goes wrong, or if you make incorrect selections, you can restore the domain configuration to the earlier state.

    Database backup/restore is not mandatory for this sample procedure, but optional steps to back up/restore the database are included.

    Artifact                        Host                 Tier
    Oracle RAC database (optional)  DBHOST1 and DBHOST2  Data Tier

  • Operating system tools are used in this example. Some of the run-time artifacts listed in this section are hosted from Network Attached Storage (NAS). If possible, back up and recover these volumes directly from the NAS filer rather than from the application servers.
  • Managed servers can be running during the backup. MSERVER_HOME is not backed up, and the pack/unpack procedure is used later to recover it. Therefore, managed server lock files are not included in the backup.
  • The AdminServer can be running during the backup if the .lok files are excluded from the backup. To avoid an inconsistent backup, do not make any configuration changes until the backup is complete. To ensure that no changes are made in the WebLogic Server domain, you can lock the WebLogic Server configuration.

    Note:

    Exclude these files:
    • AdminServer/data/ldap/ldapfiles/EmbeddedLDAP.lok
    • AdminServer/tmp/AdminServer.lok
Back Up the Domain Run-Time Artifacts

To back up the domain runtime artifacts, perform the following steps:

  1. Log in to SOAHOST1 with user oracle and ensure that you define and export the following variables:
    Variable      Example Value   Description
    BAK_TAG       BEFORE_BPM      Descriptive tag used in the names of the backup files and database restore point.
    BAK_DIR       /backups        Host folder where backup files are stored.
    DOMAIN_NAME   soaedg_domain   Domain name.

    For example:
    export BAK_TAG=BEFORE_BPM
    export DOMAIN_NAME=soaedg_domain
    export BAK_DIR=/backups
  2. Ensure that the following domain variables are set with the values of the domain:
    Variable           Example Value
    ASERVER_HOME       /u01/oracle/config/domains/soaedg_domain
    DEPLOY_PLAN_HOME   /u01/oracle/config/dp
    APPLICATION_HOME   /u01/oracle/config/applications/soaedg_domain
    ORACLE_RUNTIME     /u01/oracle/runtime

    See Table 7-2.

  3. Before you make the backup, lock the domain configuration to prevent other accounts from making changes during your edit session. To lock the domain configuration from Fusion Middleware Control:
    1. Log in to http://ADMINVHN:7001/em.
    2. Locate the Change Center at the top of Fusion Middleware Control.
    3. From the Changes menu, select Lock & Edit to lock the configuration edit for the domain.

    Note:

    To avoid an inconsistent backup, do not make any configuration changes until the backup is complete.
  4. Log in to SOAHOST1 and clean up old logs and application backup files before the backup:
    find ${ASERVER_HOME}/servers/AdminServer/logs -type f -name "*.out0*" ! -size 0c -print -exec rm -f {} \+
    find ${ASERVER_HOME}/servers/AdminServer/logs -type f -name "*.log0*" ! -size 0c -print -exec rm -f {} \+
    find ${APPLICATION_HOME} -type f -name "*.bak*" -print -exec rm -f {} \;
  5. Perform the backup of each artifact by using tar:
    tar -cvzf ${BAK_DIR}/backup_aserver_home_${DOMAIN_NAME}_${BAK_TAG}.tgz ${ASERVER_HOME} --exclude "*.lok"

    tar -cvzf ${BAK_DIR}/backup_dp_home_${DOMAIN_NAME}_${BAK_TAG}.tgz ${DEPLOY_PLAN_HOME}/${DOMAIN_NAME}

    tar -cvzf ${BAK_DIR}/backup_app_home_${DOMAIN_NAME}_${BAK_TAG}.tgz ${APPLICATION_HOME}

    tar -cvzf ${BAK_DIR}/backup_runtime_${DOMAIN_NAME}_${BAK_TAG}.tgz ${ORACLE_RUNTIME}/${DOMAIN_NAME}
    
    ls --format=single-column ${BAK_DIR}/backup_aserver_*.tgz
    ls --format=single-column ${BAK_DIR}/backup_dp_*.tgz
    ls --format=single-column ${BAK_DIR}/backup_app_*.tgz
    ls --format=single-column ${BAK_DIR}/backup_runtime_*.tgz
  6. Release the domain lock.
    1. Log in to http://ADMINVHN:7001/em.
    2. Locate the Change Center at the top of Fusion Middleware Control.
    3. From the Changes menu, select Release Configuration to release the configuration edit for the domain.
  7. Backup your scripts and customizations, if needed.
  8. (Optional) Log in to the database and create a flashback database restore point:

    Note:

    Flashback Database technology is used in this example for database recovery. Check the documentation for your database version for more information about Flashback.
    1. Create a guaranteed restore point.
      sqlplus / as sysdba
      SQL> create restore point BEFORE_BPM guarantee flashback database;
      SQL> alter system switch logfile;
    2. Verify.
      SQL> set linesize 300
      SQL> column name format a30
      SQL> column time format a32
      SQL> column storage_size format 999999999999
      SQL> SELECT name, guarantee_flashback_database, time, storage_size FROM v$restore_point ORDER BY time;
      
      Example:
      NAME                           GUA TIME                              STORAGE_SIZE
      ------------------------------ --- -------------------------------- -------------
      SOAEDG_BEFORE_BPM              YES 12-MAY-17 03.29.28.000000000 AM     8589934592
      exit
Restore the Domain Run-Time Artifacts
To recover the domain to the point where the backups were made, follow these steps:
  1. Log in to SOAHOST1 using the oracle user.
  2. Stop all the servers in the domain, including the AdminServer.
    ${ORACLE_COMMON_HOME}/common/bin/wlst.sh
    connect('<weblogic_admin_username>','<password>','t3://adminvhn:7001')
    
    shutdown('OSB_Cluster', 'Cluster', force='true')
    shutdown('ESS_Cluster', 'Cluster', force='true')
    shutdown('BAM_Cluster','Cluster', force='true')
    shutdown('MFT_Cluster','Cluster', force='true')
    shutdown('SOA_Cluster','Cluster', force='true')
    shutdown('WSM-PM_Cluster','Cluster', force='true')
    
    state('SOA_Cluster', 'Cluster')
    state('OSB_Cluster', 'Cluster')
    state('ESS_Cluster', 'Cluster')
    state('BAM_Cluster', 'Cluster')
    state('MFT_Cluster', 'Cluster')
    state('WSM-PM_Cluster' , 'Cluster')
    
    shutdown('AdminServer', force='true', block='true')
  3. Ensure that the following domain variables are set with the values of the domain:
    Variable           Example Value
    ASERVER_HOME       /u01/oracle/config/domains/soaedg_domain
    DEPLOY_PLAN_HOME   /u01/oracle/config/dp
    APPLICATION_HOME   /u01/oracle/config/applications/soaedg_domain
    ORACLE_RUNTIME     /u01/oracle/runtime

  4. Remove the current folders by renaming them. You can remove these folders completely at the end of the process after you have verified the recovered domain.
    1. In SOAHOST1:
      mv  ${ASERVER_HOME}  ${ASERVER_HOME}_DELETE
      mv  ${DEPLOY_PLAN_HOME}/${DOMAIN_NAME}  ${DEPLOY_PLAN_HOME}/${DOMAIN_NAME}_DELETE
      mv  ${APPLICATION_HOME}   ${APPLICATION_HOME}_DELETE
      mv  ${ORACLE_RUNTIME}/${DOMAIN_NAME} ${ORACLE_RUNTIME}/${DOMAIN_NAME}_DELETE
    2. In each SOAHOSTN:
      mv   ${MSERVER_HOME}  ${MSERVER_HOME}_DELETE
  5. Locate and identify the backups in the backup folder. Ensure that you define and export the following variables with the correct values of the backup you want to recover:
    Variable      Example Value   Description
    BAK_TAG       BEFORE_BPM      Descriptive tag used in the names of the backup files and database restore point.
    BAK_DIR       /backups        Host folder where backup files are stored.
    DOMAIN_NAME   soaedg_domain   Domain name.

    For example:
    export BAK_TAG=BEFORE_BPM
    export DOMAIN_NAME=soaedg_domain
    export BAK_DIR=/backups
  6. Perform the recovery of the files by extracting the files.

    Note:

    The tar files recreate the directory structure relative to /, so change to the / directory before extracting them.
    cd  /
    tar -xzvf ${BAK_DIR}/backup_aserver_home_${DOMAIN_NAME}_${BAK_TAG}.tgz   
    tar -xzvf ${BAK_DIR}/backup_dp_home_${DOMAIN_NAME}_${BAK_TAG}.tgz    
    tar -xzvf ${BAK_DIR}/backup_app_home_${DOMAIN_NAME}_${BAK_TAG}.tgz   
    tar -xzvf ${BAK_DIR}/backup_runtime_${DOMAIN_NAME}_${BAK_TAG}.tgz
  7. (Optional) If you need to recover the database to the flashback recovery point, perform the following steps:
    1. Log in to DBHOST with the oracle user and stop the database:
      srvctl stop database -database soaedgdb
    2. Log in to the database and flashback database to the restore point:
      sqlplus / as sysdba
      SQL> startup mount
      SQL> FLASHBACK DATABASE TO RESTORE POINT BEFORE_BPM;
      Flashback complete.
    3. Open the database with this command:
      SQL> ALTER DATABASE OPEN RESETLOGS;
  8. Start AdminServer:
    ${ORACLE_COMMON_HOME}/common/bin/wlst.sh
    wls:/offline> nmConnect('nodemanager','password','ADMINVHN','5556', 'domain_name','ASERVER_HOME','PLAIN')
    Connecting to Node Manager ...
    Successfully Connected to Node Manager.
    wls:/nm/domain_name > nmStart('AdminServer')
  9. Propagate the domain to the Managed Servers.
    1. Sign in to SOAHOST1 and run the pack command to create the template, as follows:
      cd ${ORACLE_COMMON_HOME}/common/bin
      ./pack.sh -managed=true \
                -domain=ASERVER_HOME \ 
                -template=/full_path/recover_domain.jar \ 
                -template_name=recover_domain_template \
                -log_priority=DEBUG \ 
                -log=/tmp/pack.log
      • Replace ASERVER_HOME with the actual path to the domain directory you created on the shared storage device.
      • Replace /full_path/ with the complete path where you want to create the domain template jar file.
      • recover_domain.jar is an example of the name for the jar file that you are creating.
      • recover_domain_template is an example of the name for the domain template.
    2. Run the unpack command in every SOAHOST, as follows:
      cd ORACLE_COMMON_HOME/common/bin
      
      ./unpack.sh -domain=MSERVER_HOME \
                  -overwrite_domain=true \
                  -template=/full_path/recover_domain.jar \
                  -log_priority=DEBUG \
                  -log=/tmp/unpack.log \
                  -app_dir=APPLICATION_HOME

      • Replace MSERVER_HOME with the complete path to the domain home to be created on the local storage disk. This is the location where the copy of the domain will be unpacked.
      • Replace /full_path/recover_domain.jar with the complete path and file name of the domain template jar file that you created when you ran the pack command to pack up the domain on the shared storage device.
  10. Recover/perform customizations, if needed.
  11. Start the servers and verify the domain.
  12. After checking that everything is correct, you can delete the previous renamed folders:
    1. In SOAHOST1:
      rm -rf ${ASERVER_HOME}_DELETE
      rm -rf ${DEPLOY_PLAN_HOME}/${DOMAIN_NAME}_DELETE
      rm -rf ${APPLICATION_HOME}_DELETE
      rm -rf ${ORACLE_RUNTIME}/${DOMAIN_NAME}_DELETE
    2. In every SOAHOSTN:
      rm -rf ${MSERVER_HOME}_DELETE

Configuration and Management Tasks for an Oracle SOA Suite Enterprise Deployment

These are some of the key configuration and management tasks that you likely need to perform on an Oracle SOA Suite enterprise deployment.

Deploying Oracle SOA Suite Composite Applications to an Enterprise Deployment

Oracle SOA Suite applications are deployed as composites, consisting of different kinds of Oracle SOA Suite components. SOA composite applications include the following:

  • Service components such as Oracle Mediator for routing, BPEL processes for orchestration, BAM processes for orchestration (if Oracle BAM Suite is also installed), human tasks for workflow approvals, spring for integrating Java interfaces into SOA composite applications, and decision services for working with business rules.

  • Binding components (services and references) for connecting SOA composite applications to external services, applications, and technologies.

These components are assembled into a single SOA composite application.

When you deploy an Oracle SOA Suite composite application to an Oracle SOA Suite enterprise deployment, be sure to deploy each composite to a specific server or cluster address and not to the load balancer address (soa.example.com).

Deploying composites to the load balancer address often requires direct connection from the deployer nodes to the external load balancer address. As a result, you have to open additional ports in the firewalls.

For more information about Oracle SOA Suite composite applications, see Administering Oracle SOA Suite and Oracle Business Process Management Suite.

Using Shared Storage for Deployment Plans and SOA Infrastructure Applications Updates

When you redeploy a SOA infrastructure application or resource adapter within the SOA cluster, the deployment plan along with the application bits should be accessible to all servers in the cluster.

SOA applications and resource adapters are installed by using the nostage deployment mode. Because the Administration Server does not copy the archive files from their source location when the nostage deployment mode is selected, each server must be able to access the same deployment plan.

To ensure that the deployment plan location is available to all servers in the domain, use the Deployment Plan home location described in File System and Directory Variables Used in This Guide and represented by the DEPLOY_PLAN_HOME variable in the Enterprise Deployment Workbook.

Managing Database Growth in an Oracle SOA Suite Enterprise Deployment

When the amount of data in the Oracle SOA Suite database grows very large, maintaining the database can become difficult, especially in an Oracle SOA Suite enterprise deployment where potentially many composite applications are deployed.

See the relevant sections in Administering Oracle SOA Suite and Oracle Business Process Management Suite.

Managing the JMS Messages in a SOA Server

There are several procedures to manage JMS messages in a SOA server. You may need to perform these procedures in some scenarios, for example, to preserve the messages during a scale-in operation.

This section explains some of these procedures in detail.

Draining the JMS Messages from a SOA Server

The process of draining the JMS messages helps you clear out the messages from a particular WebLogic Server. A basic approach to draining the stores consists of stopping message production in the appropriate JMS servers and allowing the applications to consume the messages.

This procedure, however, is application dependent and could take an unpredictable amount of time. As an alternative, general instructions are provided here for saving the current messages from their current JMS destinations and, when or if required, importing them into a different server.

The draining procedure is useful in scale-in/down scenarios, where the size of the cluster is reduced by removing one or more servers. You can ensure that no messages are lost by draining the messages from the server that you delete, and then importing them into another server in the cluster.

You can also use this procedure in some disaster recovery maintenance scenarios, when the servers are started in a secondary location by using a Snapshot Standby database. In this case, you may need to drain the messages from the domain before starting it in the secondary location, to avoid their consumption in the standby domain when you start the domain (otherwise, duplicate executions could take place). You cannot import messages in this scenario.

To drain the JMS messages from a server, perform the following steps:
  1. Stop new workload by pausing production for the JMS Server. You must do this activity for each JMS Server of the server that is affected in the operation (a WLST alternative is sketched at the end of this procedure):
    1. Navigate to the WebLogic Console and click Environment > Services > JMS Server > <JMS Server name> > Control.
    2. Select the JMS Server of the server that you want to delete.
    3. Click Production, and then click Pause.
  2. Drain the messages from the destinations. To drain the JMS messages, you can let the applications consume the pending messages. However, this task is application dependent and may take time. Hence, Oracle recommends that you export the messages of each destination. First, verify which destinations have messages:
    1. Navigate to the WebLogic Console and click Environment > Services > JMS Server > Monitoring > Active Destination.
    2. Check whether the destination members of the server that you want to delete have current messages. Identify the destination name and its JMS Module.
    3. Repeat this activity for each JMS Server that is running on the server that you want to delete.
    • Drain messages from queues: For those queue destinations that have current messages:

      1. Navigate to the WebLogic Console and click Environment > Services > JMS Module > <JMS module name> > <destination name>.

      2. Click Monitoring.

      3. Select the queue corresponding to the server that you want to delete and click Show Messages.

      4. Select Export > Export All and export the messages to a file. Make a note of the file name for later use.

      5. Delete the exported messages by using the Delete All option. This step is important to avoid message duplications.

    • Drain messages from topics

      Oracle recommends that you drain and import messages from topics only if they have a critical business impact. See Table 20-5 for details about the purpose and business impact of each topic. Only the loss of messages in the topic dist_EDNTopic_auto, which is used by EDN, has a business impact.

      Table 20-5 Details of the Purpose and Business Impact for Each Topic of a Component

      Component: BPM | JMS Module: BPMJMSModule | Topic: dist_MeasurementTopic_auto
      Purpose: Used for publishing process metrics messages to the internal process star schema.
      Business impact of message loss: Low impact. Affects some dashboard numbers appearing in the PCS workspace dashboards and BAM dashboards based on the process star schema data object.

      Component: BPM | JMS Module: BPMJMSModule | Topic: dist_PeopleQueryTopic_auto
      Purpose: Used for updating logical group memberships.
      Business impact of message loss: Low impact. The group membership is recalculated based on a scheduler.

      Component: SOA | JMS Module: SOAJMSModule | Topic: dist_B2BBroadcastTopic_auto
      Purpose: Used by B2B; messages are meant to be consumed immediately.
      Business impact of message loss: No impact.

      Component: SOA | JMS Module: SOAJMSModule | Topic: dist_EDNTopic_auto
      Purpose: Used for EDN; contains event messages for applications.
      Business impact of message loss: Business impact. Applications that consume these EDN event messages will lose them.

      Component: SOA | JMS Module: SOAJMSModule | Topic: dist_TenantTopic_auto
      Purpose: No longer used.
      Business impact of message loss: No impact.

      Component: SOA | JMS Module: SOAJMSModule | Topic: dist_XmlSchemaChangeNotificationTopic_auto
      Purpose: No longer used.
      Business impact of message loss: No impact.

      Component: Insight | JMS Module: ProcMonJMSModule | Topic: dist_ProcMonActivationTopic_auto
      Purpose: Used by Insight for lifecycle operations, such as activating an Insight model across different nodes of the cluster.
      Business impact of message loss: No impact.

      Component: BAM | JMS Module: BAMJMSSystemResource | Topic: dist_oracle.beam.cqs.activedata_auto
      Purpose: Not used in production.
      Business impact of message loss: No impact.

      Component: BAM | JMS Module: BAMJMSSystemResource | Topic: dist_oracle.beam.persistence.activedata_auto
      Purpose: Data change notifications sent from persistence to the continuous query processor in support of active-data queries.
      Business impact of message loss: Low impact. Message loss could only cause incorrect data to be displayed in the active-data dashboards; refreshing the dashboards or restarting the active query restores the correct data.

      Component: BAM | JMS Module: BAMJMSSystemResource | Topic: dist_oracle.beam.server.event.reportcache.changelist_auto
      Purpose: Data changes sent from the report cache to the active-data dashboards.

      Component: BAM | JMS Module: BAMJMSSystemResource | Topic: dist_oracle.beam.server.metadatachange_auto
      Purpose: Metadata changes sent to the downstream listeners if artifacts (queries, views, dashboards) are modified.

      Component: MFT | JMS Module: MFTJMSModule | Topic: dist_MFTSystemEventTopic_auto
      Purpose: Used for publishing events that require synchronization in all the nodes, such as activation of the listening source, adding the PGP key, MBean property changes, and so on.
      Business impact of message loss: Low impact. These messages are very short-lived and their frequency is low. If any message is lost, a restart ensures that all nodes are in sync.

      Follow these steps to drain messages from the topics:
      1. Navigate to the WebLogic Console and click Environment > Services > JMS Module > <JMS module name> > <topic name>.

      2. Click Monitoring, and then click Durable Subscribers.

      3. Select the topic corresponding to the server that you want to delete and click Apply. The page displays the subscriptions only for the selected member topic.

      4. Select the Durable Subscriber that has current messages and click Show Messages.

      5. Click Export > Export All and export the messages to a file. Make a note of the file name for later use.

      6. Delete the exported messages from the subscriber by clicking Delete > Delete All. This step is important to avoid message duplications.

      7. Repeat the export process for any subscriber in the topic that has current messages.
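
As referenced in step 1, the following is a minimal WLST (online) alternative for pausing production on every JMS server hosted by the member being removed; the server name is a placeholder.

    # Pause message production on all JMS servers of a member ('WLS_SOA2' is a
    # placeholder for the server that you want to delete).
    connect('weblogic_admin_user', 'weblogic_admin_password', 't3://ADMINVHN:7001')
    domainRuntime()
    cd('ServerRuntimes/WLS_SOA2/JMSRuntime/WLS_SOA2.jms')
    for jmsServer in cmo.getJMSServers():
        print 'Pausing production on ' + jmsServer.getName()
        jmsServer.pauseProduction()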

Importing the JMS Messages into a SOA Server

Messages that have been previously exported can be imported into the same or a different member of the JMS destination. This procedure is used in scale-in/down scenarios to import the messages from the server that you want to remove into another member in the cluster.

To import the JMS messages, perform the following steps:
  • Import messages in a queue:

    1. Navigate to the WebLogic Console and click Environment > Services > JMS Module > <JMS module name> > <queue name>.

    2. Click Monitoring.

    3. Select the destination of the server where you want to import the messages and click Show Messages.

    4. Select Import to import the messages of this destination.

    5. Repeat the steps for each queue destination.

  • Import messages in a topic:

    1. Navigate to the WebLogic Console and click Environment > Services > JMS Module > <JMS module name> > <topic name>.

    2. Click Monitoring, and then click Durable Subscribers.

    3. Choose the topic member where you want to import the messages and click Apply.

    4. Select the durable subscriber where you want to import the messages and click Show Messages.

    5. Click Import and select the file with the messages of this subscriber.

    6. Repeat the steps for each subscriber in the topic where you have to import messages.

Considerations for Cross-Component Wiring

Cross-Component Wiring (CCW) enables the FMW components to publish and bind to some of the services available in a WLS domain, by using specific APIs.

CCW performs a bind of the wiring information only during the Configuration Wizard session or when manually forced by the WLS domain administrator. When you add a WebLogic Server to a cluster (in a scale-out or scale-up operation in a static or dynamic cluster), the new server publishes its services, but the clients that use those services are not automatically updated and bound to the new service provider. The update does not happen because the existing servers that are already bound by using the CCW table do not automatically learn about the new member that joins the cluster. The same applies to ESS and WSMPM when they provide their services to SOA: both publish their services to the service table dynamically, but the SOA servers do not know about these updates unless a bind is forced again.

Note:

There is additional cross-component wiring information, similar to that used by the OHS configuration, which is not affected by this behavior because of the proxy plug-in. For more information, see the related sections in this guide.

Cross-Component Wiring for WSMPM and ESS

The cross-component wiring t3 information is used by WSMPM and ESS to obtain the list of servers to be used in a JNDI invocation URL.

The CCW t3 information limits the impact of the lack of dynamic updates. When the invocation is done, the JNDI URL is used to obtain the RMI stubs with the list of members in the cluster. The JNDI URL does not need to contain the entire list of servers. The RMI stubs contain the list of all the servers in the cluster at any given time, and are used to load balance requests across all of them. Therefore, without a bind, the servers that are added to the cluster are used even if not present in the bind URL. The only drawback is that at least one of the original servers provided in the first CCW bind must be up to keep the system working when the cluster expands or shrinks. To avoid this issue, you can use the cluster name syntax in the service table instead of using the static list of members.

The cluster name syntax is as follows:
cluster:t3://cluster_name

When you use cluster:t3://cluster_name, the CCW invocation fetches the complete list of members in the cluster at any given time, thus avoiding any dependencies on the initial servers and accounting for every member that is alive in the cluster then.

Using the cluster_name Syntax with WSMPM

This procedure makes WSMPM use a t3 syntax that accounts for servers being added to or removed from the WSMPM cluster, without having to update the CCW information again.

The CCW t3 information is configured to use the cluster syntax by default. You only need to verify that the cluster syntax is used and edit, if required.

  1. Sign in to Fusion Middleware Control by using the administrator's account. For example: weblogic_soa.
  2. From the WebLogic Domain drop-down menu, select Cross Component Wiring - Service Tables.
  3. Select the OWSM Policy Manager urn:oracle:fmw.owsm-pm:t3 row.
  4. Verify that the cluster syntax is used. If not, click Edit and update the t3 and t3s values with the cluster name syntax.
  5. Click OK.
  6. From the WebLogic Domain drop-down menu, select Cross Component Wiring - Components.
  7. Select OWSM Agent.
  8. In the Client Configuration section, select the owsm-pm-connection-t3 row and click Bind.
  9. Click OK.

Note:

The wiring table is updated with each cluster scale-out or scale-up operation, but it does not replace the cluster syntax unless a manual rebind is performed. Hence, it withstands all updates (additions and removals) in the lifecycle of the cluster.