18 Common Configuration and Management Tasks for an Enterprise Deployment

This section describes the configuration and management tasks that you may need to perform on the enterprise deployment environment.

Configuration and Management Tasks for All Enterprise Deployments

Complete these common configuration tasks that apply to any Oracle Fusion Middleware enterprise deployment. These tasks include checking the sizing requirements for the deployment, using the JDBC persistence store for web services, and taking backups of the deployment.

Verifying Appropriate Sizing and Configuration for the WLSRuntimeSchemaDataSource

In Oracle FMW 14.1.2, WLSRuntimeSchemaDataSource is the common datasource that is reserved for use by the FMW components for JMS JDBC stores, JTA JDBC stores, and leasing services. WLSRuntimeSchemaDataSource is used to avoid contention in critical WLS infrastructure services and to guard against deadlocks.

To reduce the WLSRuntimeSchemaDataSource connection usage, you can change the JMS JDBC and TLOG JDBC stores connection caching policy from Default to Minimal by using the respective connection caching policy settings. When there is a need to reduce connections in the back-end database system, Oracle recommends that you set the caching policy to Minimal. Avoid using the caching policy None because it can degrade performance. For detailed tuning advice about connections that are used by JDBC stores, see Configuring a JDBC Store Connection Caching Policy in Administering the WebLogic Persistent Store.

The default WLSRuntimeSchemaDataSource connection pool size is 75 (the size is doubled in the case of a GridLink data source). You can tune this size to a higher value depending on the size of the different FMW clusters and the candidates that are configured for migration. For example, consider a typical WCC EDG deployment with the default number of worker threads per store. If more than 25 JDBC stores or TLOG-in-DB instances or both can fail over to the same WebLogic Server, and the connection caching policy is not changed from Default to Minimal, connection contention issues could arise. In these cases, increasing the default WLSRuntimeSchemaDataSource pool size (maximum capacity) becomes necessary (each JMS store uses a minimum of two connections, and leasing and JTA also compete for connections in the pool).
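To review the current pool sizing before you tune it, you can inspect the JDBC descriptors of the domain. The following is an illustrative check only; it assumes the default location of the JDBC descriptor files under the Administration Server domain home (ASERVER_HOME/config/jdbc):

    # Locate the descriptor that defines WLSRuntimeSchemaDataSource
    grep -l "WLSRuntimeSchemaDataSource" ASERVER_HOME/config/jdbc/*.xml
    # List the current pool sizing values across the JDBC descriptors
    grep -E "initial-capacity|max-capacity" ASERVER_HOME/config/jdbc/*.xml

You can then adjust the Maximum Capacity of the data source in the WebLogic Remote Console (Connection Pool settings) if your failover analysis requires it.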

Verifying Manual Failover of the Administration Server

If a host computer fails, you can fail over the Administration Server to another host. The following sections detail the steps to verify the failover of the Administration Server from WCCHOST1 to WCCHOST2, and its failback to WCCHOST1.

Assumptions:

  • The Administration Server is configured to listen on ADMINVHN, or on any custom virtual host that maps to a floating IP/VIP. It should not listen on ANY (blank listen address), localhost, or any host name that uniquely identifies a single node.

    For more information about the ADMINVHN virtual IP address, see Reserving the Required IP Addresses for an Enterprise Deployment.

  • These procedures assume that the Administration Server domain home (ASERVER_HOME) has been mounted on both host computers. This ensures that the Administration Server domain configuration files and the persistent stores are saved on the shared storage device.

  • The Administration Server is failed over from WCCHOST1 to WCCHOST2, and the two nodes have these IPs:

    • WCCHOST1: 100.200.140.165

    • WCCHOST2: 100.200.140.205

    • ADMINVHN: 100.200.140.206. This is the virtual IP where the Administration Server is running, assigned to a virtual sub-interface (for example, eth0:1), to be available on WCCHOST1 or WCCHOST2.

  • Oracle WebLogic Server and Oracle Fusion Middleware components have been installed in WCCHOST2 as described in the specific configuration chapters in this guide.

    Specifically, both host computers use the exact same path to reference the binary files in the Oracle home.

The following topics provide details on how to perform a test of the Administration Server failover procedure.
Failing Over the Administration Server When Using a Per Host Node Manager

The following procedure shows how to fail over the Administration Server to a different node (WCCHOST2). Note that even after failover, the Administration Server will still use the same Oracle WebLogic Server machine (which is a logical machine, not a physical machine).

This procedure assumes you’ve configured a per host Node Manager for the enterprise topology, as described in Creating a Per Host Node Manager Configuration. For more information, see About the Node Manager Configuration in a Typical Enterprise Deployment.

To fail over the Administration Server to a different host:

  1. Stop the Administration Server on WCCHOST1.

  2. Stop the Node Manager on WCCHOST1.

    You can use the script stopNodeManager.sh that was created in NM_HOME.

  3. Migrate the ADMINVHN virtual IP address to the second host:

    1. Run the following command as root on WCCHOST1 (where X is the current interface used by ADMINVHN) to check the virtual IP address and its CIDR:

      ip addr show dev ethX

      For example:

      ip addr show dev eth0
    2. Run the following command as root on WCCHOST1 (where X is the current interface used by ADMINVHN):

      ip addr del ADMINVHN/CIDR dev ethX
      

      For example:

      ip addr del 100.200.140.206/24 dev eth0
    3. Run the following command as root on WCCHOST2:

      ip addr add ADMINVHN/CIDR dev ethX label ethX:Y 
      

      For example:

      ip addr add 100.200.140.206/24 dev eth0 label eth0:1 

      Note:

      Ensure that the CIDR and interface to be used match the available network configuration in WCCHOST2.

  4. Update the routing tables by using arping, for example:

    arping -b -A -c 3 -I eth0 100.200.140.206
  5. From WCCHOST1, change directory to the Node Manager home directory:

    cd $NM_HOME
  6. Edit the nodemanager.domains file and remove the reference to ASERVER_HOME.

    The resulting entry in the WCCHOST1 nodemanager.domains file should appear as follows:

    wccedg_domain=MSERVER_HOME;
  7. From WCCHOST2, change directory to the Node Manager home directory:

    cd $NM_HOME
  8. Edit the nodemanager.domains file and add the reference to ASERVER_HOME.

    The resulting entry in the WCCHOST2 nodemanager.domains file should appear as follows:

    wccedg_domain=MSERVER_HOME;ASERVER_HOME
  9. Start the Node Manager on WCCHOST1 and restart the Node Manager on WCCHOST2.

  10. Start the Administration Server on WCCHOST2.

  11. Check that you can access the Administration Server on WCCHOST2 and verify the status of components in Fusion Middleware Control using the following URL:

    https://ADMINVHN:9002/em
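
As an optional, illustrative sanity check after the failover, confirm from WCCHOST2 that the virtual IP is active and that the Administration Server responds on ADMINVHN:

    # Confirm the virtual IP is now up on WCCHOST2
    ip addr show dev eth0 | grep 100.200.140.206
    # Confirm the Administration Server responds (an HTTP status such as 200 or 302 is expected)
    curl -k -s -o /dev/null -w "%{http_code}\n" https://ADMINVHN:9002/em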
Validating Access to the Administration Server on WCCHOST2 Through the Load Balancer

If you have configured the web tier to access the Administration Server, it is important to verify that you can access the Administration Server through the standard administration URLs after you perform a manual failover.

From the load balancer, access the following URLs to ensure that you can access the Administration Server when it is running on WCCHOST2:

  • https://admin.example.com:445/em

    Here, 445 is the port that you use to access Fusion Middleware Control through the load balancer. A command-line check is shown after this list.

    This URL should display Oracle Enterprise Manager Fusion Middleware Control.

  • Verify that you can log into the WebLogic Remote Console through the provider you defined for this domain.
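
As an additional command-line check (illustrative only), from a client that resolves admin.example.com to the load balancer, you can confirm that the request reaches the Administration Server now running on WCCHOST2:

    # An HTTP status such as 200 or 302 indicates that the request is being served
    curl -k -s -o /dev/null -w "%{http_code}\n" https://admin.example.com:445/em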

Failing the Administration Server Back to WCCHOST1 When Using a Per Host Node Manager

After you have tested a manual Administration Server failover, and after you have validated that you can access the administration URLs after the failover, you can then migrate the Administration Server back to its original host.

This procedure assumes that you have configured a per host Node Manager for the enterprise topology, as described in Creating a Per Host Node Manager Configuration. For more information, see About the Node Manager Configuration in a Typical Enterprise Deployment.

  1. Stop the Administration Server on WCCHOST2.
  2. Stop the Node Manager on WCCHOST2.
  3. Run the following command as root on WCCHOST2.
    ip addr del ADMINVHN/CIDR dev ethX
    
    For example:
    ip addr del 100.200.140.206/24 dev eth0
  4. Run the following command as root on WCCHOST1:
    ip addr add ADMINVHN/CIDR dev ethX label ethX:Y  
    For example:
    ip addr add 100.200.140.206/24 dev eth0 label eth0:1

    Note:

    Ensure that the CIDR and interface to be used match the available network configuration in WCCHOST1.

  5. Update the routing tables by using arping on WCCHOST1:
    arping -b -A -c 3 -I eth0 100.200.140.206
    
  6. From WCCHOST2, change directory to the Node Manager home directory:
    cd $NM_HOME
  7. Edit the nodemanager.domains file and remove the reference to ASERVER_HOME.
  8. From WCCHOST1, change directory to the Node Manager home directory:
    cd $NM_HOME
  9. Edit the nodemanager.domains file and add the reference to ASERVER_HOME.
  10. Start the Node Manager on WCCHOST2 and restart the Node Manager on WCCHOST1.
  11. Start the Administration Server on WCCHOST1.
  12. Test that you can use the WebLogic Remote Console to access the provider defined for this domain.
  13. Check that you can access and verify the status of components in the Oracle Enterprise Manager by using the following URL:
    https://ADMINVHN:9002/em
    https://admin.example.com:445/em

Modifying the Upload and Stage Directories to an Absolute Path in an Enterprise Deployment

After you configure the domain and unpack it to the Managed Server domain directories on all the hosts, verify and update the upload and stage directories for the Managed Servers in the new clusters. Also, update the upload directory for the AdminServer to use the same absolute path instead of a relative path; otherwise, deployment issues can occur.

This step is necessary to avoid potential issues when you perform remote deployments and for deployments that require the stage mode.

To update the directory paths for the Deployment Stage and Upload locations, complete the following steps:

  1. Log into the WebLogic Remote Console to access the provider of this domain.

  2. Open the Edit Tree.

  3. Expand Environment.

  4. Expand Servers.

  5. Click the name of the Managed Server you want to edit. Perform the following steps for each of the Managed Servers:

    1. Click the Advanced tab.
    2. Click the Deployment tab.
    3. Verify that the Staging Directory Name is set to the following:

      MSERVER_HOME/servers/server_name/stage

      Replace MSERVER_HOME with the full path for the MSERVER_HOME directory.

      Replace server_name with the name of the Managed Server that you are editing.

    4. Update the Upload Directory Name to the following value:

      ASERVER_HOME/servers/AdminServer/upload

      Replace ASERVER_HOME with the directory path for the ASERVER_HOME directory.

    5. Click Save.
    6. Return to the Summary of Servers screen.

    Repeat the same steps for each of the new managed servers.

  6. Navigate to and update the Upload Directory Name value for the AdminServer:

    1. Navigate to Servers and select the AdminServer.
    2. Click the Advanced tab.
    3. Click the Deployment tab.
    4. Verify that the Staging Directory Name is set to the following absolute path:

      ASERVER_HOME/servers/AdminServer/stage

    5. Update the Upload Directory Name to the following absolute path:

      ASERVER_HOME/servers/AdminServer/upload

      Replace ASERVER_HOME with the directory path for the ASERVER_HOME directory.

    6. Click Save.
  7. When you have modified all the appropriate objects, commit the changes in the shopping cart.

  8. Restart all the servers for the changes to take effect. If you are following the EDG steps in order and are not going to make any deployments immediately, you can wait until the next restart.

    Note:

    If you continue directly with further domain configurations, a restart to enable the stage and upload directory changes is not strictly necessary at this time.
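
If you want to spot-check the result without opening the console again, you can search the domain configuration file for the directory settings. This is an illustrative check only; it assumes that the staging-directory-name and upload-directory-name elements have been written to config.xml for the servers you edited:

    grep -E "staging-directory-name|upload-directory-name" ASERVER_HOME/config/config.xml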

About Using Third Party SSL Certificates in the WebLogic and Oracle HTTP Servers

This Oracle WebCenter Content Enterprise Deployment Topology uses SSL all the way from the external clients to the backend WebLogic Servers. The previous chapters in this guide provided scripts (generate_perdomainCACERTS.sh and generate_perdomainCACERTS-ohs.sh) to generate the required SSL certificates for the different FMW components.

These scripts generate the different SSL certificates by using the WebLogic per-domain Certification Authority in the WebLogic domain. These scripts also add the frontend's SSL certificates to the trust keystore. However, in a production environment, you may want to use your own SSL certificates, issued by your own certificate authority or by a third-party certificate authority. This section provides some guidelines to configure the EDG system with these types of SSL certificates.

Using Third Party SSL Certificates in WebLogic Servers

Here are some guidelines about using custom or third party SSL certificates with the WebLogic Servers:

  • The SSL certificate used by each WebLogic server (identity key, private key) must be issued to that server's listen address. For example, if the server WLS_PROD1 listens on apphost1.example.com, the CN of its SSL certificate must be that hostname, or a wildcard name valid for that hostname.

  • Oracle recommends using an identity keystore, shared by all the servers in the same domain, into which you import all the private keys used by the different WebLogic servers, each mapped to a different alias.

  • Oracle recommends using a trust keystore shared by all the servers in the domain. You must import the Certificate Authority’s certificate (and intermediate and root CA if needed) into this trust keystore.

  • You must specify the identity keystore, the alias of the identity key, and the trust keystore for each WebLogic server in the WebLogic domain's configuration. Use the WebLogic Remote Console to configure these SSL settings for each server.

  • Start the WebLogic servers using the appropriate java options to point to the trusted keystore so that they can communicate with external SSL endpoints that use the Certificate Authorities included in such a trust store.

The following commands are useful to manage SSL certificates in WebLogic. A verification example follows the list.

  • Command to import an SSL certificate (a private key) into the identity keystore:

    Syntax

    
    WL_HOME/server/bin/setWLSEnv.sh
    
    java utils.ImportPrivateKey
          -certfile cert_file
          -keyfile private_key_file
          [-keyfilepass private_key_password]
          -keystore keystore
          -storepass storepass
          [-storetype storetype]
          -alias alias 
          [-keypass keypass]
    

    Example for a Certificate Issued to apphost1.example.com

    
    WL_HOME/server/bin/setWLSEnv.sh
    
    java utils.ImportPrivateKey \
    -certfile apphost1.example.com_cert.der \
    -keyfile apphost1.example.com_key.der \
    -keyfilepass keypassword \
    -storetype pkcs12 \
    -keystore CustomIdentityKeystore.pkcs12 \
    -storepass keystorepassword \
    -alias apphost1.example.com \
    -keypass keypassword
    
  • Command to import an SSL certificate (a trusted certificate) into the trusted keystore:

    Syntax

    
    keytool -import -v -noprompt -trustcacerts \
    -alias <alias_for_trusted_cert> \
    -file <certificate>.der \
    -storetype <keystoretype> \
    -keystore <customTrustKeyStore> \
    -storepass <keystorepassword>
    
    

    Example for Importing a CA Certificate

    
    keytool -import -v -noprompt -trustcacerts \
    -alias example_ca_cert \
    -file example_ca_cert.der \
    -storetype pkcs12 \
    -keystore CustomTrustKeyStore.pkcs12 \
    -storepass keystorepassword
    

    Example of the Java Options for Servers to Load Custom Trust Keystore

    
    EXTRA_JAVA_PROPERTIES="${EXTRA_JAVA_PROPERTIES}
          -Djavax.net.ssl.trustStore=/u01/oracle/config/keystores/CustomTrustKeyStore.pkcs12
          -Djavax.net.ssl.trustStorePassword=<keystorepassword>"
    export EXTRA_JAVA_PROPERTIES
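
To verify the contents of the identity and trust keystores after the imports, you can list their aliases. This is an optional, illustrative check that uses the keystore names from the previous examples:

    keytool -list -v -storetype pkcs12 \
    -keystore CustomIdentityKeystore.pkcs12 -storepass keystorepassword

    keytool -list -v -storetype pkcs12 \
    -keystore CustomTrustKeyStore.pkcs12 -storepass keystorepassword
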
Using Third Party SSL Certificates in Oracle HTTP Servers

Here are some guidelines to use your own SSL certificates in OHS:

  • Each OHS virtual host using SSL must use a wallet that contains only one private key. This private key is used as the OHS server's SSL certificate. It must be issued to the hostname on which the virtual host listens (the hostname value in the "VirtualHost" directive). The certificate can also include other hostnames as Subject Alternative Names (SANs) (for example, the value of the "ServerName" directive). The virtual host must include the SSLWallet directive pointing to this wallet.

  • Different OHS virtual hosts can use the same SSLWallet (hence, the same private key), as long as they use the same hostname in the VirtualHost directive. The port can be different.

  • OHS acts as a client when it connects to the WebLogic servers. Hence, it must trust the certificate authority that issued the WebLogic servers' certificates. Use the WLSSLWallet directive in the mod_wl_ohs.conf file to point to the appropriate wallet that contains the CA certificate for the WebLogic servers' certificates.

  • The frontend load balancer acts as a client when it connects to the OHS servers. It must trust the certificate authority that issued the certificates used by OHS. Check your load balancer documentation for how to import the OHS certificates' CA as a trusted authority.

The following commands are useful to manage keys and wallets in OHS. A certificate verification example follows the list.

  • Command to create a wallet for OHS (orapki):

    Syntax

    
    $WEB_ORACLE_HOME/bin/orapki wallet create \
    -wallet wallet \
    -auto_login_only

    Example

    
    $WEB_ORACLE_HOME/bin/orapki wallet create \
    -wallet /u02/oracle/config/keystores/orapki/ \
    -auto_login_only
  • Command to add a private key to a wallet (orapki) from an identity keystore:

    Syntax

    
    $WEB_ORACLE_HOME/bin/orapki wallet jks_to_pkcs12 \
    -wallet wallet \
    -pwd pwd \
    -keystore keystore \
    -jkspwd keystorepassword 
    [-aliases [alias:alias..]]

    Example

    
    $WEB_ORACLE_HOME/bin/orapki wallet jks_to_pkcs12 \
    -wallet /u02/oracle/config/keystores/orapki/ \
    -keystore /u02/oracle/config/keystores/customIdentityKeyStore.pkcs12 \
    -jkspwd keystorepassword \
    -aliases ohshost1.example.com
  • Command to add all the trusted keys to a wallet (orapki) from a trusted keystore:

    Example

    
    $WEB_ORACLE_HOME/bin/orapki wallet jks_to_pkcs12 \
    -wallet /u02/oracle/config/keystores/orapki/ \
    -keystore /u02/oracle/config/keystores/customTrustKeyStore.pkcs12 \
    -jkspwd password
  • Command to list all the keys of a wallet (orapki):

    Example

    
    $WEB_ORACLE_HOME/bin/orapki wallet display \
    -wallet /u02/oracle/config/keystores/orapki/
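
After the wallet is configured and OHS is restarted, you can optionally confirm the certificate that an OHS SSL virtual host presents. The following is an illustrative check; ohshost1.example.com and port 4443 are example values that you must replace with your own virtual host name and SSL port:

    echo | openssl s_client -connect ohshost1.example.com:4443 -showcerts 2>/dev/null | \
    openssl x509 -noout -subject -issuer -dates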

Enabling SSL Communication Between the Middle Tier and SSL Endpoints

It is important to understand how to enable SSL communication between the middle tier and the front-end hardware load balancer, or any other external SSL endpoints that need to be accessed by the WebCenter Content WebLogic Servers, for example, for external web services invocations, callbacks, and so on.

Note:

The following steps are applicable if the hardware load balancer is configured with SSL and the front-end address of the system has been secured accordingly.

When is SSL Communication Between the Middle Tier and Load Balancer Necessary?

In an enterprise deployment, there are scenarios where the software running on the middle tier must access the frontend SSL address of the hardware load balancer. In these scenarios, an appropriate SSL handshake must take place between the load balancer and the invoking servers. This handshake is not possible unless the Administration Server and Managed Servers on the middle tier are started by using the appropriate SSL configuration.

For example, the following scenarios are applicable in an Oracle WebCenter Content enterprise deployment:

  • Oracle SOA Suite composite applications and services often generate callbacks that need to perform invocations by using the SSL address exposed in the load balancer.

  • Oracle SOA Suite composite applications and services often access external web services by using SSL.

  • Finally, when you test a SOA Web services endpoint in Oracle Enterprise Manager Fusion Middleware Control, the Fusion Middleware Control software that is running on the Administration Server must access the load balancer frontend to validate the endpoint.

Generating Certificates, Identity Store, and Truststores

Since this Enterprise Deployment Guide uses end-to-end SSL (except in the access to the database), certificates have already been generated in the different chapters by using a per-domain CA. These certificates have already been added to the pertinent identity stores, and a truststore has also been configured to include the per-domain CA. It is expected that, through the use of the different generateCerts scripts provided, appropriate certificates already exist in these stores for the different listen addresses used by the WebLogic servers in the domain. In addition, when the script generate_perdomainCACERTS-ohs.sh is executed, it traverses all the front-end addresses in the domain's config.xml and adds the pertinent certificates to the trust store used by the domain. By adding this trust store to the Java properties used by the WebLogic Servers in the domain (-Djavax.net.ssl.trustStore and -Djavax.net.ssl.trustStorePassword), the appropriate SSL handshake is guaranteed when these WebLogic servers act as clients in SSL invocations.

Importing Other External Certificates into the Truststore
Perform the following steps to add any other SSL endpoint's certificates to the domain's truststore. These may be external addresses or front ends in other WLS domains that are used by the applications in the WebCenter Content EDG domain:
  1. Access the end point’s site on SSL with a browser (this adds the server's certificate to the browser's repository).
  2. Obtain the certificate from the site. For example, you can obtain a webservice site’s certificate using a browser such as Firefox. From the browser's certificate management tool, export the certificate to a file that is on the server's file system (with a file name such as site.webservice.com.crt). Alternatively, you can obtain the certificate using the openssl command. The syntax of the commands is as follows:
    
    openssl s_client -connect site.webservice.com:443 -showcerts </dev/null 2>/dev/null | openssl x509 -outform PEM > $KEYSTORE_HOME/site.webservice.com.crt
  3. Use the keytool to import the site’s certificate into the truststore:

    For example:

    
    keytool -import -file /oracle/certificates/site.webservice.com.crt -v -keystore appTrustKeyStore.pkcs12 -alias siteWS -storepass password
  4. Repeat this procedure for each SSL endpoint accessed by your WebLogic Servers.

    Note:

    The need to add the load balancer certificate to the WLS server truststore applies only to self-signed certificates. If the load balancer certificate is issued by a third-party CA, you have to import the public certificates of the root and the intermediate CAs into the truststore.
Adding the Updated Trust Store to the Oracle WebLogic Server Start Scripts
Since the trust store’s path was already added to the WebLogic start scripts in the chapter where the domain was created, no additional configuration is required. Simply ensure that the new trust store (with the CAs and/or certs for the SSL endpoints added) replaces the existing one.

Configuring Roles for Administration of an Enterprise Deployment

In order to manage each product effectively within a single enterprise deployment domain, you must understand which products require specific administration roles or groups, and how to add a product-specific administration role to the Enterprise Deployment Administration group.

Each enterprise deployment consists of multiple products. Some of the products have specific administration users, roles, or groups that are used to control administration access to each product.

However, for an enterprise deployment, which consists of multiple products, you can use a single LDAP-based authorization provider and a single administration user and group to control access to all aspects of the deployment. See Creating a New LDAP Authenticator and Provisioning a New Enterprise Deployment Administrator User and Group.

To be sure that you can manage each product effectively within the single enterprise deployment domain, you must understand which products require specific administration roles or groups, you must know how to add any specific product administration roles to the single, common enterprise deployment administration group, and if necessary, you must know how to add the enterprise deployment administration user to any required product-specific administration groups.

For more information, see the following topics.

Adding a Product-Specific Administration Role to the Enterprise Deployment Administration Group

For products that require a product-specific administration role, use the following procedure to add the role to the enterprise deployment administration group:

  1. Sign in to Fusion Middleware Control by using the administrator's account (for example, weblogic_wcc), and navigate to the home page for your application.

    These are the credentials that you created when you initially configured the domain and created the Oracle WebLogic Server Administration user name (typically, weblogic_wcc) and password.

  2. From the WebLogic Domain menu, select Security, and then Application Roles.
  3. For each product-specific application role, select the corresponding application stripe from the Application Stripe drop-down menu.
  4. Click the Search Application Roles icon to display all the application roles available in the domain.
  5. Select the row for the application role that you are adding to the enterprise deployment administration group.
  6. Click the Edit icon to edit the role.
  7. Click the Add icon on the Edit Application Role page.
  8. In the Add Principal dialog box, select Group from the Type drop-down menu.
  9. Search for the enterprise deployment administrators group, by entering the group name (for example, WCCAdministrators) in the Principal Name Starts With field and clicking the right arrow to start the search.
  10. Select the administrator group in the search results and click OK.
  11. Click OK on the Edit Application Role page.
Adding the Enterprise Deployment Administration User to a Product-Specific Administration Group

For products with a product-specific administration group, use the following procedure to add the enterprise deployment administration user (weblogic_wcc) to the group. This allows you to manage the product by using the enterprise deployment administrator user:

  1. Create an ldif file called product_admin_group.ldif similar to the following:
    dn: cn=product-specific_group_name, cn=groups, dc=us, dc=oracle, dc=com
    displayname: product-specific_group_display_name
    objectclass: top
    objectclass: groupOfUniqueNames
    objectclass: orclGroup
    uniquemember: cn=weblogic_wcc,cn=users,dc=us,dc=oracle,dc=com
    cn: product-specific_group_name
    description: Administrators Group for the Domain
    

    Replace product-specific_group_name with the name of the product-specific administration group, and replace product-specific_group_display_name with the display name for the group that appears in the management console for the LDAP server and in the Oracle WebLogic Remote Console.

  2. Use the ldif file to add the enterprise deployment administrator user to the product-specific administration group.

    For Oracle Unified Directory:

    OUD_INSTANCE_HOME/bin/ldapmodify -a 
                                     -D "cn=Administrator" 
                                     -X 
                                     -p 1389 
                                     -f product_admin_group.ldif
    

    For Oracle Internet Directory:

    OID_ORACLE_HOME/bin/ldapadd -h oid.example.com 
                                -p 389 
                                -D cn="orcladmin" 
                                -w <password> 
                                -c 
                                -v 
                                -f product_admin_group.ldif

Using Persistent Stores for TLOGs and JMS in an Enterprise Deployment

The Oracle WebLogic persistent store framework provides a built-in, high-performance storage solution for WebLogic Server subsystems and services that require persistence.

For example, the JMS subsystem stores persistent JMS messages and durable subscribers, and the JTA Transaction Log (TLOG) stores information about the committed transactions that are coordinated by the server but may not have been completed. The persistent store supports persistence to a file-based store or to a JDBC-enabled database. Persistent stores’ high availability is provided by server or service migration. Server or service migration requires that all members of a WebLogic cluster have access to the same transaction and JMS persistent stores (regardless of whether the persistent store is file-based or database-based).

For an enterprise deployment, Oracle recommends using JDBC persistent stores for transaction logs (TLOGs) and JMS.

This section analyzes the benefits of using JDBC persistent stores versus file persistent stores, and explains the procedure for configuring the persistent stores in a supported database. Note that the Configuration Wizard steps provided in the different chapters of this guide already create JDBC persistent stores for the components used. Use the manual steps below for custom stores or for transitioning from file stores to JDBC stores.

Using JDBC Persistent Stores for TLOGs and JMS in an Enterprise Deployment

This section explains the guidelines to use JDBC persistent stores for transaction logs (TLOGs) and JMS. It also explains the procedures to configure the persistent stores in a supported database.

Note:

Remember that the steps provided for setting up the different components in this EDG (using the Configuration Wizard) already configure JDBC persistent stores for them. Use the following steps for custom persistent stores or when reconfiguring from file stores to JDBC stores (migration of messages from file to JDBC is out of the scope of this EDG).
Recommendations for TLOGs and JMS Datasource Consolidation

To accomplish data source consolidation and connection usage reduction, use a single connection pool for both JMS and TLOGs persistent stores.

Oracle recommends that you reuse the WLSRuntimeSchemaDataSource as is for the TLOGs and JMS persistent stores under non-high workloads, and consider increasing the WLSRuntimeSchemaDataSource pool size. Reusing the datasource forces the use of the same schema and tablespace, so the PREFIX_WLS_RUNTIME schema in the PREFIX_WLS tablespace is used for both TLOGs and JMS messages.

High stress (related to high JMS activity, for example) and contention in the datasource can cause stability and performance problems. For example:
  • High contention in the DataSource can cause persistent stores to fail if no connections are available in the pool to persist JMS messages.

  • High Contention in the DataSource can cause issues in transactions if no connections are available in the pool to update transaction logs.

For these cases, use a separate datasource for the TLOGs and a separate datasource for the JMS stores. You can still reuse the PREFIX_WLS_RUNTIME schema, but configure separate custom datasources to the same schema to resolve the contention issue.

Roadmap for Configuring a JDBC Persistent Store for TLOGs

The following topics describe how to configure a database-based persistent store for transaction logs.

  1. Creating a User and Tablespace for TLOGs

  2. Creating GridLink Data Sources for TLOGs and JMS Stores

  3. Assigning the TLOGs JDBC Store to the Managed Servers

Note:

Steps 1 and 2 are optional. To accomplish data source consolidation and connection usage reduction, you can reuse PREFIX_WLS tablespace and WLSRuntimeSchemaDataSource as described in Recommendations for TLOGs and JMS Datasource Consolidation.

Roadmap for Configuring a JDBC Persistent Store for JMS

The following topics describe how to configure a database-based persistent store for JMS.

  1. Creating a User and Tablespace for JMS

  2. Creating GridLink Data Sources for TLOGs and JMS Stores

  3. Creating a JDBC JMS Store

  4. Assigning the JMS JDBC store to the JMS Servers

  5. Creating the Required Tables for the JMS JDBC Store

Note:

Steps 1 and 2 are optional. To accomplish data source consolidation and connection usage reduction, you can reuse PREFIX_WLS tablespace and WLSRuntimeSchemaDataSource as described in Recommendations for TLOGs and JMS Datasource Consolidation.

Creating a User and Tablespace for TLOGs

Before you can create a database-based persistent store for transaction logs, you must create a user and tablespace in a supported database.

  1. Create a tablespace called tlogs.

    For example, log in to SQL*Plus as the sysdba user and run the following command:

    SQL> create tablespace tlogs
            logging datafile 'path-to-data-file-or-+asmvolume'
            size 32m autoextend on next 32m maxsize 2048m extent management local;
    
  2. Create a user named TLOGS and assign to it the tlogs tablespace.

    For example:

    SQL> create user TLOGS identified by password;
    
    SQL> grant create table to TLOGS;
    
    SQL> grant create session to TLOGS;
    
    SQL> alter user TLOGS default tablespace tlogs;
    
    SQL> alter user TLOGS quota unlimited on tlogs;
Creating a User and Tablespace for JMS

Before you can create a database-based persistent store for JMS, you must create a user and tablespace in a supported database.

  1. Create a tablespace called jms.

    For example, log in to SQL*Plus as the sysdba user and run the following command:

    SQL> create tablespace jms
            logging datafile 'path-to-data-file-or-+asmvolume'
            size 32m autoextend on next 32m maxsize 2048m extent management local;
    
  2. Create a user named JMS and assign to it the jms tablespace.

    For example:

    SQL> create user JMS identified by password;
    
    SQL> grant create table to JMS;
    
    SQL> grant create session to JMS;
    
    SQL> alter user JMS default tablespace jms;
    
    SQL> alter user JMS quota unlimited on jms;
    
Creating GridLink Data Sources for TLOGs and JMS Stores

Before you can configure database-based persistent stores for JMS and TLOGs, you must create two data sources: one for the TLOGs persistent store and one for the JMS persistent store.

For an enterprise deployment, you should use GridLink data sources for your TLOGs and JMS stores. To create a GridLink data source:

  1. Log into the WebLogic Remote Console.
  2. Navigate to the Edit Tree.
  3. In the structure tree, expand Services and select Data Sources.
  4. In the Summary of Data Sources page, click New and select GridLink Data Source. Enter the following:

    Table 18-1 GridLink Data Source Properties

    Name: Enter a logical name for the data source. For example, Leasing.
    JNDI Names: Enter a name for JNDI. For example, for the TLOGs store, enter jdbc/tlogs. For the JMS store, enter jdbc/jms.
    Targets: Select the cluster that is using the persistent store and move it to Chosen.
    Data Source Type: Select GridLink Data Source.
    Database Driver: Select Oracle's Driver (Thin) for GridLink Connections Versions: Any.
    Global Transaction Protocol: Select None.
    Listeners: Enter the SCAN address and port for the RAC database, separated by a colon. For example, db-scan.example.com:1521.
    Service Name: Enter the service name of the database in lowercase characters. For a GridLink data source, you must enter the Oracle RAC service name. For example, wccedg.example.com.
    Database Username: Enter the user name. For example, for the TLOGs store, enter TLOGS. For the JMS persistent store, enter JMS.
    Password: Enter the password that you used when you created the user in the database.
    Protocol: Leave the default value (TCP).
    Fan Enabled: This property must be checked.
    ONS Nodes: You can leave this field empty. The ONS node list is retrieved automatically when the database is version 12.2 or higher.
    ONS Wallet and Password: You can leave this field empty.
    Test Configuration: You must enable this option.
  5. Click Create.
  6. Commit changes in the shopping cart.
  7. Repeat Step 4 to Step 6 to create the GridLink data source for the JMS stores.
Assigning the TLOGs JDBC Store to the Managed Servers

If you are going to accomplish data source consolidation, you will reuse the <PREFIX>_WLS tablespace and WLSRuntimeSchemaDataSource for the TLOG persistent store. Otherwise, ensure that you create the tablespace and user in the database, and you have created the datasource before you assign the TLOG store to each of the required Managed Servers.

  1. Log into the Oracle WebLogic Remote Console.
  2. In the Edit Tree, navigate to Environment > Servers.
  3. Click the name of the Managed Server.
  4. Select the Services > JTA tab.
  5. Enable Transaction Log Store in JDBC.
  6. In the Data Source menu, select WLSRuntimeSchemaDataSource to accomplish data source consolidation. The <PREFIX>_WLS tablespace will be used for TLOGs.
  7. In the Transaction Log Prefix Name field, specify a prefix name to form a unique JDBC TLOG store name for each configured JDBC TLOG store.
  8. Click Save.
  9. Repeat step 3 to step 8 for each additional managed server.
  10. To activate these changes, commit the changes in the shopping cart.
Creating a JDBC JMS Store

After you create the JMS persistent store user and table space in the database, and after you create the data source for the JMS persistent store, you can then use the WebLogic Remote Console to create the store.

  1. Log into the WebLogic Remote Console.
  2. Navigate to the Edit Tree.
  3. In the structure tree, expand Services and select JDBC Stores.
  4. Click New.
  5. Enter a persistent store name that easily relates it to the JMS server that uses it, and select the data source that you created for the JMS persistent store.

    Note:

    To accomplish data source consolidation, select WLSRuntimeSchemaDataSource. The <PREFIX>_WLS tablespace is used for JMS persistent stores.
  6. Target the store to the migratable target to which the JMS server belongs.
  7. Repeat Step 4 to Step 6 for each additional JMS server in the cluster.
  8. Commit changes in the shopping cart.
Assigning the JMS JDBC store to the JMS Servers

After you create the JMS tablespace and user in the database, create the JMS datasource, and create the JDBC store, then you can assign the JMS persistence store to each of the required JMS Servers.

To assign the JMS persistence store to the JMS servers:
  1. Log into the WebLogic Remote Console.
  2. Navigate to the Edit Tree.
  3. In the structure tree, expand Services > Messaging > JMS Servers.
  4. Click the name of the JMS Server that you want to use the persistent store.
  5. In the Persistent Store property, select the JMS persistent store you created.
  6. Click Save.
  7. Repeat Step 3 to Step 6 for each of the additional JMS Servers in the cluster.
  8. To activate these changes, commit changes in the shopping cart.
Creating the Required Tables for the JMS JDBC Store

The final step in using a JDBC persistent store for JMS is to create the required JDBC store tables. Perform this task before you restart the Managed Servers in the domain.

  1. Review the available JDBC store schema definitions and decide which table features are appropriate for your environment.

    Three Oracle database schema definitions are provided for JDBC stores in this release. The basic definition uses the RAW data type without any partitioning for indexes. The second uses the BLOB data type, and the third uses the BLOB data type with SecureFiles.

  2. Create a domain-specific well-named folder structure for the custom DDL file on shared storage. The ORACLE_RUNTIME shared volume is recommended so it is available to all servers.

    Example:

    mkdir -p ORACLE_RUNTIME/domain_name/ddl
  3. Create a jms_custom.ddl file in the new shared ddl folder based on your requirements analysis.
    For example, to implement an optimized schema definition that uses both secure files and hash partitioning, create the jms_custom.ddl file with the following content:
    CREATE TABLE $TABLE (
      id     int  not null,
      type   int  not null,
      handle int  not null,
      record blob not null,
    PRIMARY KEY (ID) USING INDEX GLOBAL PARTITION BY HASH (ID) PARTITIONS 8)
    LOB (RECORD) STORE AS SECUREFILE (ENABLE STORAGE IN ROW);

    This example can be compared to the default schema definition for JMS stores, where the RAW data type is used without any partitions for indexes.

    Note that the number of partitions should be a power of two. This ensures that each partition is of similar size. The recommended number of partitions varies depending on the expected table or index growth. You should have your database administrator (DBA) analyze the growth of the tables over time and adjust the tables accordingly. See Partitioning Concepts in Database VLDB and Partitioning Guide.

  4. Use the Remote Console to edit the existing JDBC Store you created earlier; create the table that is used for the JMS data:
    1. Log into the WebLogic Remote Console.
    2. Navigate to the Edit Tree.
    3. In the structure tree, expand Services and select JDBC stores.
    4. Click the persistent store you created earlier.
    5. Click Show Advanced Fields.
    6. Under the Advanced options, enter ORACLE_RUNTIME/domain_name/ddl/jms_custom.ddl in the Create Table from DDL File field.
    7. Click Save.
    8. To activate these changes, commit changes in the shopping cart.
  5. Restart the Managed Servers.
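
After the restart, you can optionally confirm that the JMS store table was created in the JMS schema. This is an illustrative check only; it assumes the default JDBC store table naming (names ending in WLStore) and the JMS user and service name used in the earlier examples:

    sqlplus JMS/password@//db-scan.example.com:1521/wccedg.example.com <<'EOF'
    select table_name from user_tables where table_name like '%WLSTORE%';
    EOF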

About JDBC Persistent Stores for Web Services

By default, web services use the WebLogic Server default persistent store for persistence. This store provides a high-performance storage solution for web services.

The default web service persistence store is used by the following advanced features:
  • Reliable Messaging

  • Make Connection

  • SecureConversation

  • Message buffering

You also have the option to use a JDBC persistence store in your WebLogic Server web service, instead of the default store. For information about web service persistence, see Managing Web Service Persistence.

Best Configuration Practices When Using RAC and Gridlink Data Sources

Oracle recommends that you use GridLink data sources when you use an Oracle RAC database. If you follow the steps described in the Enterprise Deployment guide, the data sources will be configured as GridLink.

GridLink data sources provide dynamic load balancing and failover across the nodes in an Oracle Database cluster, and also receive notifications from the RAC cluster when nodes are added or removed. For more information about GridLink data sources, see Using Active GridLink Data Sources in Administering JDBC Data Sources for Oracle WebLogic Server.

Here is a summary of the best practices when using GridLink to connect to the RAC database:

  • Use a database service (defined with srvctl) different from the default database service

    In order to receive and process notifications from the RAC database, the GridLink data source needs to connect to a database service (defined with srvctl) instead of to the default database service. These services monitor the status of resources in the database cluster and generate notifications when the status changes. A database service is used in the Enterprise Deployment guide, created and configured as described in Creating Database Services.

  • Use the long format database connect string in the data sources

    When GridLink data sources are used, the long format database connect string must be used. The Configuration Wizard does not set the long format string; it sets the short format instead. You can modify it manually later to set the long format. To update the data sources:

    1. Connect to the WebLogic Remote Console and, in the Edit Tree, navigate to Services > Data Sources.
    2. Select a data source, click the Configuration tab, and then click the Connection Pool tab.
    3. Within the JDBC URL, change the URL from jdbc:oracle:thin:@[SCAN_VIP]:[SCAN_PORT]/[SERVICE_NAME] to jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=[SCAN_VIP])(PORT=[SCAN_PORT])))(CONNECT_DATA=(SERVICE_NAME=[SERVICE_NAME])))
      For example:
      jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=ON)(ADDRESS=(PROTOCOL=TCP)(HOST=db-scan-address)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=wccedg.example.com)))
  • Use auto-ons

    The ONS connection list is automatically provided from the database to the driver. You can leave the ONS Nodes list empty in the data sources configuration.

  • Test Connections On Reserve

    Verify that the Test Connections On Reserve is checked in the data sources.

    Even though the GridLink data sources receive FAN events when a RAC instance becomes unavailable, it is a best practice to enable Test Connections On Reserve in the data source to ensure that the connection returned to the application is good.

  • Seconds to Trust an Idle Pool Connection

    For maximum test efficiency, you can also set Seconds to Trust an Idle Pool Connection to 0, so that the connections are always verified. Setting this value to zero means that all the connections returned to the application are tested. If this parameter is set to 10, the result of the previous test remains valid for 10 seconds, and if a connection is reused within those 10 seconds, it is not tested again.

  • Test Frequency

    Verify that the Test Frequency parameter value in the data sources is not 0. This is the number of seconds a WebLogic Server instance waits between attempts when testing unused connections. The default value of 120 is normally enough.

Using TNS Alias in Connect Strings

You can create an alias to map the URL information instead of specifying long database connection strings in the JDBC connection pool of a datasource. The connection string information is stored in a tnsnames.ora file with an associated alias name, and this alias is used in the connect string of the connection pool.

The following example shows a connect string that uses a TNS alias.

jdbc:oracle:thin:@wccedg_alias

The tnsnames.ora file contains the following details.


wccedg_alias =
        (DESCRIPTION=
        (ADDRESS_LIST=
            (LOAD_BALANCE=ON)
            (ADDRESS=(PROTOCOL=TCP)(HOST=wccedgdb-scan)(PORT=1521)))
            (CONNECT_DATA=(SERVICE_NAME=wccedg.example.com))
        )

You must specify the oracle.net.tns_admin property in the datasource configuration to point to a specific tnsnames.ora file. For example: <property><name>oracle.net.tns_admin</name><value>/u01/oracle/config/domains/fmw1412edg/config/tnsadmin</value></property>

This is the Maximum Availability and Enterprise Deployment recommended approach for JDBC URLs. It simplifies JDBC configurations, facilitates DB configuration aliasing in disaster protection scenarios, and makes database connection changes more dynamic. For more information, see Use a TNS Alias Instead of a DB Connection String in Administering JDBC Data Sources for Oracle WebLogic Server.

In Oracle Fusion Middleware 14.1.2, you can use a new type of deployment module to manage the tnsnames.ora files, wallet files, and keystore and truststore files associated with a database connection. These are called DBClientData modules. For more information, see What Are DBClientData Modules in Administering JDBC Data Sources for Oracle WebLogic Server. In this EDG, a DBClientData module is used to maintain the database client information. However, wallets and SSL configuration are not used to access the database, so the DBClientData module contains only the appropriate tnsnames.ora file.

The following steps are required to use a TNS alias in the different Datasources used by FMW and WLS schemas:

  1. Create a tnsnames.ora file with the pertinent alias mapping the URLs used in the connection pools. Copy the connect string from one of the existing datasource configuration files. For example,

    Note:

    This is an example using the short jdbc URL.
    
    [oracle@soahost1~]$  grep url  /u01/oracle/config/domains/wccedgdomain/config/jdbc/opss-datasource-jdbc.xml
        <url>jdbc:oracle:thin:@drdbrac12a-scan.dbsubnet.vcnlon80.oraclevcn.com:1521/wccedg.example.com</url>
    [oracle@soahost1~]$

    Use the information in the connect string to add a long URL entry to a tnsnames.ora file. Use an alias name that identifies your connection. Notice that, in order to deploy the tnsnames.ora file as a DBClientData module, the location of the deployment module needs to be two levels down under the domain config directory if it resides on the WLS Administration Server node. The file can also be created on the node that runs the WebLogic Remote Console and then uploaded (as with an application ear or war file).

    
    [oracle@soahost1~]$  cat /u01/oracle/config/tnsadmin/tnsnames.ora
    
    wccedg_alias =
            (DESCRIPTION=
            (ADDRESS_LIST=
                (LOAD_BALANCE=ON)
                (ADDRESS=(PROTOCOL=TCP)(HOST= drdbrac12a-scan.dbsubnet.vcnlon80.oraclevcn.com)(PORT=1521)))
                (CONNECT_DATA=(SERVICE_NAME=wccedg.example.com))
            )
  2. Deploy the directory containing the tnsnames.ora as a DBClientData module.

    1. Access the domain provider in the WebLogic Remote Console.

    2. Click Edit Tree.

    3. Click Environment > Deployments > Database Client Data Directories.

    4. Click New.

    5. Enter a name for the dbclient directory deployment. For example, dbclientdata_modulename.

      If the directory containing the tnsnames.ora file resides on your local computer, uncheck the Upload checkbox.

    6. Click Create.

    7. Click Save.

      The shopping cart at the top right of the screen is displayed as full, with a yellow bag inside.

    8. Click the Cart icon and select Commit Changes.

      This creates the database client data module under the domain directory, in /u01/oracle/config/domains/wccedgdomain/config/dbclientdata/dbclientdata_modulename.

      You can also deploy a database client data module by using the deploy command in WLST.

  3. Update the different datasources and fmwconfig files to use the alias instead of the explicit URLs.

    Note:

    To update a datasource to use the TNS alias, the datasource configuration needs to include both a pointer to the tnsnames.ora file and the alias itself in the JDBC URL.

    To include a pointer to the tnsnames.ora file, add the oracle.net.tns_admin property to the datasource properties by performing the following steps:

    1. Access the domain provider in the WebLogic Remote Console.

    2. Click Edit Tree.

    3. Click Services > Datasources > Datasource_name.

    4. In the navigation tree on the left, select Properties for the precise Datasource.

    5. Click New.

    6. Enter oracle.net.tns_admin as the property name.

    7. Click Create.

    8. In the next screen with the property details, enter as the value the directory of the dbclientdata_modulename module, which is /u01/oracle/config/domains/wccedgdomain/config/dbclientdata/dbclientdata_modulename in the example above.

    9. Click Save.

      The shopping cart at the top right of the screen is displayed as full, with a yellow bag inside.

    10. In the navigation tree on the left, click the Datasource name.

    11. Select the Connection Pool tab.

    12. In the URL field, replace the URL with the alias syntax, as shown below:

      jdbc:oracle:thin:@wccedg_alias

    13. Click Save.

    14. Click the Cart icon and select Commit Changes.

      If you check the datasource configuration file, it should reflect the following under the <jdbc-driver-params> <properties> entries:

      
      <property>
      <name>oracle.net.tns_admin</name>
      <value>/u01/oracle/config/domains/wccedgdomain/config/dbclientdata/dbclientdata_modulename</value>
      </property>

      The datasource configuration file should also reflect the JDBC URL under <jdbc-driver-params>, as shown below:

      <url>jdbc:oracle:thin:@wccedg_alias</url>

  4. To update the FMW JPS configuration to use the TNS alias, update the domain_path/config/fmwconfig/jps-config.xml and domain_path/config/fmwconfig/jps-config-jse.xml files so that both a pointer to the tnsnames.ora file and the alias itself are included in the JDBC URL. That is, replace the information in the propertySet for the database with the updated URL and the tns_admin pointer.

    
    <property name="oracle.net.tns_admin" value="/u01/oracle/config/domains/wccedgdomain/config/dbclientdata/dbclientdata_modulename"/>
    <property name="jdbc.url" value="jdbc:oracle:thin:@wccedg_alias"/>

Restart the Administration Server for all the changes to be applied.

Alternatively, you can use the https://github.com/oracle-samples/maa/tree/main/1412EDG/fmw1412_change_to_tns_alias.sh script instead of steps 1 through 4 to deploy the corresponding DBClientData module and replace all URLs in the JDBC and JPS configuration with the pertinent alias.

However, using the script is recommended only after all domain extensions have been completed and all the required datasources are present in the domain configuration, because the script exits if a tnsadmin pointer already exists in the configuration files. This behavior is intentional, to avoid conflicts with other DBClientData modules in the domain.

Performing Backups and Recoveries for an Enterprise Deployment

Follow these guidelines to make sure that you back up the necessary directories and configuration data for an Oracle WebCenter Content enterprise deployment.

Note:

Some of the static and runtime artifacts listed in this section are hosted from Network Attached Storage (NAS). If possible, back up and recover these volumes from the NAS filer directly rather than from the application servers.

For general information about backing up and recovering Oracle Fusion Middleware products, see Administering Oracle Fusion Middleware.

Table 18-2 lists the static artifacts to back up in a typical Oracle WebCenter Content enterprise deployment.

Table 18-2 Static Artifacts to Back Up in the Oracle WebCenter Content Enterprise Deployment

Type | Host | Tier
Database Oracle home | DBHOST1 and DBHOST2 | Data Tier
Oracle Fusion Middleware Oracle home | WEBHOST1 and WEBHOST2 | Web Tier
Oracle Fusion Middleware Oracle home | WCCHOST1 and WCCHOST2 (or NAS Filer) | Application Tier
Installation-related files | WEBHOST1, WEBHOST2, and shared storage | N/A

Table 18-3 lists the runtime artifacts to back up in a typical Oracle WebCenter Content enterprise deployment.

Table 18-3 Run-Time Artifacts to Back Up in the Oracle WebCenter Content Enterprise Deployment

Type | Host | Tier
Administration Server domain home (ASERVER_HOME) | WCCHOST1 (or NAS Filer) | Application Tier
Application home (APPLICATION_HOME) | WCCHOST1 (or NAS Filer) | Application Tier
Oracle RAC databases | DBHOST1 and DBHOST2 | Data Tier
Scripts and Customizations | Per host | Application Tier
Deployment Plan home (DEPLOY_PLAN_HOME) | WCCHOST1 (or NAS Filer) | Application Tier
OHS Configuration directory | WEBHOST1 and WEBHOST2 | Web Tier

Online Domain Run-Time Artifacts Backup/Recovery Example

This section describes an example procedure to implement a backup of the WebLogic domain artifacts. This approach can be used during the EDG configuration process, for example, before extending the domain to add a new component.

This example has the following features:

  • The following application tier runtime artifacts are backed up and recovered in this example:

    Artifact | Host | Tier
    Administration Server domain home (ASERVER_HOME) | WCCHOST1 (or NAS Filer) | Application Tier
    Application home (APPLICATION_HOME) | WCCHOST1 (or NAS Filer) | Application Tier
    Deployment Plan home (DEPLOY_PLAN_HOME) | WCCHOST1 (or NAS Filer) | Application Tier
    Runtime artifacts (adapter control files) (ORACLE_RUNTIME) | WCCHOST1 (or NAS Filer) | Application Tier
    Scripts and Customizations | Per host | Application Tier

  • This backup procedure is suitable when a major configuration change is made to the domain (for example, a domain extension). If something goes wrong, or if you make incorrect selections, you can restore the domain configuration to its earlier state.

    Database backup and restore is not mandatory for this sample procedure, but optional steps to back up and restore the database are included.

    Artifact                          Host                    Tier
    Oracle RAC database (optional)    DBHOST1 and DBHOST2     Data Tier

  • Operating system tools are used in this example. Some of the run-time artifacts listed in this section are hosted from Network Attached Storage (NAS). If possible, back up and recover these volumes directly from the NAS filer rather than from the application servers.
  • The managed servers can remain running during the backup. MSERVER_HOME is not backed up; the pack/unpack procedure is used later to recover MSERVER_HOME. Therefore, managed server lock files are not included in the backup.
  • The AdminServer can be running during the backup if the .lok files are excluded from the backup. To avoid an inconsistent backup, do not make any configuration changes until the backup is complete. To ensure that no changes are made in the WebLogic Server domain, you can lock the WebLogic Server configuration.

    Note:

    Exclude the following lock files from the backup:
    • AdminServer/data/ldap/ldapfiles/EmbeddedLDAP.lok
    • AdminServer/tmp/AdminServer.lok
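
    As an optional check (not part of the documented steps), you can list the lock files under the Administration Server domain home to confirm what will be excluded:
    find ${ASERVER_HOME} -name "*.lok" -print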
Back Up the Domain Run-Time Artifacts

To back up the domain run-time artifacts, perform the following steps:

  1. Log in to WCCHOST1 as the oracle user and ensure that you define and export the following variables:

    Variable       Example Value    Description
    BAK_TAG        BEFORE_BPM       Descriptive tag used in the names of the backup files and database restore point.
    BAK_DIR        /backups         Host folder where backup files are stored.
    DOMAIN_NAME    wccedg_domain    Domain name.

    For example:
    export BAK_TAG=BEFORE_BPM
    export DOMAIN_NAME=wccedg_domain
    export BAK_DIR=/backups
  2. Ensure that the following domain variables are set with the values of the domain:

    Variable           Example Value
    ASERVER_HOME       /u01/oracle/config/domains/wccedg_domain
    DEPLOY_PLAN_HOME   /u01/oracle/config/dp
    APPLICATION_HOME   /u01/oracle/config/applications/wccedg_domain
    ORACLE_RUNTIME     /u01/oracle/runtime

    See Table 7-2.
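
    For example, assuming the example values shown above:
    export ASERVER_HOME=/u01/oracle/config/domains/wccedg_domain
    export DEPLOY_PLAN_HOME=/u01/oracle/config/dp
    export APPLICATION_HOME=/u01/oracle/config/applications/wccedg_domain
    export ORACLE_RUNTIME=/u01/oracle/runtime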

  3. Before you make the backup, lock the domain configuration to prevent other accounts from making changes during the backup. To lock the domain configuration from Fusion Middleware Control:
    1. Log in to https://admin.example.com:445/em.
    2. Locate the Change Center at the top of Fusion Middleware Control.
    3. From the Changes menu, select Lock & Edit to lock the configuration edit for the domain.

    Note:

    To avoid an inconsistent backup, do not make any configuration changes until the backup is complete.
  4. Log in to WCCHOST1 and clean up old log files and application backup files before the backup:
    find ${ASERVER_HOME}/servers/AdminServer/logs -type f -name "*.out0*" ! -size 0c -print -exec rm -f {} \+
    find ${ASERVER_HOME}/servers/AdminServer/logs -type f -name "*.log0*" ! -size 0c -print -exec rm -f {} \+
    find ${APPLICATION_HOME} -type f -name "*.bak*" -print -exec rm -f {} \;
  5. Perform the backup of each artifact by using tar:
    tar -cvzf ${BAK_DIR}/backup_aserver_home_${DOMAIN_NAME}_${BAK_TAG}.tgz ${ASERVER_HOME} --exclude="*.lok"
    
    tar -cvzf ${BAK_DIR}/backup_dp_home_${DOMAIN_NAME}_${BAK_TAG}.tgz ${DEPLOY_PLAN_HOME}/${DOMAIN_NAME}
    
    tar -cvzf ${BAK_DIR}/backup_app_home_${DOMAIN_NAME}_${BAK_TAG}.tgz ${APPLICATION_HOME}
    
    tar -cvzf ${BAK_DIR}/backup_runtime_${DOMAIN_NAME}_${BAK_TAG}.tgz ${ORACLE_RUNTIME}/${DOMAIN_NAME}
    
    ls --format=single-column ${BAK_DIR}/backup_aserver_*.tgz
    ls --format=single-column ${BAK_DIR}/backup_dp_*.tgz
    ls --format=single-column ${BAK_DIR}/backup_app_*.tgz
    ls --format=single-column ${BAK_DIR}/backup_runtime_*.tgz
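
    As an optional check (not part of the documented steps), you can list the contents of each archive to confirm that it was created correctly before releasing the configuration lock:
    for f in ${BAK_DIR}/backup_*_${DOMAIN_NAME}_${BAK_TAG}.tgz; do
      echo "== ${f}"
      tar -tzf "${f}" | head -5
    done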
  6. Release the domain lock.
    1. Log in to https://admin.example.com:445/em.
    2. Locate the Change Center at the top of Fusion Middleware Control.
    3. From the Changes menu, select Release Configuration to release the configuration edit for the domain.
  7. Back up your scripts and customizations, if needed.
  8. (Optional) Log in to the database and create a flashback database restore point:

    Note:

    Flashback Database technology is used in this example for database recovery. Check your database version's documentation for more information about Flashback Database.
    1. Create a guaranteed flashback restore point.
      sqlplus / as sysdba
      SQL> create restore point BEFORE_BPM guarantee flashback database;
      SQL> alter system switch logfile;
    2. Verify.
      SQL> set linesize 300
      SQL> column name format a30
      SQL> column time format a32
      SQL> column storage_size format 999999999999
      SQL> SELECT name, guarantee_flashback_database, time, storage_size FROM v$restore_point ORDER BY time;
      
      Example:
      NAME                           GUA TIME                              STORAGE_SIZE
      ------------------------------ --- -------------------------------- -------------
      BEFORE_BPM                     YES 12-MAY-17 03.29.28.000000000 AM     8589934592
      exit
Restore the Domain Run-Time Artifacts

To recover the domain to the point where the backups were made, follow these steps:
  1. Log in to WCCHOST1 using the oracle user.
  2. Stop all the servers in the domain, including the AdminServer.
  3. Ensure that the following domain variables are set with the values of the domain:

    Variable           Example Value
    ASERVER_HOME       /u01/oracle/config/domains/wccedg_domain
    DEPLOY_PLAN_HOME   /u01/oracle/config/dp
    APPLICATION_HOME   /u01/oracle/config/applications/wccedg_domain
    ORACLE_RUNTIME     /u01/oracle/runtime

  4. Move the current folders aside by renaming them. You can delete these folders at the end of the process, after you have verified the recovered domain.
    1. In WCCHOST1:
      mv  ${ASERVER_HOME}  ${ASERVER_HOME}_DELETE
      mv  ${DEPLOY_PLAN_HOME}/${DOMAIN_NAME}  ${DEPLOY_PLAN_HOME}/${DOMAIN_NAME}_DELETE
      mv  ${APPLICATION_HOME}   ${APPLICATION_HOME}_DELETE
      mv  ${ORACLE_RUNTIME}/${DOMAIN_NAME} ${ORACLE_RUNTIME}/${DOMAIN_NAME}_DELETE
    2. In each WCCHOSTn:
      mv   ${MSERVER_HOME}  ${MSERVER_HOME}_DELETE
  5. Locate and identify the backups in the backup folder. Ensure that you define and export the following variables with the correct values for the backup that you want to recover:

    Variable       Example Value    Description
    BAK_TAG        BEFORE_BPM       Descriptive tag used in the names of the backup files and database restore point.
    BAK_DIR        /backups         Host folder where backup files are stored.
    DOMAIN_NAME    wccedg_domain    Domain name.

    For example:
    export BAK_TAG=BEFORE_BPM
    export DOMAIN_NAME=wccedg_domain
    export BAK_DIR=/backups
  6. Recover the files by extracting the backup archives.

    Note:

    The tar archives store the original paths relative to /, so change to the / directory before extracting them so that files are restored to their original locations.
    cd /
    tar -xzvf ${BAK_DIR}/backup_aserver_home_${DOMAIN_NAME}_${BAK_TAG}.tgz
    tar -xzvf ${BAK_DIR}/backup_dp_home_${DOMAIN_NAME}_${BAK_TAG}.tgz
    tar -xzvf ${BAK_DIR}/backup_app_home_${DOMAIN_NAME}_${BAK_TAG}.tgz
    tar -xzvf ${BAK_DIR}/backup_runtime_${DOMAIN_NAME}_${BAK_TAG}.tgz
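
    As an optional check (not part of the documented steps), confirm that the restored directories are back in place before starting any servers:
    ls -d ${ASERVER_HOME} ${DEPLOY_PLAN_HOME}/${DOMAIN_NAME} ${APPLICATION_HOME} ${ORACLE_RUNTIME}/${DOMAIN_NAME}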
  7. (Optional) If you need to recover the database to the flashback recovery point, perform the following steps:
    1. Log in to DBHOST as the oracle user and stop the database:
      srvctl stop database -database wccedgdb
    2. Log in to the database and flash it back to the restore point:
      sqlplus / as sysdba
      SQL> startup mount
      SQL> FLASHBACK DATABASE TO RESTORE POINT BEFORE_BPM;
      Flashback complete.
    3. Open the database with this command:
      SQL> ALTER DATABASE OPEN RESETLOGS;
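
    Depending on your Oracle RAC configuration, you may also need to start the remaining database instances after the database is opened; a hedged example, using the same database name as the stop command above:
      srvctl start database -database wccedgdb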
  8. Start AdminServer:
    ${ORACLE_COMMON_HOME}/common/bin/wlst.sh
    wls:/offline> nmConnect('nodemanager','password','ADMINVHN','5556', 'domain_name','ASERVER_HOME','PLAIN')
    Connecting to Node Manager ...
    Successfully Connected to Node Manager.
    wls:/nm/domain_name > nmStart('AdminServer')
  9. Propagate the domain to the Managed Servers.
    1. Sign in to WCCHOST1 and run the pack command to create the template, as follows:
      cd ${ORACLE_COMMON_HOME}/common/bin
      ./pack.sh -managed=true \
                -domain=ASERVER_HOME \
                -template=/full_path/recover_domain.jar \
                -template_name=recover_domain_template \
                -log_priority=DEBUG \
                -log=/tmp/pack.log
      • Replace ASERVER_HOME with the actual path to the domain directory you created on the shared storage device.
      • Replace /full_path/ with the complete path where you want to create the domain template jar file.
      • recover_domain.jar is an example of the name for the jar file that you are creating.
      • recover_domain_template is an example of the name for the domain template that you are creating.
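
      For illustration, a hedged example invocation that uses the sample values from this guide; the product installation path (/u01/oracle/products/fmw) and the /backups target are assumed examples and should be adjusted to your environment:
      cd /u01/oracle/products/fmw/oracle_common/common/bin
      ./pack.sh -managed=true \
                -domain=/u01/oracle/config/domains/wccedg_domain \
                -template=/backups/recover_domain.jar \
                -template_name=recover_domain_template \
                -log_priority=DEBUG \
                -log=/tmp/pack.log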
    2. Run the unpack command in every WCCHOST, as follows:
      cd $ORACLE_COMMON_HOME/common/bin
      
      ./unpack.sh -domain=MSERVER_HOME \
                  -overwrite_domain=true \
                  -template=/full_path/recover_domain.jar \
                  -log_priority=DEBUG \
                  -log=/tmp/unpack.log \
                  -app_dir=APPLICATION_HOME
      
      • Replace MSERVER_HOME with the complete path to the domain home to be created on the local storage disk. This is the location where the copy of the domain will be unpacked.
      • Replace /full_path/recover_domain.jar with the complete path and file name of the domain template jar file that you created when you ran the pack command to pack up the domain on the shared storage device.
      • Replace APPLICATION_HOME with the complete path to the application directory on the shared storage device.
  10. Restore or reapply your scripts and customizations, if needed.
  11. Start the servers and verify the domain.
  12. After checking that everything is correct, you can delete the previously renamed folders:
    1. In WCCHOST1:
      rm -rf ${ASERVER_HOME}_DELETE
      rm -rf ${DEPLOY_PLAN_HOME}/${DOMAIN_NAME}_DELETE
      rm -rf ${APPLICATION_HOME}_DELETE
      rm -rf ${ORACLE_RUNTIME}/${DOMAIN_NAME}_DELETE
    2. In every WCCHOSTn:
      rm -rf ${MSERVER_HOME}_DELETE