Implement Oracle Database File System (DBFS) Replication

This implementation consists of copying the mid-tier content to a DBFS folder and relying on Oracle Data Guard to replicate it to the secondary site. The mid-tier contents don’t reside directly on DBFS, because that would make the middle tier dependent on the DBFS infrastructure (database, FUSE libraries, mount points, and so on). The DBFS mount is only an intermediate staging folder to store a copy of the contents.

Any replication to standby implies two steps in this model: from the primary's origin folder to the intermediate DBFS mount, and then, in the secondary site, from the DBFS mount to the standby's destination folder. The intermediate copies are done using rsync. As this is a low-latency and local rsync copy, some of the problems that arise in a remote rsync copy operation are avoided with this model.

Note:

This method is not supported with Oracle Autonomous Database, which doesn’t allow DBFS connections.


(Figure: replica-mid-tier-dbfs-oracle.zip)

The advantages of implementing the mid-tier replica with DBFS are:

  • This method takes advantage of the robustness of the Oracle Data Guard replica.
  • The real mid-tier storage can remain mounted in the secondary nodes. There are no additional steps to attach or mount the storage in the secondary in every switchover or failover operation.

The following are considerations for implementing the mid-tier replica with DBFS:

  • This method requires an Oracle Database with Oracle Data Guard.
  • The mid-tier hosts need the Oracle Database client to mount the DBFS.
  • The use of DBFS for replication has implications from the setup, database storage, and lifecycle perspectives. It requires an installation of the Oracle Database client in the mid-tier hosts, certain database maintenance (to clean, compress, and reduce table storage), and a good understanding of how DBFS mount points behave.
  • The DBFS directories can be mounted only while the database is open. When the configuration does not use Active Data Guard, the standby database is in mount state; hence, to access the DBFS mount in the secondary site, you must convert the database to a snapshot standby. When Active Data Guard is used, the file system can be mounted for reads, and there is no need to convert to a snapshot standby.
  • It is not recommended to use DBFS as a general-purpose solution to replicate all the artifacts (especially runtime files) to the standby. Using DBFS to replicate the binaries is overkill. However, this approach is suitable to replicate a few artifacts, like the configuration, when other methods like storage replication or rsync do not fit the system's needs.
  • It is the user’s responsibility to create the custom scripts for each environment and run them periodically.
  • It is the user’s responsibility to implement a way to reverse the replica direction.

Set Up Replication for Database File System

This implementation uses rsync and follows the staging model described above: each copy is local to a site, between the mid-tier host and its DBFS mount, and Oracle Data Guard transports the staged content to the secondary site. Unlike the peer-to-peer model, it does not require SSH connectivity between the mid-tier peer hosts.

The following is required to implement mid-tier replica with DBFS:

  • An Oracle Database client installation on the mid-tier hosts that perform the copy, both in the primary and secondary.
  • A DBFS file system created in the database.
  • A DBFS mount in the mid-tier hosts that perform the copies, both in primary and secondary. This mounts the database’s DBFS file system. This file system can be mounted in more than one host, since DBFS is a shareable file system.
  • Scripts that copy the mid-tier file artifacts to the DBFS mount in the primary site.
  • Scripts that copy the mid-tier file artifacts from the DBFS mount to the folders in the secondary site. Depending on the implementation, this method may require SQL*Net connectivity between the mid-tier hosts and the remote database for database operations such as role conversions.
  • A way to manage the site-specific information, either excluding that info from the copy or updating it with the appropriate info after the replica.
  • Schedule these scripts to run on an ongoing basis.
  • A mechanism to change the direction of the replica after a switchover or failover.
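For example, the scheduling and reversal requirements can be covered with a single cron entry on each administration host that runs a role-aware script. The path and hourly frequency below are assumptions; adjust them to your system:

```shell
# oracle user's crontab on both administration hosts (hypothetical path):
# the script itself decides the copy direction from the database role.
0 * * * * /u01/scripts/config_replica.sh >> /u01/scripts/config_replica.log 2>&1
```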
Example 1: Use DBFS to replicate Oracle WebLogic domain

Note:

The following example applies to Oracle WebLogic systems. You can use it as a reference for copying other folders of the mid-tier system through DBFS, but this particular example uses a script that replicates the domain folder of the WebLogic Administration host to the secondary site through DBFS.

This example shows how to replicate the domain folder of the WebLogic Administration host through DBFS. The content located outside the domain folder, as well as content on other hosts, is not included in this example. The domain folder doesn’t reside directly on DBFS; the DBFS mount is only an intermediate staging folder to store a copy of the domain folder.

This example provides a script to perform these actions; it must run periodically on both the primary and standby sites. The script copies the WebLogic Administration domain folder, skipping items such as tmp folders, .lck and .state files, and the tnsnames.ora file. The procedure consists of the following:

  • When the script runs on the WebLogic Administration host of the primary site, the script copies the WebLogic domain folder to the DBFS folder.
  • The files copied into the DBFS, as they are stored in the database, are automatically transferred to the standby database through Oracle Data Guard.
  • When the script runs on the WebLogic administration host of the secondary site:
    • The script converts the standby database to a snapshot standby.
    • Then, it mounts the DBFS file system from the standby database.
    • The replicated domain folder is now available in this DBFS folder. The script copies it from the DBFS mount to the real domain folder.
    • Finally, the script converts the standby database to a physical standby again.
  • In case of a role change, the script automatically adapts the execution to the new role. It gathers the actual role of the site by checking the database role.
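The primary-side copy with the exclusions mentioned above (tmp, .lck and .state files, tnsnames.ora) can be sketched as follows; temporary folders stand in for the real DOMAIN_HOME and DBFS staging paths:

```shell
# Sketch of the primary-side copy with exclusions (paths are stand-ins).
DOMAIN_HOME=$(mktemp -d)
DBFS_STAGE=$(mktemp -d)

# Create a minimal mock domain layout for illustration.
mkdir -p "$DOMAIN_HOME/config" "$DOMAIN_HOME/servers/tmp"
echo cfg  > "$DOMAIN_HOME/config/config.xml"
echo tns  > "$DOMAIN_HOME/config/tnsnames.ora"
echo lock > "$DOMAIN_HOME/servers/AdminServer.lck"

# Copy the domain into the DBFS staging folder, skipping the items
# that must not be replicated (site-specific or transient files).
rsync -a --delete \
  --exclude='tmp/' --exclude='*.lck' --exclude='*.state' \
  --exclude='tnsnames.ora' \
  "$DOMAIN_HOME"/ "$DBFS_STAGE"/
```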

This script only replicates the domain folder of the WebLogic Administration host. The content under the DOMAIN_HOME/config folder is automatically copied over to all other nodes that are part of the WebLogic domain when the managed servers start. The files outside this folder and the files located on other hosts are not replicated and need to be synchronized separately.

For application deployment operations, use the Upload your files deployment option in the WebLogic Administration Console. This way, the deployed files are placed under the upload directory of the Administration Server ($DOMAIN_HOME/servers/admin_server_name/upload), and the config replica script will sync them to the standby site.

This example provides another script to install the Database client and to configure a DBFS mount in the mid-tier hosts. The following image shows an example of an Oracle WebLogic Server for OCI system with DBFS replication.

(Figure: wls-dbfs-replication-oracle.zip)

Perform the following to use the DBFS method to replicate the WebLogic domain:

  1. Allow SQL*Net connectivity between the administration hosts and the remote databases.
    Oracle recommends using remote peering with the Dynamic Routing Gateway. The script requires this connectivity to perform database operations such as role conversions. When the script runs in the site with the standby role, it converts the standby database to a snapshot standby so that it can mount the DBFS file system.
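For reference, a snapshot standby round trip with the Data Guard broker can look like this; the connection details and database name are hypothetical, and the packaged scripts may issue different commands:

```shell
# Convert the standby to an updatable snapshot standby, do the copy, revert.
dgmgrl sys@standby_db "CONVERT DATABASE 'standby_db' TO SNAPSHOT STANDBY"
# ... mount the DBFS file system and copy from it to the domain folder ...
dgmgrl sys@standby_db "CONVERT DATABASE 'standby_db' TO PHYSICAL STANDBY"
```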
  2. Download the scripts.
    This document provides scripts to configure the DBFS mount and to automate the replication.
    1. Go to the Oracle MAA repository in GitHub. Refer to the Explore More section in this playbook.
    2. Download all the scripts in the app_dr_common directory.
    3. Download all the scripts in the wls_mp_dr directory.
    4. Copy them to the primary and secondary Administration hosts, to a location that is not replicated.
    5. The scripts make calls to each other. Copy the scripts of both directories and place them in the same folder. Ensure that the oracle user has execution permission.
  3. Configure the DBFS mount in primary and secondary administration hosts.

    Note:

    If you already have a DBFS mount, then you can skip this step. For example, some SOA Marketplace stacks come with a DBFS mount ready-to-use.
    Configure the DBFS mount in the primary and secondary WebLogic Administration hosts. It requires the Database client and some operating system packages on the WebLogic Administration host. Follow these steps in each administration host:
    1. Download the DB client from e-delivery and upload it to the mid-tier host (do NOT install it yet). Search for Oracle Database Client, and select the database client only. Click Continue and then select the installer version (not the gold image).

      Note:

      Be sure that you download the installer version, not the image-based installation. It is recommended to use the latest version.
      For example, download the V982064-01.zip file (Oracle Database Client 19.3.0.0.0 for Linux x86-64, 1.1 GB) and upload it to /u01/install/V982064-01.zip on all the mid-tier hosts.

      Do NOT install it yet.

    2. Locate the script dbfs_dr_setup_root.sh under the folder app_dr_common.
      This script performs the tasks to get the DBFS mount ready on the host: it installs the Database client and the required operating system packages, configures the DBFS user and schema in the database, mounts the DBFS file system, and creates a cron entry so that the DBFS file system is mounted on host boot.
    3. Execute the script as the root user.
      The syntax is as follows:
      ./dbfs_dr_setup_root.sh local_db_scan_name db_port local_PDB_service pdb_sys_password path_to_dbclient_installer
      As input parameters, provide the connection data for the local database used by the WebLogic system: the primary PDB connection data when you run it on the primary site administration host, and the secondary PDB connection data when you run it on the secondary administration host.

      Note:

      The standby database must be in snapshot standby mode to run this script in the secondary administration host.
      The following example runs it on the primary mid-tier administration host. It must be a single line, and you must provide your primary PDB values and password:
      ./dbfs_dr_setup_root.sh  drdba-scan.wlsdrvcnlon1ad2.wlsdrvcnlon1.oraclevcn.com 1521 mypdbservice.example.com  mypassword   /u01/install/V982064-01.zip
      The following example runs it on the secondary mid-tier administration host. It must be a single line, and you must provide your secondary PDB values and password:
      ./dbfs_dr_setup_root.sh  drdbb-scan.wlsdrvcnfra1ad2.wlsdrvcnfra1.oraclevcn.com 1521 mypdbservice.example.com  mypassword   /u01/install/V982064-01.zip
      As a result of the execution of this script, you get the following:
      • Database Client home: /u01/app/oracle/client. The script installs the database client software on the host. It also uses yum to install the required operating system packages.
      • Database user: dbfsuser (password: same as sys). A user in the PDB database for DBFS.
      • DBFS tablespace: tbsdbfs. A tablespace in the PDB for the DBFS mount.
      • DBFS folder: dbfsdir. The DBFS folder in the tablespace.
      • Folder in the mid-tier host: DOMAIN_HOME/dbfs. It contains the wallet that stores the user, the password, and other artifacts (tnsnames.ora, sqlnet.ora) required by the database client to mount the DBFS file system in the host.
      • Script in the mid-tier host: DOMAIN_HOME/dbfs/dbfsMount.sh. A script to mount the DBFS file system in the host. It is added to the cron on reboot, so it runs when the machine is rebooted.
      • Mount point in the mid-tier host: /u02/data/dbfs_root. The DBFS file system is mounted on the /u02/data/dbfs_root mount point as the folder dbfsdir.

      You can re-run the script, but you will receive warnings because some objects already exist (database user, tablespace, and so on). You can ignore these messages.
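The on-reboot entry that the setup script adds to the cron can look like this; the domain path below is a hypothetical example, the script uses your actual domain path:

```shell
# crontab entry: mount the DBFS file system again after a host reboot
@reboot /u01/data/domains/mydomain/dbfs/dbfsMount.sh
```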

    4. Verify that the DBFS mount is present in the mid-tier administration host.
      [root@prefix-wls-1]# df -h | grep dbfs
      dbfs-@PDB1:/     32G  248K   32G   1% /u02/data/dbfs_root
      [root@prefix-wls-1]# ls /u02/data/dbfs_root
      dbfsdir
      This DBFS file system is used as a staging file system to store a copy of the primary site's domain configuration.
  4. Prepare the replica script.
    This document provides a reference script for this implementation, the config_replica.sh script.
    1. In the primary Oracle WebLogic Administration host, open the config_replica.sh script. Edit the customizable parameters sections.
      Make sure you provide the appropriate variables for the primary. In the DR_METHOD property, use DBFS.
    2. Do the same in the secondary Oracle WebLogic Administration host. Make sure you provide the appropriate variables for the secondary.
  5. Run the replication script.
    1. As the oracle user, run the config_replica.sh script in the primary Oracle WebLogic Administration host.
      The script will verify the current site role and copy the domain configuration from the primary Oracle WebLogic Server domain to the DBFS mount.
    2. Monitor the execution and watch for any errors.
    3. Once it completes, run the config_replica.sh script in the secondary site’s Oracle WebLogic Administration Server host.
      Ensure that you use the appropriate values in the customized parameters. The script will verify the database role. Since it is the standby, it will copy the domain configuration from the secondary staging file system to the secondary Oracle WebLogic Server domain.

    Note:

    This script must always run both in primary and standby to perform a complete replication: first on the primary to copy the domain to DBFS folder, and then on the standby to copy the domain from the DBFS to the domain folder. The frequency depends on how often the configuration changes are performed on the Oracle WebLogic Server domain.

Validate Replication for Database File System

In a switchover or failover operation, the replicated information must be available and usable in the standby site before the processes are started. This is also required when you validate the secondary system (by converting the standby database to a snapshot standby).

In this implementation, the storage is always available in the standby; you don’t need to attach or mount any volume. The only action you need is to ensure that it contains the latest version of the contents.

Perform the following to use the replicated contents in standby:

  1. Run a replication.
    Run the replica scripts to make the latest content available in the secondary system.
  2. Disable scheduled replications.
    Once the last replica finishes, disable any replica script. Otherwise, it can interfere with the switchover, failover, or validation procedure. You will enable it again after the operation, in the appropriate direction.
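One way to disable the scheduled replication (an assumption, not part of the provided scripts) is to comment out the job in the oracle user's crontab. The helper below edits crontab text passed on stdin and assumes the job line contains the string config_replica.sh:

```shell
# Comment out the replication job before a switchover, failover, or validation.
disable_job() {
  # Reads crontab text on stdin; writes it back with the job commented out.
  sed 's|^\([^#].*config_replica\.sh.*\)$|#\1|'
}
```

Typical use is `crontab -l | disable_job | crontab -`; re-enable the job after the operation by removing the leading `#`.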

Perform Ongoing Replication for Database File System

Run the replication script periodically to keep the secondary domain in sync with the primary.

Follow these recommendations for the ongoing replication:
  • Use the OS crontab or another scheduling tool to schedule the replication. The interval must allow each run to complete before the next one starts; otherwise, the subsequent jobs may overlap.
  • Keep the mid-tier processes stopped in the standby site. If the servers are up in the standby site while the changes are replicated, the changes will take effect the next time they are started. Start them only when validating the standby site or during the switchover or failover procedure.
  • Maintain the information that is specific to each site and keep it up-to-date. For example, skip the tnsnames.ora from the copy, so each system has its connectivity details. If you perform a change in the tnsnames.ora in primary (for example, adding a new alias), manually update the tnsnames.ora in secondary accordingly.
  • After a switchover or failover, reverse the replica direction. This depends on the specific implementation. The scripts can use a dynamic check to identify who is the active site, or you can perform a manual change after a switchover or failover (for example, disabling and enabling the appropriate scripts). In the example provided, the config_replica.sh script automatically adapts the execution to the actual role of the site by checking the local database role.
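A dynamic check similar in spirit to the one in config_replica.sh can be sketched as follows. This helper and its return values are assumptions; the role string itself would come from querying `select database_role from v$database;` on the local database:

```shell
# Sketch: choose the copy direction from the local database role.
replica_direction() {
  case "$1" in
    PRIMARY)
      echo "copy_domain_to_dbfs" ;;   # primary: domain folder -> DBFS stage
    "PHYSICAL STANDBY"|"SNAPSHOT STANDBY")
      echo "copy_dbfs_to_domain" ;;   # standby: DBFS stage -> domain folder
    *)
      echo "unknown_role"; return 1 ;;
  esac
}
```

With such a check, the same script can run unmodified on both sites, and the replica direction reverses automatically after a role change.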