2 Perform Production-to-Test Data Movement

This chapter contains the start-to-finish steps for transferring data from a source Oracle Fusion Applications instance to an existing destination Fusion Applications instance. It contains the following topics:

2.1 About Production-to-Test Data Movement

"Production-to-test" is the movement of application data from a source to a target Fusion Applications installation. Although a common use case is the refreshing of a test database with production data, the same tools could be used to move data between any two environments (production, staging, testing, etc.). Throughout this document, "production" is assumed to be source, and "test" is assumed to be the target.

There are two phases in moving data in a Fusion Applications installation: 1) moving the Identity Management Identity and Policy Store data, and 2) moving data from the Fusion Applications transaction database(s). At a high level, the following are moved:

  • Identity Management Policy Store data (application and system policies, but not credentials and keys)

  • Identity Management Identity Store data (not including AppID and user passwords)

  • Fusion Applications transaction data and the crawl index stored in SES

  • File attachments stored in UCM (such as orders, agreements)

  • ADF Customizations (such as Flex Fields), SOA and ESS customizations stored in MDS

  • Business Intelligence (BI) Web Catalog and RPD

  • ODI repository

  • WebCenter contents

Production-to-test movement replaces most of the target database with production data; a small category of data on the test/target system is preserved, as required by the system. When the content is moved, the target environment is reconfigured and rewired. All long-running processes on the target are stopped and purged, in order to prevent the non-production system from sending emails and alerts to real users, as if it were the production system.

2.1.1 Terminology

Common terminology used in production-to-test data movement includes:

Source Environment - In data movement, the source environment is a fully provisioned Fusion Applications environment with data that will be replicated to another existing environment. The source environment may be used for production, thus the term "production-to-test."

Target Environment - The target environment (which may be used for testing) is a Fusion Applications instance that matches the source. Its transaction data will be overwritten by the source data.

Content Movement - A general term that refers to the task of moving Fusion Applications components and/or data from one environment to another environment.

Abstract Host Name - An abstract host name is an alias that represents a physical node; it has a one-to-one relationship with a virtual host name. If your environment was installed before cloning was released, and therefore without abstract host names, the virtual host names in your source environment become the abstract names in the destination environment. If your source environment did not use virtual host names, physical host names are used instead.

2.2 Roadmap: What Does Production-to-Test Data Movement Entail?

Production-to-test data movement requires the following high-level steps:

2.3 Prerequisites and Assumptions

The following assumptions are made for production-to-test data movement:

  • Source and target systems must be identical in terms of product version, initial patches, deployment topology, and configuration; the same applies to their respective databases. Note: There are required patches for production-to-test that must be applied only to the target system. At that point, the patching of the two systems is no longer identical.

  • Both systems were set up following the same set of instructions.

    Note:

    The procedures in this book are NOT designed for Oracle Fusion Applications systems that were installed using OVM templates. If your source system was installed in this way, contact Oracle Support for the correct production-to-test documentation and procedures.

    If you used virtualization technology, such as Oracle Virtual Machines, to host an operating system, but performed full standard provisioning into that virtualization layer, then the procedures in this book CAN be used. Both source and target systems must match.

  • The OS version and configurations are identical in both environments.

  • Internal host names are identical in both environments.

  • The directory paths and structures are identical in both environments.

  • Both source and target environments are available for access over SSH.

  • The host and port of both OID stores are accessible for data movement.

  • The "IDM_JPSROOT" and "FS_JPSROOT" name values must be identical between the source and target systems.

2.3.1 System Requirements

Versions: Both production and test installations must be on matching versions of Oracle Fusion Applications. Check the title page of this guide for the correct software version; to use this guide, the software and guide versions must match.

The starting versions of the two environments must be identical in terms of patching. The additional patches listed for production to test can be applied to the target system only.

2.3.1.1 Required Patches for Production-to-Test on Target Environment

There are patches specific to production-to-test that must be installed on the Identity Management and Fusion Applications servers. Check the Release Notes for the current list of patch numbers to be installed.

2.3.2 Directory Requirements for APPLTOP (Base), Product and Config Directories

The production-to-test tools assume that both the product binary and instance (or config) directories are located relative to APPLTOP. If that is not the case, symbolic links must be created. For example, if APPLTOP=/u01/oracle, create the following symbolic links under APPLTOP:

  • dbclient -> products/dbclient

  • instance -> config

  • fusionapps -> products/fusionapps
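The links above can be created as sketched below. The APPLTOP path used here is a throwaway stand-in so the commands can be tried safely; substitute your real APPLTOP.

```shell
# Sketch: create the symbolic links the P2T tools expect when the product
# and config directories do not already sit directly under APPLTOP.
# /tmp/p2t_appltop_demo is a stand-in APPLTOP for demonstration only.
APPLTOP=/tmp/p2t_appltop_demo
mkdir -p "$APPLTOP/products/dbclient" "$APPLTOP/products/fusionapps" "$APPLTOP/config"

cd "$APPLTOP"
ln -sfn products/dbclient   dbclient     # dbclient -> products/dbclient
ln -sfn products/fusionapps fusionapps   # fusionapps -> products/fusionapps
ln -sfn config              instance     # instance -> config
```

The -n flag ensures an existing link is replaced rather than followed if the command is rerun.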

2.3.3 Obtaining and Installing the Production-to-Test Tools

There are two steps to installing the production-to-test software: downloading the P2T .zip patch from Oracle Support and extracting it on the Oracle Fusion Applications system.

  1. Download the P2T Patch:

     Go to My Oracle Support (MOS) and download the P2T patch, for example p19816982_111800_Linux-x86-64.zip. It is recommended to download it to the folder where Fusion Applications was installed, such as /u01/{app}, but any location works if it has direct access to the Fusion Applications installation point.
  2. Unzip the file:

     If Fusion Applications is installed on multiple servers, you can install the production-to-test kit on shared storage with identical mapping from all the servers in the Fusion Applications environment (including the Identity Management environment).

     Navigate to the downloaded .zip file and extract it using the unzip command, for example: unzip $??/p19816982_111800_Linux-x86-64.zip. This creates the P2T home folder, with bin and Utils subdirectories containing all the logs, output information, and binary code needed to run the production-to-test processes.

Note:

If $P2T_HOME is not shared among all hosts, repeat the process to install FAP2T_11.1.10.0.0.zip on the FA Source Host and the FA Target Host.
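The extraction step can be sketched as follows. A stand-in zip is built first so the commands run end to end without the real patch; with the actual download you would simply run unzip on the file obtained from MOS.

```shell
# Sketch: extracting the P2T patch produces the P2T home with bin and Utils
# subdirectories. The demo zip below is a stand-in; Python's zipfile CLI is
# used here only to build it, and the extraction stands in for:
#   unzip p19816982_111800_Linux-x86-64.zip
DEMO=/tmp/p2t_unzip_demo
mkdir -p "$DEMO/stage/P2T/bin" "$DEMO/stage/P2T/Utils"
touch "$DEMO/stage/P2T/bin/p2tcli.sh" "$DEMO/stage/P2T/Utils/.keep"
(cd "$DEMO/stage" && python3 -m zipfile -c "$DEMO/p2t_patch.zip" P2T)

cd "$DEMO"
python3 -m zipfile -e p2t_patch.zip .    # stands in for: unzip p2t_patch.zip
export P2T_HOME="$DEMO/P2T"              # point P2T_HOME at the extracted folder
```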

2.3.4 Prerequisites for Executing P2T

Before executing P2T, ensure you meet the following prerequisites:

  • The Administration Servers for all domains, and the Fusion Applications and IDM databases, must be online.

  • The System Administrators responsible for Oracle Fusion Applications must know the values of the properties in the following sections.

2.4 Discovery

The discovery phase may be the most important part of the data movement process. Here you determine all the relevant details of your source and destination environments and record them. Note that the details required for production-to-test data movement are different from those required for cloning.

Refer to the following section to help validate the response files:

2.4.1 Generating the P2T Response Files

The P2T response file (p2t.rsp) contains answers to P2T execution questions. Each answer is stored as a value for a variable identified in the response file. To generate this response file, perform the following steps:
  1. Set the environment variables as follows:
    • JAVA_HOME on FA Nodes:
      export JAVA_HOME=<APPL_TOP>/fusionapps/jdk
      
      For example:
      /u01/app/fa/fusionapps/jdk
      
    • JAVA_HOME on IDM Nodes:
      export JAVA_HOME=<IDM_BASE>/products/app/jdk
      
      For example:
      /u01/app/idm/products/app/jdk
      
    • P2T_HOME on FA and IDM Nodes:
      export P2T_HOME=<P2T_HOME location>
      
  2. Move into the /bin directory:
    cd $P2T_HOME/bin 
    
  3. Ensure that the FA and IDM environments are up and running before performing Steps 4 and 5.
  4. Generate the response (rsp) file for the source server by running the following command on the FA node only once:
    $P2T_HOME/bin/p2tcli.sh discover fa source
    

    This step will prompt you for your password if the security option was enabled in the Config file located at Utils/app/discover/config/DiscoverConfig.xml.

  5. Generate the response (rsp) file for the target server by running the following command on the FA node only once:
    $P2T_HOME/bin/p2tcli.sh discover fa target
    

    After you generate both rsp files, they are located at $P2T_HOME/Utils.

  6. Manually generate the final response file as follows:
    1. Merge the p2t.source.rsp and p2t.target.rsp files generated in Steps 4 and 5 into the p2t.rsp file.
    2. Copy the final p2t.rsp to $P2T_HOME/bin/.

      Note:

      The response file template for the execution of P2T can now be found at $P2T_HOME/bin/p2t.rsp.
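The manual merge in Step 6 can be sketched as follows, using stand-in response files and a stand-in P2T_HOME so the commands are runnable. The property names shown are hypothetical; a plain concatenation is illustrated, but review the merged p2t.rsp afterward and resolve any duplicate or conflicting properties by hand.

```shell
# Sketch: merge the source and target response files into the final p2t.rsp.
# /tmp/p2t_home_demo and the SOURCE_DB_HOST/TARGET_DB_HOST properties are
# stand-ins for demonstration only.
P2T_HOME=/tmp/p2t_home_demo
mkdir -p "$P2T_HOME/Utils" "$P2T_HOME/bin"
printf 'SOURCE_DB_HOST=faprod.example.com\n' > "$P2T_HOME/Utils/p2t.source.rsp"
printf 'TARGET_DB_HOST=fatest.example.com\n' > "$P2T_HOME/Utils/p2t.target.rsp"

# Naive merge: concatenate both files, then place the result in bin/.
cat "$P2T_HOME/Utils/p2t.source.rsp" \
    "$P2T_HOME/Utils/p2t.target.rsp" > "$P2T_HOME/bin/p2t.rsp"
```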

2.5 Back Up the Source and Target FA and IDM Databases

Perform a backup of the source and target databases, using whatever method you prefer: RMAN backup, file system copy, storage replication, VM snapshot, and so on. To keep the source IDM and FA data synchronized, ensure that the system is suspended with no incoming transactions and that all transactions are completed, aborted, or suspended. If possible, run a cold backup to fully ensure synchronization.

2.6 Export Application Data from Source and Target

In this section, you export the source application data and also export selected target data that needs to be preserved and reused. The following topics are discussed:

2.6.1 Generate Encrypted Passwords for P2T

To generate the encrypted passwords for P2T on the source server, run the following command only once:
$P2T_HOME/bin/p2tcli.sh generatePasswords

2.6.2 Export from the IDM and FA Source Systems

In production-to-test for Identity Management, the application users and roles are migrated from source to target, but the passwords are not. The system administrator must therefore set new passwords on the target system for each migrated user who did not already exist on the target. Using the Generated P2T Response File tab in the Workbook, you also modify the p2t.rsp file located at $P2T_HOME/bin/p2t.rsp; this file is used throughout the production-to-test process on both Identity Management and Fusion Applications. While exporting, ensure that source transactions are suspended or have minimal activity.

To pack the IDM source files, run the following scripts:
$P2T_HOME/bin/p2tcli.sh packData preverify idm
$P2T_HOME/bin/p2tcli.sh packData run idm
To pack the FA source files, run the following scripts:
$P2T_HOME/bin/p2tcli.sh packData preverify fa
$P2T_HOME/bin/p2tcli.sh packData run fa

2.6.3 Copy Exported Source IDM and FA Files to Target Server

Copy over the files that were previously packed as follows:
  1. Copy the folder $P2T_HOME/Utils/p2tCore/utilhome/P2TSourcePackedFiles to the target server, using exactly the same path ($P2T_HOME/Utils/p2tCore/utilhome/P2TSourcePackedFiles).
  2. Ensure you have read/write/update access to the folder.
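The copy can be sketched as below. Two local stand-in roots are used so the commands are runnable; on a real system the copy crosses hosts (for example with scp -r or rsync over SSH), but the destination path must match the source path exactly.

```shell
# Sketch: replicate the packed-files folder to an identical relative path.
# /tmp/p2t_copy_demo/{source,target} stand in for the source and target
# hosts; the stand-in packed file is for demonstration only.
SRC_ROOT=/tmp/p2t_copy_demo/source
TGT_ROOT=/tmp/p2t_copy_demo/target
REL=Utils/p2tCore/utilhome/P2TSourcePackedFiles
mkdir -p "$SRC_ROOT/$REL" "$TGT_ROOT/$(dirname "$REL")"
touch "$SRC_ROOT/$REL/idmlcm_data.zip"                 # stand-in packed file

cp -R "$SRC_ROOT/$REL" "$TGT_ROOT/$(dirname "$REL")/"  # same relative path
chmod -R u+rw "$TGT_ROOT/$REL"                         # ensure read/write access
```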

2.6.4 Export from the FA Target System

This step preserves some of the data on the target system which will be automatically re-imported when the production data is migrated. To export from the FA target system, perform the following steps:

  1. Ensure that the folder $P2T_HOME/Utils/p2tCore/utilhome/P2TSourcePackedFiles is available on the target server before executing the target commands.
  2. Run the following commands for Core Data:
    $P2T_HOME/bin/p2tcli.sh generateMoveData preverify fa
    $P2T_HOME/bin/p2tcli.sh generateMoveData run fa
    
  3. Run the following commands for BI Data:
    $P2T_HOME/bin/p2tcli.sh generateMoveDataBI preverify fa
    $P2T_HOME/bin/p2tcli.sh generateMoveDataBI run fa
    
  4. Export the Security Store from the target IDM server by running the following commands:
    $P2T_HOME/bin/p2tcli.sh migrateSecurityStore preverify idm
    $P2T_HOME/bin/p2tcli.sh migrateSecurityStore runsource idm
    
Once you complete the steps above, the copied folder contains the following information:
  • diskspacecheck.txt
  • idmlcm_data.zip
  • obirpd.tgz
  • opss_cloning_work.zip
  • preverifyReportDir
  • fadbhost.mycompany.com.preverifyreport.txt
  • vault.tgz
  • wallets
  • birpdcwallet.sso
  • webcatalog.tgz
  • weblayout.tgz

2.7 Import Application Data to the IDM and FA Target Systems

Production-to-test movement for the transaction data includes the following steps. For each command, run preverify and correct any errors until preverify passes, then execute run.
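The preverify-then-run discipline can be wrapped in a small helper, sketched below. The run_step function is a hypothetical convenience, and a stub p2tcli.sh that always succeeds is created so the sketch runs without a real P2T installation; on a real system $P2T_HOME points at the installed kit.

```shell
# Sketch: the preverify-then-run pattern applied to each import command.
# run_step (hypothetical helper) executes "run" only after "preverify"
# exits cleanly. The stub p2tcli.sh below is for demonstration only.
P2T_HOME=/tmp/p2t_run_demo
mkdir -p "$P2T_HOME/bin"
printf '#!/bin/sh\nexit 0\n' > "$P2T_HOME/bin/p2tcli.sh"   # stub: always succeeds
chmod +x "$P2T_HOME/bin/p2tcli.sh"

run_step() {   # usage: run_step <command> <tier>, e.g. run_step migrateOid idm
  if "$P2T_HOME/bin/p2tcli.sh" "$1" preverify "$2"; then
    "$P2T_HOME/bin/p2tcli.sh" "$1" run "$2"
  else
    echo "preverify failed for $1 $2: fix the reported errors and retry" >&2
    return 1
  fi
}

run_step migrateOid idm    # exits 0 only if preverify and run both succeed
```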

2.7.1 Import IDM Data into Target

To import IDM data into the target server, run the following commands on the IDM server:
$P2T_HOME/bin/p2tcli.sh migrateOid preverify idm
$P2T_HOME/bin/p2tcli.sh migrateOid run idm
$P2T_HOME/bin/p2tcli.sh migrateOid postvalidate idm

2.7.2 Duplicate FA Database from Source to Target

Duplicate the Fusion Applications (FA) database using the method your enterprise prefers, for example RMAN, expdp/impdp, or cold backup and restore.

Ensure you meet the following requirements when replacing the target database with the source database:

  • Before duplicating, shut down the target Fusion Applications Web tier and application tier, as well as the Identity Management Web tier and application tier, and ensure that all in-flight transactions have been completed.
  • The topology and operating systems must be identical between source and destination.
  • When you replace the target database with the source database, the schema passwords come over from the source; you must reset them to the original target passwords.

2.7.3 Import Security Store

Once the Duplication of FA database is completed, import the Security Store to the target IDM server by running the following commands:
$P2T_HOME/bin/p2tcli.sh migrateSecurityStore rundestination idm
$P2T_HOME/bin/p2tcli.sh migrateSecurityStore postvalidate idm

2.7.4 Import FA Application Data

This step imports the FA application data, copied in Copy Exported Source IDM and FA Files to Target Server, into the target (test) database. All long-running processes are stopped and purged, to prevent the non-production system from sending emails or notifications to real users as if it were a production system.

Move packed data and clean up In-Flight transactions for FA as follows:
  1. Ensure the servers are down.

  2. While the servers are down, run the following commands:
    $P2T_HOME/bin/p2tcli.sh applyMoveDataOffline preverify fa
    $P2T_HOME/bin/p2tcli.sh applyMoveDataOffline run fa
    
  3. Bring the servers back up.

  4. While the servers are up, run the following commands:
    $P2T_HOME/bin/p2tcli.sh applyMoveDataOnline preverify fa
    $P2T_HOME/bin/p2tcli.sh applyMoveDataOnline run fa
    
Move packed data and clean up In-Flight transactions for BI as follows:
  1. Ensure the servers are down.

  2. While the servers are down, run the following commands:
    $P2T_HOME/bin/p2tcli.sh applyMoveDataBIOffline preverify fa
    $P2T_HOME/bin/p2tcli.sh applyMoveDataBIOffline run fa
    
  3. Bring the servers back up.

  4. While the servers are up, run the following commands:
    $P2T_HOME/bin/p2tcli.sh applyMoveDataBIOnline preverify fa
    $P2T_HOME/bin/p2tcli.sh applyMoveDataBIOnline run fa
    
  5. Bring the servers down again.

  6. While the servers are down, run the following offline commands:
    $P2T_HOME/bin/p2tcli.sh applyMoveDataBIViaBIFacadeOffline preverify fa
    $P2T_HOME/bin/p2tcli.sh applyMoveDataBIViaBIFacadeOffline run fa
    
  7. Bring the servers back up again.

  8. While the servers are up, run the following online commands:
    $P2T_HOME/bin/p2tcli.sh applyMoveDataBIViaBIFacadeOnline preverify fa
    $P2T_HOME/bin/p2tcli.sh applyMoveDataBIViaBIFacadeOnline run fa
    

2.7.4.1 Validate

After completing the Fusion Applications production-to-test steps, restart the Fusion Applications stack. All domains and managed servers must restart successfully. The system is then ready for functional testing.