Oracle® Fusion Applications Cloning and Content Movement Administrator's Guide
11g Release 5 Refresh 8 (11.1.5)

Part Number E38322-12

3 Perform Production-to-Test Data Movement

This chapter contains the following high-level sections:

  • Section 3.1, "Introduction"

  • Section 3.2, "Prerequisites"

  • Section 3.3, "Fill Out the Discovery Workbook"

  • Section 3.4, "Reconcile Identity Management Data (IDM)"

  • Section 3.5, "Move Fusion Applications Data"

3.1 Introduction

This chapter contains the start-to-finish steps for transferring data from a source Oracle Fusion Applications instance onto an existing destination Fusion Applications instance. This is separate from the cloning procedures described in Part I.

3.1.1 What is Production-to-Test Data Movement?

"Production-to-test" is the movement of application data from a source to a target Fusion Applications installation. Although a common use case is the refreshing of a test database with production data, the same tools could be used to move data between any two environments (production, staging, testing, etc.). In this chapter, "production" is assumed to be source, and "test" is assumed to be the target.

3.1.1.1 What is Moved in Production to Test?

There are two phases in moving data in a Fusion Applications installation: 1) moving the Identity Management Identity and Policy Store data, and 2) moving data from the Fusion Applications transaction database(s). At a high level, the following are moved:

  • Identity Management Policy Store data (application and system policies, but not credentials and keys)

  • Identity Management Identity Store data (not including AppID and user passwords)

  • Fusion Applications transaction data and the crawl index stored in SES

  • File attachments stored in UCM (such as orders, agreements)

  • ADF Customizations (such as Flex Fields), SOA and ESS customizations stored in MDS

  • Business Intelligence (BI) Web Catalog and RPD

  • ODI repository

  • WebCenter contents

Production-to-test movement replaces most of the target database with production data; a small category of data on the test/target system is preserved, as required by the system. When the content is moved, the target environment is reconfigured and rewired. All long-running processes on the target are stopped and purged, in order to prevent the non-production system from sending emails and alerts to real users, as if it were the production system.

3.1.1.2 Terminology

Common terminology used in production-to-test data movement includes:

Source Environment - In data movement, the source environment is a fully provisioned Fusion Applications environment with data that will be replicated to another existing environment. The source environment may be used for production, thus the term "production-to-test."

Target Environment - The target environment (which may be used for testing) is a matching Fusion Applications instance to the source. It will have its transaction data overwritten by the source data.

Content Movement - A general term that refers to the task of moving Fusion Applications components and/or data from one environment to another environment.

Abstract Host Name - An abstract host name is an alias given to represent a physical node. It has a one-to-one relationship with a virtual host name. If your environment was installed before cloning was released, and therefore without abstract host names, the virtual host names in your source environment become the abstract host names in the destination environment. If your source environment did not use virtual host names, then physical host names are used.

3.1.2 Roadmap: What Does Production to Test Data Movement Entail?

Production-to-test data movement requires the following steps:

  • Fulfill Prerequisites and download the production-to-test tools. See Section 3.2.

  • Complete Discovery: Record the in-depth details of the source and destination topology and configuration, typing the entries into the P2T tabs of the provided Discovery Workbook for Cloning and Content Movement. See Section 3.3.

  • Move Identity Management data using a five-step process. See Section 3.4. This step must be completed before moving the Fusion Applications data.

  • Move Fusion Applications data from production to test, while also exporting and re-importing selected test data that must be preserved. See Section 3.5.

3.2 Prerequisites

The following assumptions are made for production-to-test data movement:

3.2.1 System Requirements

Versions: Both production and test installations must be on matching versions of
Oracle Fusion Applications. Check the title page of this guide for the correct software version; to use this guide, the software and guide versions must match.

The starting versions of the two environments must be identical in terms of patching. The additional patches listed for production-to-test can be applied to the target system only.

Operating system: The source and destination environments must be installed on Linux only (versions certified for Oracle Fusion Applications). (This requirement excludes the database, which can run on all supported OSs.)

3.2.1.1 Required Patches for Production-to-Test on Target Environment

There are patches specific to production-to-test that must be installed on the Identity Management and Fusion Applications servers. Check the Release Notes for the current list of patch numbers to be installed.

3.2.2 Obtaining the Production-to-Test Tools

If Fusion Applications is installed on multiple servers, you can install the production-to-test kit in shared storage with identical mapping from all the servers of the Fusion Applications environment. (This includes the Identity Management environment).

Unzip famigratep2t.zip to $ORACLE_BASE.

For example:

# Create ORACLE_BASE (/u01/oracle/p2t in this example) and extract the P2T kit
mkdir -p /u01/oracle/p2t
cd /u01/oracle/p2t
unzip famigratep2t.zip

In this example, the P2T working directory ($WORKING_DIR) is /u01/oracle/p2t/famigratep2t.

3.3 Fill Out the Discovery Workbook

The discovery phase may be the most important part of the data movement process. Here you determine all the relevant details of your source and destination environments, and record them. Note that the details required for production-to-test data movement are different than those for Cloning, and have their own tabs in the Workbook.

3.3.1 Using the Discovery Workbook

The Oracle Fusion Applications Discovery Workbook for Cloning and Content Movement is a required companion document to this User Guide. It is used to help you research and annotate every aspect of your source and destination Fusion Applications environments. Fill in the P2T tabs in the Workbook; you will then copy/paste the entries to complete the p2t.rsp response file appropriately.

3.3.1.1 Where to Find provisioning.rsp and provisioning.plan

The best resource for many of the Workbook entries is the provisioning.rsp file. For some data, it is also necessary to refer to provisioning.plan.

Both files may be located in the same directory:
(APPLICATIONS_BASE/provisioning/plan/).
If the .rsp file is not in the /plan directory, search for provisioning.setup.core.provisionplan.install within provisioning.plan, to see where the .rsp file is located.
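For example (a minimal sketch, assuming the environment variable APPLICATIONS_BASE points to your Fusion Applications base directory and the plan file is in the default location), the property can be located with a simple text search:

# Show the line that records the location of the .rsp file
grep provisioning.setup.core.provisionplan.install \
  $APPLICATIONS_BASE/provisioning/plan/provisioning.plan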

3.3.2 Prepare for the Discovery Phase

The Workbook gives some shorthand tips on where to find things or how to enter them, but this section of the User Guide provides much more guidance.

To begin, open the Discovery Workbook and proceed through the three tabs of data you are asked to collect. They are organized as follows:

  • P2T Identity Management (see Section 3.3.3)

  • P2T Fusion Applications (see Section 3.3.4)

  • P2T Passwords (see Section 3.3.5)

The last tab is special; it automatically collates the data from the rest of the tables and organizes them for ease of use in the p2t.rsp response file. It is:

  • Generated P2T RSP Entries (see Section 3.3.6)

3.3.3 P2T Identity Management

There are three tables in the P2T Identity Management tab. The following sections give tips on finding the correct values for each row in the tables.

3.3.3.1 IDM Database Information (Source and Target)

The IDM database administrator should know the host names, service names, port numbers, and schema names for the OID and OIM on the target and source environments. Enter these values in the appropriate tabs. When a field is marked N/A, the entry is not needed by the P2T script and can be omitted.

3.3.3.2 IDM Midtier Information (Production/Source)

  • OID Hostname: Enter the physical host name for the server where the OID resides on both source and target.

  • OID Port: If you need to locate this information, perform a file system search for the ports.prop file: $OID_INSTANCE/config/OPMN/opmn/ports.prop. Search for /oid1/oid1_nonSSLPort= to find the number (see the sketch at the end of this list).

  • OVD Port: If you need to locate this information, perform a file system search for the listeners xml file: $OVD_INSTANCE/config/OVD/ovd1/listeners.os_xml. Search for <ldap id="LDAP Endpoint" version="1">; the port number is listed immediately below.

  • JPS Config Directory:

    This section proceeds in several parts.

    TARGET: To fill in the Target value, find fmwconfig on IDM domain home. ($IDM_DOMAIN_HOME/config/fmwconfig). Enter this value in the Target column.

    SOURCE: This entry is unusual and requires several steps. The entry in the Source column is a pointer to a temporary directory you will create on the Target. Into this directory, you will copy three items from the Source fmwconfig, as follows:

    1. Create a temporary directory on the target, such as /tmp/config/fmwconfig.
    Enter this value in the Source column.

    2. Search the source system for the IDM domain home:
    $IDM_DOMAIN_HOME/config/fmwconfig. In this directory are 1) the bootstrap directory, 2) jps-config.xml, and 3) jps-config-jse.xml.

    3. Copy these three items into the directory that was created on the target in step 1, such as /tmp/config/fmwconfig.

    Note: When you have completed the whole P2T process, delete this temporary directory from the target environment.

  • IDM Admin Server Path/ IDM_DOMAIN_HOME: The path to this domain home can be found from Fusion Middleware Control, if needed.
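
The port lookups and the JPS configuration copy described in this list can be scripted. The following is a minimal sketch only; it assumes $OID_INSTANCE, $OVD_INSTANCE, and $IDM_DOMAIN_HOME are set, and the source host name sourcehost.mycompany.com is a placeholder.

# OID non-SSL port
grep "/oid1/oid1_nonSSLPort=" $OID_INSTANCE/config/OPMN/opmn/ports.prop

# OVD LDAP endpoint: the <port> element follows the matched line
grep -A2 '<ldap id="LDAP Endpoint" version="1">' $OVD_INSTANCE/config/OVD/ovd1/listeners.os_xml

# JPS Config Directory (Source column): stage the three source items on the target
mkdir -p /tmp/config/fmwconfig
scp -r sourcehost.mycompany.com:$IDM_DOMAIN_HOME/config/fmwconfig/bootstrap \
    sourcehost.mycompany.com:$IDM_DOMAIN_HOME/config/fmwconfig/jps-config.xml \
    sourcehost.mycompany.com:$IDM_DOMAIN_HOME/config/fmwconfig/jps-config-jse.xml \
    /tmp/config/fmwconfig/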

3.3.3.3 IDM Midtier Information (Test/Target)

  • IDM Java Home Directory: This is typically located in the OIM Middleware home.

  • OIM Middleware Home: Log in to the FMW Control for IDM and go through the Topology for each component. The Oracle Homes and Instance Homes for each IDM component are listed and can be entered into the Workbook.

    Search FMW control in the same way for OIM Oracle Home, OID Oracle Home, OIM Managed Server Hostname, OIM Managed Server Port, and OIM Managed Server Name, as well.

  • OID Instance Directory: Enter the path where OID instance resides.

  • OID Instance Name: If you need to find this value, open the $OID_INSTANCE/config/OPMN/opmn/opmn.xml file and search for ias-instance id="oid1" name="oid1"

  • OID Component Name: If you need to find this value, open the
    $OID_INSTANCE/config/OPMN/opmn/opmn.xml file and search for <ias-component id="oid1" type="OID">.
    (Be sure that the type is equal to OID; there are other ias-component-ids available.)

  • OVD Hostname: If you need to find this value, return to the listeners file. (Perform a file system search for the listeners xml file:
    $OVD_INSTANCE/config/OVD/ovd1/listeners.os_xml.) Search for
    <ldap id="LDAP Endpoint" version="1">; the hostname is listed immediately below (see the sketch at the end of this list):
    <port>6501</port>
    <host>LDAPHOST1.mycompany.com</host>

  • IDM super user (LDAP): Look for cn=weblogic_idm in the Users tree node of ODSM.

  • IDStore Admin User Name: cn=orcladmin. This value is almost always used; if your enterprise changed this value, enter the change. Otherwise, just remove the brackets from the sample value.

  • OAM Admin User Name: This is the user name used to log into the OAM Console, normally oamadmin. To check, look for cn=oamadmin in the Users tree node of ODSM. This user should be part of the OAMAdministrators group.

  • OIM Admin User Name: This is the user name used to log into the OIM Console, normally xelsysadm. To check, look for cn=xelsysadm in the Users tree node of ODSM.
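
If you prefer to pull these values from the file system rather than from Fusion Middleware Control, the following sketch shows the searches described above (assuming $OID_INSTANCE and $OVD_INSTANCE are set on the target):

# OID instance name and component name
grep 'ias-instance id=' $OID_INSTANCE/config/OPMN/opmn/opmn.xml
grep 'ias-component id="oid1"' $OID_INSTANCE/config/OPMN/opmn/opmn.xml

# OVD hostname and port: the <host> and <port> elements follow the matched line
grep -A3 '<ldap id="LDAP Endpoint" version="1">' $OVD_INSTANCE/config/OVD/ovd1/listeners.os_xml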

To find all the following JPS root values, log on to the ODSM.

  • FA JPSROOT: Usually, the provisioning process assigns the name
    fa_jpsroot, or jpsroot_fa, or FAPolicies (depending on the version you've installed), but it could be given a unique name by your company. To check this value: in ODSM, select the Data Browser tab, and check the listed values.

    [Illustration: the ODSM Data Browser tab showing the fa_jpsroot entry (fa_jpsroot.gif).]

  • FA Domain under JPSRoot: When you've located the FA JPSROOT in ODSM, expand the tree to find the FA Domain.

  • IDM_JPSROOT: Usually assigned idm_jpsroot or jpsroot_idm. Check the same Data Browser tab in ODSM to find the correct value for your installation.

  • IDM Domain under JPSROOT: Expand the listing for idm_jpsroot in the ODSM to see the IDM Domain.

  • Base DN: Look in the ODSM Data Browser data tree and expand the dc= to find the full value. The Base DN is everything above the cn=Users.

  • Replication ID: In ODSM Data Browser data tree, expand cn=replication configuration until you see orclreplicaid=. Highlight this entry to see the full details. The Replication ID is the Distinguished Name at the top of the page (everything after orclreplicaid=).

    The Replication Hostname is listed at the bottom of the same page in orclReplicaURI.

  • TEST_RESET_PWD: Set value to FALSE. False means that the user passwords from source will not be carried over to the target system. True means that they will, but this is not recommended.

3.3.4 P2T Fusion Applications

The five tables in this tab of the Discovery Workbook include: FA DB (Section 3.3.4.1), FA Common Information (Section 3.3.4.2), FA Test/Target Information (Section 3.3.4.3), FA BI Test/Target Information (Section 3.3.4.4), and FA BI (Prod/Source) Information (Section 3.3.4.5).

3.3.4.1 FA DB

The database administrator should be able to enter correct values for the source and target environments in this table. Note that if no data pump directories exist, they must be created on the database server of the target system.
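
The following is a minimal sketch of creating such a directory on the target database server; the path and the directory-object name are placeholders, and your database administrator may handle this step differently.

# OS directory on the target database server
mkdir -p /u01/oracle/p2t/datapumpdir_db

# Matching Oracle directory object, created as a DBA user
sqlplus / as sysdba <<EOF
CREATE OR REPLACE DIRECTORY p2t_dpdir AS '/u01/oracle/p2t/datapumpdir_db';
GRANT READ, WRITE ON DIRECTORY p2t_dpdir TO system;
EOF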

3.3.4.2 FA Common Information

  • FA Base Directory (APPLTOP): If you need to find this value, search provisioning.rsp for INSTALL_APPHOME_DIR.

  • FA Java Home: The jdk is typically installed under the FA Base Directory.

  • Common Domain Home Directory: This is the path to the domain directory, in the format <FA Instance home>/domains/<abstract host name of the topology component>/<Domain name>.

    For example, if the instance home is /u01/app/fa/instance, and the abstract hostname for COMMON Admin is fusionapps.mycompany.com, then the Admin Server path for Common Domain would be: /u01/app/fa/instance/domains/fusionapps.mycompany.com/CommonDomain

  • FA Super User Name: If you need to find this value, search provisioning.rsp for IDENTITY_SUPERUSER.

3.3.4.3 FA Test/Target Information

  • P2T Working Directory: Enter the directory you created when extracting the production-to-test tools. See Section 3.2.2 for an example of the P2T Working Directory.

  • T3 URL Entries: For all the T3 URL entries, search the provisioning.rsp file for the #Domain Topology section. This lists each host name and port; concatenate them to create the full entry, using the format t3://<hostname>:<port> (see the sketch at the end of this section).
    NOTE: If you do not have all products installed, and therefore a domain does not exist, use NONE as the value. Do NOT delete the entry or leave it empty.

    This applies to Common Domain T3 URL, CRM Domain T3 URL, HCM Domain T3 URL, SCM Domain T3 URL, FIN Domain T3 URL, Project Domain T3 URL, Procurement Domain T3 URL, and IC Domain T3 URL.

  • SES and ESS Entries: Log on to the Common Domain Admin Console. Go to Servers to find the SES and ESS information. This applies to Common Domain SES (Secure Search Server) Hostname, Common Domain SES (Secure Search Server) Port Number, and Common Domain ESS Server Name.
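
The T3 URL entries can be assembled from the response file; a minimal sketch follows (it assumes provisioning.rsp is in the default plan directory under $APPLICATIONS_BASE):

# List the host and port properties under the Domain Topology section
grep -A40 "#Domain Topology" $APPLICATIONS_BASE/provisioning/plan/provisioning.rsp

# Example of the resulting format (host name and port are placeholders):
#   t3://commonhost.mycompany.com:7001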

3.3.4.4 FA BI Test/Target Information

  • BI Machine OS User Name: This is the user that installed the Business Intelligence domain on the BI server.

  • BI Domain Home Directory: This is the path to the domain directory, in the format <FA Instance home>/domains/<abstract host name of the topology component>/<Domain name>.

  • BI Admin Server Host Name and BI Admin Server Port: Search the provisioning.rsp file for the #Domain Topology. This will list each host name and port.

  • Broker Hostname and Broker Port information: For all entries, access /u01/oracle/fa/config/CommonDomain_webtier/config/OHS/ohs1/moduleconf (see the sketch at the end of this list).

    For Financial Broker information, open the file FusionVirtualHost_fin.conf. Search for #Internal virtual host for fin; immediately below it is <VirtualHost fininternal.mycompany.com:20603 >. This gives you the Financial Broker Port and Hostname.

    For CRM Broker information, open the file FusionVirtualHost_crm.conf. Search for #Internal virtual host for crm; immediately below it is <VirtualHost crminternal.mycompany.com:20615 >.

    For HCM Broker information, open the file FusionVirtualHost_hcm.conf. Search for #Internal virtual host for hcm; immediately below it is <VirtualHost hcminternal.mycompany.com:20619 >.

  • FA DB Host Names and Ports: For single-instance environments, the database administrator can fill this in. In the case of a RAC installation, you must enter all instances of the database in an escaped semicolon-separated list.
    For example: 1521\;1522.
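
The broker lookups above can also be done with a quick search; a minimal sketch follows (the web tier configuration path is the example path used in this section and may differ in your environment):

cd /u01/oracle/fa/config/CommonDomain_webtier/config/OHS/ohs1/moduleconf

# The VirtualHost line under each "Internal virtual host" comment carries the broker host and port
grep -A1 "#Internal virtual host for fin" FusionVirtualHost_fin.conf
grep -A1 "#Internal virtual host for crm" FusionVirtualHost_crm.conf
grep -A1 "#Internal virtual host for hcm" FusionVirtualHost_hcm.conf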

3.3.4.5 FA BI (Prod/Source) Information

  • BI RPD Directory: Use the format
    <FA Instance home>/BIInstance/bifoundation/OracleBIServerComponent/coreapplication_obis1/repository

  • BI Domain Home Directory: This is the path to the domain directory, in the format <FA Instance home>/domains/<abstract host name of the topology component>/<Domain name>.

  • BI Admin Server Hostname and Port: Search the provisioning.rsp file for the #Domain Topology. This will list each host name and port.

  • Broker Hostname and Broker Port information: For all entries, access /u01/oracle/fa/config/CommonDomain_webtier/config/OHS/ohs1/moduleconf

    For Financial Broker information, open the file FusionVirtualHost_fin.conf. Search for #Internal virtual host for fin; immediately below it is <VirtualHost fininternal.mycompany.com:20603 >. This gives you the Financial Broker Port and Hostname.

    For CRM Broker information, open the file FusionVirtualHost_crm.conf. Search for #Internal virtual host for crm; immediately below it is <VirtualHost crminternal.mycompany.com:20615 >.

    For HCM Broker information, open the file FusionVirtualHost_hcm.conf. Search for #Internal virtual host for hcm; immediately below it is <VirtualHost hcminternal.mycompany.com:20619 >.

  • UCM Weblayout Directory: Use the format <FA Instance home>/<abstract host name>/CommonDomain/ucm/cs/weblayout/.

  • UCM Vault Directory: Use the format <FA Instance home>/<abstract host name>/CommonDomain/ucm/cs/vault/.

  • BI Webcat Directory: Use the format <FA Instance home>/<abstract host name>/CommonDomain/ucm/cs/catalog/.

  • Is Informatica installed? If it is, then set the value to TRUE.

3.3.5 P2T Passwords

This tab is informational only; do not enter values in the fields! These are the passwords that will be required during production-to-test movements.

3.3.6 Generated P2T RSP Entries

This tab organizes all your entries and presents them so they are easy to use. The Generated RSP Entries tab collates the data entered in all the other tabs and tables, and generates the entries and values as they should be entered in the p2t.rsp file. The file is located in $WORKING_DIR/utilhome/bin/p2t.rsp. When discovery is finished, transfer the generated RSP entries to the .rsp file as follows:

  1. Locate the p2t.rsp sample file in the package of production-to-test materials.

  2. Replace the contents with the Generated P2T RSP Entries by copying all of the generated entries and pasting them into the sample file.

  3. Search for properties ending in _PASSWORD and _PWD (see the sketch after this list). These are password entries for the target environment and are not auto-filled, since they are not entered into the Workbook for security reasons. Manually enter each password value for the target environment into the p2t.rsp file.

  4. Save the file; it will be used when the production-to-test scripts are run.
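
A quick way to list the password properties mentioned in step 3 is a simple search of the response file; for example (a sketch, using the file location given above):

grep -nE '(_PASSWORD|_PWD) *=' $WORKING_DIR/utilhome/bin/p2t.rsp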

3.4 Reconcile Identity Management Data (IDM)

In production-to-test for Identity Management, the application users and roles are migrated from source to target, but the passwords are not. Therefore, the system administrator would need to set new passwords on the target system for each newly migrated user who did not already exist on the target.

The Identity Management data is prepared in the steps described in Section 3.4.1 through Section 3.4.6.

Note: If OID and OIM are on the same host, all the steps can be executed from that server. If they are on different hosts, the OID-related steps must be done on the OID server, and the OIM and OPSS (Policy and Identity Store) steps on the OIM host.

Before beginning any of the steps, it is necessary to complete the Discovery Workbook (Section 3.3). Using the Generated P2T RSP Entries tab in the Workbook, you also modify the p2t.rsp file, located in the $WORKDIR/utilhome/bin directory. This file will be used throughout the production-to-test process on both Identity Management and Fusion Applications.

3.4.1 Validate on the Target OID Server

In this step, the production-to-test tool connects to the source and target OIDs, based on the entries in p2t.rsp, to ensure that the host, port and credential information are all correct.

  1. Shut down the Fusion Applications stack. The Fusion Applications database must be up.

  2. Run the following step on the OID server:

    cd $WORKDIR/utilhome/bin
    ./bmIDM.sh validate
    
  3. Fix any errors until validation is complete.

3.4.2 Disable Reconciliation Jobs on Target OIM

This step disables reconciliation jobs on the target OIM, before data movement begins. Sample reconciliation jobs would include: "LDAP User Create and Update Reconciliation", "LDAP Role Create and Update Reconciliation", "LDAP Role Membership Reconciliation", "LDAP User Delete Reconciliation", "LDAP Role Delete Reconciliation", etc.

Execute the following steps on the target OIM server:

cd $WORKDIR/utilhome/bin
./bmIDM.sh oimpre

3.4.3 Reconcile OID Directories

This step compares the source and target OID directories and reconciles any discrepancies by merging any differing attributes from the source to the target.

Execute the following steps on the target OID server:

cd $WORKDIR/utilhome/bin
./bmIDM.sh oid

3.4.4 Re-enable Reconciliation Jobs on Target OIM

When the OID directories have been reconciled, it is possible to restart the reconciliation jobs. Execute the following steps on the target OIM server:

cd $WORKDIR/utilhome/bin
./bmIDM.sh oimpost

3.4.5 Move Identity and Policy Store Data

This step migrates application-specific policies and system policies from the source to the target. Execute the following steps on the OIM host:

cd $WORKDIR/utilhome/bin
./bmIDM.sh opss

3.4.6 Run Validation Script

Execute the following step to check that the source and target systems are valid. You should also check the p2t_validate.log file in the working directory.

cd $WORKDIR/utilhome/bin
./bmIDM.sh validate

When validation is complete, then:

  1. Restart the target Identity Management stack.

  2. Start the Fusion Applications stack that was shut down in Section 3.4.1. All the domains and managed servers must restart successfully. Access critical admin console and application URLs to confirm that you can log in successfully, but do not perform any functional testing or transactions until the next phase, Section 3.5, "Move Fusion Applications Data," is complete.

3.5 Move Fusion Applications Data

Production-to-test movement for the transaction data includes the following steps. For each command, run preverify and correct any errors until preverify passes, then execute run.

3.5.1 Run Scripts to Pack Source Files

The production-to-test script must be installed on the production (source) server. You run packing scripts on the primordial Fusion Applications server and the Business Intelligence server.

When the packing is complete, the following files will reside in the production working directory: weblayout.tgz, vault.tgz, webcatalog.tgz, obirpd.tgz, and rpdAttributes. DO NOT change the file/directory names!

3.5.1.1 Core Files

Run the following commands to pack core files on the primordial production server:

$WORKING_DIR/utilhome/bin/packData.sh <preverify or run> 

Note: The "preverify" commands validate the environment and connectivity only.

3.5.1.2 Business Intelligence (BI) Files

Run the following commands to pack Business Intelligence files on the BI production server. The script also connects to the BI Admin Server to extract the password of the credential map "oracle.bi.enterprise" and key "repository" on the production server.

$WORKING_DIR/utilhome/bin/packDataBI.sh <preverify or run>

Note: The "preverify" commands validate the environment and connectivity only.

3.5.2 Transfer Files to Target Servers

Transfer the packed files from Section 3.5.1 in the following way:

To the target/test primordial Fusion Applications server, transfer weblayout.tgz, vault.tgz, and webcatalogTr.tgz.

To the target/test BI server working directory, transfer obirpd.tgz and rpdAttributes.

In the target BI server working directory:

mkdir unpackrpd
cd unpackrpd
tar xzf ../obirpd.tgz

Note: The file/directory names must be exactly as documented.
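
For example, the transfer might be scripted as follows (a minimal sketch; the target host names and remote paths shown are placeholders for your own values, and -r is needed only if rpdAttributes is a directory):

# From the production working directory on the source
scp weblayout.tgz vault.tgz webcatalogTr.tgz fatarget.mycompany.com:/u01/oracle/p2t/famigratep2t/
scp -r obirpd.tgz rpdAttributes bitarget.mycompany.com:/u01/oracle/p2t/famigratep2t/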

3.5.3 Export Specific Data from Target (Test) System

The step preserves some of the data on the target system which will be re-imported after the production data is migrated.

  1. Under the Fusion Applications working directory, run the following command:
    mkdir datapumpdir_db

  2. Ensure that the Functional Setup Server_1 in the Common domain is running, and the SOA servers in all domains are up and running.

  3. Run the command

    $WORKING_DIR/utilhome/bin/generateData.sh <preverify or run> 
    

    At the end of this process, all the Fusion Applications servers are automatically stopped.

For reference only, the following table lists the tasks accomplished in this step.

Topology Manager: Preserves environment-specific topology data that must be retained on Test, for example, external endpoint information for each deployed domain.

ExportFMWSchemas: Backs up the following schemas: FUSION_IPM, FUSION_OTBI, FUSION_BIPLATFORM, FUSION_ORASDPLS, FUSION_ORASDPXDMS, FUSION_ORASDPSDS, FUSION_ORASDPM.

SaveExternalWorkflowUrls: Backs up the front-end host URLs (Human Workflow external OHS configurations).

exportAdfConfig: Backs up adf-config.xml.


3.5.4 Replace Target Data with Source Data

Perform a database backup of the source data, using whatever method your enterprise prefers: RMAN backup (along with installing the Oracle RDBMS Server binaries), file system copy, storage replication, VM snapshot, etc. A cold-backup sketch appears after the following list of requirements.

Adhere to the following requirements when duplicating the source database and mounting it to the destination environment:

  • For RAC installations, ensure that the Grid Infrastructure is installed on the destination.

  • Before duplicating, shut down the source Fusion Applications Web tier and application tier, as well as the Identity Management Web tier and application tier, and ensure that all in-flight transactions have been completed.

  • Shut down the source database cleanly (no abort). The clone/copy must be taken cold.

  • Remember that the topology and operating systems must be identical between source and destination.
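
As one example of the cold copy required above, a minimal RMAN sketch follows; it is only an illustration (the backup destination path is a placeholder), and storage replication or any other method your enterprise prefers is equally valid.

# On the source database server, after the web and application tiers are down
rman target /
RMAN> shutdown immediate;
RMAN> startup mount;
RMAN> backup database format '/backups/fa_p2t_%U';
RMAN> alter database open;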

3.5.5 Move Packed Data and Clean Up In-Flight Transactions

This step imports data from Section 3.5.1, Section 3.5.2, and Section 3.5.3 back into the target (test) database. All long-running processes will be stopped and purged, to prevent the non-production system from sending emails or notifications to real users as if it were a production system.

There are three scripts to be run on the primordial Fusion Applications server, and two Business Intelligence scripts to run on the BI server.

  • While Fusion Applications is shut down, apply changes to the test environment:

    $WORKING_DIR/utilhome/bin/applyDataOffline.sh <preverify or run>
    
  • Restart the servers in the Fusion Applications stack:

    $WORKING_DIR/utilhome/bin/startAllServersMT.sh <preverify or run> 
    

    (It is also possible to use your own script/process to start the Fusion Applications stack.)

  • While Fusion Applications is running, apply changes to the test environment:

    $WORKING_DIR/utilhome/bin/applyDataOnline.sh <preverify or run>
    
  • If Business Intelligence is installed on a different server, run the following command from the BI server:

    $WORKING_DIR/utilhome/bin/applyDataBI.sh <preverify or run>  
    
  • If Informatica is installed, run the following command from the server where it is installed:

    $WORKING_DIR/utilhome/bin/applyDataIIR.sh <preverify or run>
    

3.5.5.1 Validate

After completing the Fusion Applications production-to-test steps, restart the Fusion Applications stack again. All domains and managed servers must restart successfully. The system is ready for functional testing.
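
For example, the restart can reuse the start script from Section 3.5.5, or your own start procedure (a sketch; run preverify first if desired):

$WORKING_DIR/utilhome/bin/startAllServersMT.sh run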