Allocate Production Keys
During the key assignment step, the system assigns a new key to each legacy key for every table whose prime key requires a system-generated random key. The conversion process allocates new prime keys to take advantage of the system's parallel processing and data-clustering techniques in the production system (these techniques depend on randomly assigned, clustered keys).
The topics in this section provide a high-level overview of the key assignment process and describe the background processes used to assign production keys to the staging data.
The Old Key / New Key Table
It's important to understand that the system does not overwrite the prime keys on the rows in the staging database, because doing so would be a very expensive I/O transaction. Rather, a series of tables holds each row's old key and the new key that will be assigned to it when the row is transferred into the production database. We refer to these tables as the "old key / new key" tables.
The convention "<1st letter of owner flag>K_<table_name>" is used to denote the old key / new key table name. For example, the old key / new key table for CI_ACCT is called CK_ACCT.
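As a minimal sketch of this naming convention, assuming the owner flag is simply the prefix before the table name's first underscore (e.g. "CI" for CI_ACCT):

```python
def old_new_key_table(production_table: str) -> str:
    """Derive the old key / new key table name for a production table.

    Assumes the owner flag is the prefix before the first underscore
    (e.g. "CI"): its first letter is kept and "K" is appended to it.
    """
    owner_flag, base_name = production_table.split("_", 1)
    return f"{owner_flag[0]}K_{base_name}"

print(old_new_key_table("CI_ACCT"))  # prints "CK_ACCT"
```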
The insertion batch process that transfers the rows into the production database uses the new key for the main record of the key, as well as for any other record where this key is a foreign key. In the same way, the XML resolution process resolves conversion foreign keys residing in XML storage fields and replaces them with their corresponding new keys from these tables.
A Batch Process Per Table
A key assignment batch process is provided for each table that has a system-generated key and belongs to a maintenance object that is eligible for conversion. The batch process is responsible for populating the corresponding old key / new key table (i.e., you don't have to populate these tables yourself). These processes are single threaded. Refer to a table's corresponding key assignment batch process for more information.
Key Assignment Dependencies
Most tables with system-generated keys do not inherit part of their key from another table's key. Their corresponding key assignment batch processes have no dependencies and can therefore be executed in any order.
Some tables inherit part of their key from another table's key. Key assignment batch processes for such tables must be executed in key inheritance order. In other words, the key assignment process for a table should be run after the process that generates keys for the entity it depends on. The Conversion Entity Dashboard portal displays key inheritance dependency information, if any exists, for a conversion entity.
Note: 
You may run multiple key assignment batch processes in parallel as long as they are independent with respect to key inheritance.
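The ordering rule above amounts to grouping processes into dependency tiers. The following sketch illustrates the idea; the dependency map is purely illustrative (the real dependencies are shown on the Conversion Entity Dashboard portal):

```python
def key_assignment_tiers(depends_on: dict) -> list:
    """Group key assignment processes into tiers safe to run in parallel.

    depends_on maps each table to the tables it inherits part of its key
    from. All tables within one tier are mutually independent, so their
    key assignment batch processes may be submitted concurrently; the
    tiers themselves must run in order.
    """
    tiers, done = [], set()
    remaining = dict(depends_on)
    while remaining:
        # Every table whose prerequisites are all complete joins this tier.
        tier = sorted(t for t, deps in remaining.items()
                      if all(d in done for d in deps))
        if not tier:
            raise ValueError("cyclic key inheritance dependency")
        tiers.append(tier)
        done.update(tier)
        for t in tier:
            del remaining[t]
    return tiers

# Illustrative dependencies only, not actual product metadata.
example = {"CI_PER": [], "CI_ACCT": [], "CI_SA": ["CI_ACCT"]}
print(key_assignment_tiers(example))  # [['CI_ACCT', 'CI_PER'], ['CI_SA']]
```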
Iterative Conversions
Rather than perform a "big bang" conversion (one where all records of an entity are populated at once), some implementations have the opportunity to go live on subsets of their entity base. If this describes your implementation, please be aware that the system takes into account the existing prime keys in the production database before it allocates a new key value. This means when you convert the next subset of customers, you can be assured of getting clean and unique keys.
Key assignment logic creates the initial key values by manipulating a sequential row number that starts from 1. A subsequent conversion run starts that row number at 1 again, which raises the possibility of assigning duplicate keys. The key assignment batch process therefore allows you to specify a starting value for the row number. This parameter offsets the row number by the given value, minimizing the chance of duplicate key assignment. It is only needed when you perform conversions against tables that already contain data in the production database.
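The effect of the starting row number parameter can be sketched as follows; the actual derivation of a key from the row number is the system's own and is not reproduced here:

```python
def seed_row_numbers(row_count: int, start_row_number: int = 0) -> list:
    """Model the effect of the starting row number batch parameter.

    Key values are derived from a sequential row number starting at 1;
    the parameter offsets that sequence so a later conversion run does
    not reuse the numbers (and hence the derived keys) of an earlier run.
    """
    return [start_row_number + n for n in range(1, row_count + 1)]

first_run = seed_row_numbers(3)                       # [1, 2, 3]
second_run = seed_row_numbers(3, start_row_number=3)  # [4, 5, 6]
```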
Run Type
Key assignment is performed in two steps:
Initial Key Generation. The system allocates new keys to the rows in the staging tables (i.e., it populates the respective old key / new key table).
Duplicate Key Resolution. The system reassigns keys that are duplicates compared to production.
By default, both steps are performed in the same run, but you have the option to run them separately by indicating which step to run via a batch parameter. Proper use of this parameter can greatly speed up the key assignment step, as described in the Recommendations To Speed Up Key Generation section.
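A minimal sketch of the two-step control flow; the parameter name and its values here are illustrative, not the actual batch parameter (check the batch control for the real ones):

```python
def run_key_assignment(run_type: str, log: list) -> None:
    """Run one or both key assignment steps depending on a run type value.

    "BOTH" runs both steps in the same job (the default behavior);
    "INITIAL" and "DUPLICATE" run each step on its own.
    """
    if run_type in ("BOTH", "INITIAL"):
        log.append("initial key generation")    # populate old key / new key table
    if run_type in ("BOTH", "DUPLICATE"):
        log.append("duplicate key resolution")  # reassign keys clashing with production

steps = []
run_key_assignment("BOTH", steps)
print(steps)  # ['initial key generation', 'duplicate key resolution']
```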
Recommendations To Speed Up Key Generation
The following points describe ways to accelerate the execution of the key generation programs.
For a non-Cloud installation:
Make the size of your rollback segments large. The exact size is dependent on the number of rows involved in your conversion. Our research has shown that processing 7 million rows generates roughly 3GB of rollback information.
Set up the rollback segment(s) at about 10GB with auto extend to a maximum size of 20GB to determine the high water mark.
A next extent value on the order of 100M should be used.
Make sure to turn off all small rollback segments (otherwise Oracle will use them rather than the large rollback segments described above).
After the key assignment programs execute, you can reclaim the space as follows:
Keep a low value for the "minimum extent" parameter for the rollback.
Shrink the rollback segments and the underlying data files at the end of the large batch jobs.
Compute statistics on the old key / new key tables after every 50% increase in table size. Key generation is performed in tiers or steps because of the inheritance dependency between some tables and their keys. Although key generation for the inheritance dependency tier currently being processed is performed by means of set-based SQL, computation of statistics between tiers will allow the database to compute the optimum access path to the keys being inherited from the previous tier's generation run.
Make optimal use of the Run Type batch parameter:
Before any key assignments, alter both the "old key" F1_​CX_​ID index and the "new key" CI_​ID index on the old key / new key tables to be unusable.
Run all key assignment batch processes in tiers, submitting each job to only perform the Initial Key Generation step.
Rebuild the indexes on the old key / new key tables. Rebuilding the indexes using both the PARALLEL and NOLOGGING parameters will speed the index creation process. Statistics should be computed for these indexes.
Run all key assignment batch processes in the current tier that were previously run in initial key generation mode, to perform the Duplicate Key Resolution step. This will reassign all duplicate keys.
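The index maintenance in the steps above can be sketched by generating the corresponding Oracle statements. Index names vary by old key / new key table, so verify the actual names (and PARALLEL/NOLOGGING behavior) in your environment before running anything; this sketch only assembles the statements, it does not execute them:

```python
def index_maintenance_ddl(index_name: str) -> dict:
    """Build the before/after index statements for one key index.

    "before" makes the index unusable ahead of Initial Key Generation;
    "after" rebuilds it in parallel without redo logging once all tiers
    have generated their keys.
    """
    return {
        "before": f"ALTER INDEX {index_name} UNUSABLE",
        "after": f"ALTER INDEX {index_name} REBUILD PARALLEL NOLOGGING",
    }

# Example using the index name patterns mentioned above.
for name in ("F1_CX_ID", "CI_ID"):
    ddl = index_maintenance_ddl(name)
    print(ddl["before"])
    print(ddl["after"])
```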
Note:
In a Cloud installation, refer to "Data Conversion Support for Cloud Implementations" for more information about the various tools provided to support database-related conversion tasks.