Best Practices for Item Import Performance

You can get the best results from your item imports by observing the following best practices:

  1. Determine the scope of data to be imported.

  2. Review the data model design.

  3. Create the item batches and load data.

  4. Set the profile options.

  5. Request a performance analysis.

  6. Plan strategies for data migration.

Determine the Scope of Data to Be Imported

There are two aspects to determining the scope of data to be imported: the number of items (master plus child organization), and the total number of records.

Each item (master or child organization) is composed of multiple child objects, which may result in multiple records being imported for a single item.

The total number of records is critical in determining the appropriate profile option settings.

The total number of records is obtained by counting one record for each item (master and child organization) plus the following additional records, as illustrated in the sketch after this list:

  • One additional record for each master-controlled single-row attribute group for each master item (additional records are created for translatable attribute groups)

  • One additional record for each organization-controlled single-row attribute group for each item-organization combination (additional records are created for translatable attribute groups)

  • One additional record for each item category association

  • One additional record for each item attachment association

  • One additional record for each item relationship

  • One additional record for each row of a master-controlled multi-row attribute group for each master item (additional records are created for translatable attribute groups)

  • One additional record for each row of an organization-controlled multi-row attribute group for each item-organization combination (additional records are created for translatable attribute groups)
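
As an illustration of this counting, here's a minimal sketch in Python. The dictionary field names are hypothetical, and each count is assumed to already include the extra records created for translatable attribute groups.

    def estimate_total_records(items):
        """Estimate the total number of records for a planned import.
        Each entry in `items` describes one item (master or child
        organization); all field names here are hypothetical."""
        total = 0
        for item in items:
            total += 1  # the item record itself
            total += item.get("single_row_attribute_groups", 0)
            total += item.get("category_associations", 0)
            total += item.get("attachment_associations", 0)
            total += item.get("relationships", 0)
            # one record for each row of each multi-row attribute group
            total += sum(item.get("multi_row_attribute_group_rows", []))
        return total

    # Example: an item with 5 single-row attribute groups, 2 category
    # associations, 1 relationship, and two multi-row attribute groups
    # with 10 and 3 rows: 1 + 5 + 2 + 1 + 13 = 22 records.
    item = {"single_row_attribute_groups": 5, "category_associations": 2,
            "relationships": 1, "multi_row_attribute_group_rows": [10, 3]}
    print(estimate_total_records([item]))  # 22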

Review the Data Model Design

A review of the data model design often reveals opportunities to reduce the total number of records. Consider whether there's an opportunity to:

  • Reduce the number of item organization associations

  • Use reference organizations

  • Reduce the number of attribute groups

  • Reduce the number of rows in multi-row attribute groups

  • Convert organization-controlled attribute groups to master-controlled

Create the Item Batches and Load Data

The optimal number of records to load in a single run of an item batch is 100,000 items (master or child organization) and their related child entity records. However, the total number of records should not exceed 4 million per batch.
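
As a rough illustration of these sizing guidelines, here's a minimal sketch in Python; the limits are the guideline values above, not hard system limits.

    import math

    def batches_needed(item_count, record_count,
                       max_items=100_000, max_records=4_000_000):
        """Minimum number of batches so that each batch stays within
        roughly 100,000 items and 4 million records."""
        return max(math.ceil(item_count / max_items),
                   math.ceil(record_count / max_records))

    # Example: 250,000 items producing 6 million records need
    # max(ceil(2.5), ceil(1.5)) = 3 batches.
    print(batches_needed(250_000, 6_000_000))  # 3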

If a large number of items are to be imported, first import the master items and their related child objects before importing the child organization items and their related child objects. If possible, avoid importing items assigned to different child organizations in the same batch.

Items with attribute data can be imported together in a single batch.

Attachments should be imported in a separate batch once all the other child objects have been imported.

In general, we recommend that you load the data into an item batch using a CSV file with an import map. However, when the ratio of attribute group records to item records is high, it's more efficient to load items using File-Based Data Import (FBDI).

Note:
  • It's recommended to import trading partner items separately and then import the trading partner item relationships.
  • If you're using FBDI to import data, then don't delete any sheet or tab in the FBDI Item Import template. If any sheet isn't used for the import, then leave that sheet blank. Use a batch ID that's new or doesn't exist in the interface table for item import; don't repeat batch IDs.

Frequent purging of the interface tables improves performance. At a minimum, purge the tables so that no more than 4 million records remain in the interface tables (across all batches).

If a purge removes a large number of records (greater than 1 million), then run a small batch of 5-10 items before importing a batch with a larger number of items.

Important: Don't run multiple item batches in parallel. You can enforce this by setting the Process Sequentially spoke system option to Yes. The item import process already contains logic to process the workload in parallel, so you gain no advantage by submitting parallel batches and may cause record-locking issues. Set Process Sequentially to No only when the number of records in the batches is low (in the hundreds) and no large batches are being imported during the same time period.

Set the Profile Options

Examine the values of profile options that affect import, and set them according to your situation.

The following profile options are relevant; each is listed here with its code, name, and usage.

EGP_CONVERSION_MODE (Item Data Conversion Mode Enabled)

To support data conversion and initial load, this profile option must be set to Yes.

The default profile option value is No. It has the following effects:

  • You can't import item revisions with a past date if the item is assigned to any existing change orders in Scheduled or Completed status.

  • You can still import item revisions with a past date if the item is assigned to any existing change orders in Open status.

Setting this profile option value to Yes has the following effects:

  • The import process ignores all security checks. Data privilege and functional privilege aren't considered.

  • All items will be created as Approved Items regardless of whether the item class is enabled for new item requests or whether you select the option Add All Items for new item requests in the batch.

  • All updates will be directly posted to production by ignoring the creation of change orders if you select the option Add All Items for change orders in the batch.

  • All validation rules of the type Needs Approval will be ignored.

  • None of the creations or updates made to the items are audited.

  • Importing item revisions with a past date can result in stuck changes if the item is assigned to any existing change orders in Scheduled or Completed status.

EGP_ITEM_IMPORT_NUMBER_OF_THREADS (Number of Threads for Importing Items)

The default value is 12, and the value shouldn't be set to more than 100. Once your environment has been resized, file a service request to get guidance from Oracle Support on the value to set.

EGP_ITEM_IMPORT_ITEMS_PER_THREAD (Items Per Thread in the Import Program)

The default value is 20. To set the optimum value for this option, first compute the average ratio of the child object rows to item (master and child organization) rows in the batch being imported (based on the information in step 1).

  • If the ratio is between 0 and 20, keep the default value of 20.

  • If the ratio is between 20 and 500, change the value to 10.

  • If the ratio is 500 or greater, change the value to 5.

For example, if for each master/child item, there are 100 child object rows (such as extensible flexfields, category assignments, and so on), then the ratio is 1:100.

These values are guidelines, and you can adjust the value with them in mind, as in the sketch below.
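
Here's a minimal sketch of these guidelines in Python; the function name is hypothetical, and the thresholds are the guideline values listed above.

    def recommended_items_per_thread(child_object_rows, item_rows):
        """Map the average ratio of child object rows to item rows onto
        the guideline values for EGP_ITEM_IMPORT_ITEMS_PER_THREAD."""
        ratio = child_object_rows / item_rows
        if ratio < 20:
            return 20  # keep the default
        if ratio < 500:
            return 10
        return 5

    # Example: 100 child object rows per item row (a 1:100 ratio).
    print(recommended_items_per_thread(100, 1))  # 10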

Request a Performance Analysis

If the number of records that you plan to import is greater than 1 million, or you determine the throughput to be low after following the recommendations here, file a service request for a performance analysis.

Ensure that you provide an accurate count of the total number of items (master and child organization), as well as the total number of records broken down by the categories listed here. Also, indicate the number of days in which you would like to complete the data migration. This information is essential for sizing the environment correctly.

Plan Strategies for Data Migration

A common strategy for data migration is to perform the migration in stages or phases. This allows more time to migrate the data and resolve errors.

In the first stage, the items can be imported well ahead of the go-live deadline. As you get closer to the go-live date, obtain a list of legacy items that have been added or edited since the first set of data was migrated, and load those changes just prior to go-live.
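
As a sketch of that delta-extraction step, assuming the legacy items have been exported with a last-modified timestamp (the field names here are hypothetical):

    from datetime import datetime, timezone

    def delta_items(legacy_items, first_migration_cutoff):
        """Return the legacy items added or edited since the first
        migration; each item is a dict with a hypothetical
        'last_modified' datetime field."""
        return [item for item in legacy_items
                if item["last_modified"] > first_migration_cutoff]

    # Example: keep only items touched after the first migration run.
    cutoff = datetime(2024, 1, 15, tzinfo=timezone.utc)
    items = [
        {"id": "A100", "last_modified": datetime(2024, 1, 10, tzinfo=timezone.utc)},
        {"id": "A200", "last_modified": datetime(2024, 2, 1, tzinfo=timezone.utc)},
    ]
    print(delta_items(items, cutoff))  # only A200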

For initial loads or large batches, it's recommended that you:
  • Explicitly insert master organization rows and child organization rows in the import.
  • Avoid using rules to assign the items to the child organizations. This applies to initial data loads as well as to batches with a large number of rows, and helps avoid performance issues because the import automatically processes the data in parallel threads.