Tips and Important Mistakes to Avoid
 
Run the process against the right target.
The data upload only runs if the environment is pointing to the STAGING schema.
Navigate to Conversion Support > Switch Schema. On the popup screen the current schema is displayed. Make sure the current schema is Staging.
Provide data files according to the specifications
Regenerate the artifacts after modifying the data upload configurations
SQL Loader loads the data according to the Control File.
Input Data File Specifications describe what is expected from the input data file:
Names of the data files
Data format for all fields
Data delimiters to be used in the input data file
Every time the configuration changes, the artifacts must be regenerated to keep the configuration and the input data specifications in sync.
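For orientation, a minimal control file of the kind described by the Input Data File Specifications might look like the sketch below. The table name, column names, file name, delimiter, and date format are all placeholders, not the generated artifact; always follow the specifications produced for your configuration.

    -- Illustrative sketch only; all names and formats are placeholders.
    LOAD DATA
    INFILE 'cm_sample_table.csv'
    APPEND
    INTO TABLE CM_SAMPLE_TABLE
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (
      SAMPLE_ID   CHAR,
      SAMPLE_DT   DATE "YYYY-MM-DD",
      DESCR       CHAR
    )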
Provide input data files with CLOB data IF NECESSARY
Conversion Instruction defines whether CLOB data is provided as part of the main file or as a separate file. The system expects the data files to be provided according to this definition.
Open the Input Data Specifications and read them carefully. If the specification states that CLOB data is to be provided as a secondary file, that is what the Control File expects (see the sketch below).
If you wish to include CLOB data in the main file, verify that the Conversion Instruction is set correctly.
If the configuration was modified, you must regenerate the artifacts.
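As a generic illustration of the secondary-file mechanism (not the exact artifact generated by the toolkit), SQL*Loader can load a CLOB column from a separate file whose name is carried in the main record; all table, column, and file names below are placeholders.

    -- Placeholder names; the main record carries the name of the secondary
    -- CLOB file in a filler field, and the CLOB column is loaded from it.
    LOAD DATA
    INFILE 'cm_sample_table.csv'
    APPEND
    INTO TABLE CM_SAMPLE_TABLE
    FIELDS TERMINATED BY ','
    (
      SAMPLE_ID    CHAR,
      NOTE_FNAME   FILLER CHAR(255),
      NOTE_DATA    LOBFILE(NOTE_FNAME) TERMINATED BY EOF
    )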
Avoid creating unnecessary data files for CLOB columns
By default the system expects the data to be provided for all target table columns.
If the table contains multiple CLOB columns AND the CLOB data is provided as a secondary file, you must provide one input data file per CLOB column.
To exclude unnecessary CLOB columns for a table or maintenance object, configure Conversion Instructions using the K1-ConvArtMultClobMOTaskType or K1-ConvArtMultClobTblTaskType business object and specify the Override Conversion Instruction on the Master Configuration.
Regenerate conversion artifacts and examine the input data specifications after changing the configuration
Avoid truncating the entire staging data unintentionally
The K1-SCLTB batch process allows you to truncate a specific table or maintenance object in the STAGING schema.
The K1-CLNTB batch process allows you to truncate a specific table or maintenance object in the PRODUCTION schema.
If submitted without an input parameter specifying a table or maintenance object, these batches process all tables eligible for conversion, which means all of the data in the corresponding schema is wiped out at once.
Clean up duplicate PK values before the data upload
Indexes and constraints are disabled during data upload in order to boost performance.
De-duplication during the data upload is not supported out-of-the-box.
SQL*Loader direct path upload doesn't perform a duplicate check, so duplicates must be removed from the extract before it is uploaded (see the sketch below).
No direct database access means the data cannot be modified via direct SQL after the upload.
Keep track of the legacy data that has already been uploaded.
If you need to re-upload the same data, always clean up the target table(s) first.
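A simple way to catch duplicates before they reach the staging tables is to check the extract on the legacy side. The query below is only a sketch: the table and key column names are placeholders for your own extract.

    -- Placeholder names; run against the legacy extract before the upload.
    -- Any rows returned would violate the target table's primary key.
    SELECT sample_id, COUNT(*)
    FROM   legacy_extract_sample
    GROUP  BY sample_id
    HAVING COUNT(*) > 1;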
The business configuration and admin data have to be finalized and populated in Production prior to the legacy data upload.
Populate the legacy data extract with valid FK references to the admin/control data
Once uploaded, the staging data cannot be "massaged" or modified through direct SQL (because no direct database access is possible in the cloud).
Hence the overall conversion project steps are:
Design, test, and complete the business configuration. During this stage, multiple trial data uploads with dummy data can be performed
Populate admin data in Production
Create legacy data extract with valid admin data FK References
Upload data into staging tables
Key Tables are not populated implicitly
The Key Tables in the staging schema are not populated automatically when the legacy data is uploaded into the "main" tables.
Upload the data into Key Tables separately or use the batch program provided by Cloud Service Foundation.
Override Conversion Eligibility is supported on Table level only
Conversion eligibility is overridden for individual tables. If you decide to convert an entire maintenance object, override the eligibility for all of the tables that belong to it.
Note: Overriding a table's conversion eligibility doesn't mean that the staging schema is automatically updated. It only means that the data upload processes will treat this table as a valid target table.
Loading Data Directly into the Production (CISADM) Schema
The following configuration steps are required to load data directly into the Production (CISADM) schema tables:
Create a custom Control File for the target table.
Generate the control file with default conversion instructions and copy the contents.
Modify the INTO... clause to add 'CISADM.' in front of the target table name (see the example after these steps).
Create a new Managed Content entry. Copy the entire control file text and save.
Create a new Conversion Task Type for the target table.
Specify the new Managed Content as an Override Control File.
In the Data Upload Support Master Configuration, create an Override Table Instruction entry. Specify the target table and the new Conversion Task Type.
Generate Conversion Artifacts for the table.
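For example, assuming a placeholder table name CM_SAMPLE_TABLE, the only change to the copied control file text is the schema qualifier on the INTO TABLE clause:

    -- Generated with default conversion instructions:
    INTO TABLE CM_SAMPLE_TABLE
    -- Modified for a direct load into the Production schema:
    INTO TABLE CISADM.CM_SAMPLE_TABLE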
Loading Very Large Data Volumes
Avoid SQL-based conditions in the control file when loading very large data volumes. Default values and SQL-based conditions cause SQL*Loader to switch to the conventional path load, which performs row-by-row inserts.
 
The best results are achieved with a direct path load.
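As an illustration of the kind of construct to avoid (the column name and expression are placeholders, not taken from a generated artifact), a field-level SQL expression in the control file is a SQL-based default of the sort described above. For very large volumes, prefer a plain field definition and supply the value in the extract itself.

    -- Avoid for very large volumes: a SQL expression applied to the field.
    STATUS_CD   CHAR "NVL(:STATUS_CD, 'ACTIVE')"
    -- Prefer: a plain field definition, with the value supplied in the extract.
    STATUS_CD   CHAR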
 
More threads do not necessarily mean better performance. The optimal overall data load performance is achieved when the threads (and their corresponding SQL*Loader processes) target different partitions.
 
Additional guidelines:
Partitioning by month is required for best performance
Load multiple months in parallel for best performance & scalability
Start with ONE thread per month
Increase the number of threads per month gradually. If performance does not improve, try a smaller increment or keep your last best setting. For example: 12 months of data loaded with 48 threads (4 threads per month)
Large data files are preferable
Many small files have the overhead of spinning up a new SQL*Loader process for each file
Set longer SQL timeout on the data upload batch process
Disable indexes before loading
Rebuild indexes after direct path load
Reduce or stop the activities in the environment when performing the massive data upload