Appendix: Frequently Asked Questions

This section describes issues that you may encounter while using the tool and suggests solutions for them.


Why is my file not listed in the Import from File screen?

  • Validate that the file is present in the configured folder.

  • Validate that the file name matches the prescribed pattern.

  • Ensure that the rules for data file preparation described above are followed.
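As a quick pre-check outside the tool, a short script can confirm the first two points at once. This is a minimal sketch only: the folder path is an assumption, and deps*.dat (the DEPS pattern mentioned later in this appendix) stands in for your entity's prescribed pattern.

    # Minimal sketch: check that files exist in the upload folder and that
    # their names match the expected pattern. Path and pattern are assumptions.
    from fnmatch import fnmatch
    from pathlib import Path

    upload_folder = Path("/u01/dc/upload")   # assumed: your configured folder
    pattern = "deps*.dat"                    # assumed: your entity's pattern

    for f in sorted(upload_folder.glob("*")):
        status = "matches" if fnmatch(f.name, pattern) else "does NOT match"
        print(f"{f.name}: {status} {pattern}")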

How do I achieve optimal performance for data conversion?

For better performance and a seamless conversion using the data conversion tool, it is highly recommended that you follow the best practices specified in this document. See "Appendix: Best Practices" for more details.

Why was one of the files in the zipped file not processed?

Validate that the name of the .dat file matches the prescribed pattern.

I do not wish to load data for a table that is a part of the zip file. Can it be excluded?

Yes, such files can be excluded. However, if a dependent file is missing from the zip file, it will be called out as an error when the business validations are run.

I'm getting an exception when clicking on the file list in the import screen. How do I resolve this issue?

Verify that the upload folder set in the DC_SYSTEM_OPTIONS table matches the upload folder configured in the app server.
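If you have SQL access (for example, through the APEX Data Viewer), the configured value can be read directly from the table. The sketch below uses the python-oracledb driver purely as an illustration; the connection details are placeholders, and no column names are assumed, so the whole row is printed.

    # Minimal sketch: print the DC_SYSTEM_OPTIONS row so the upload folder
    # setting can be compared with the app server configuration.
    import oracledb

    conn = oracledb.connect(user="dc_user", password="...", dsn="host/service")  # placeholders
    with conn.cursor() as cur:
        cur.execute("SELECT * FROM DC_SYSTEM_OPTIONS")
        names = [d[0] for d in cur.description]
        for row in cur:
            for name, value in zip(names, row):
                print(name, "=", value)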

Why is the Halt button on the Mass Upload screen not enabled?

The button is enabled only under certain conditions, to prevent data corruption. For more details, see the "Mass Upload" section above.

How do I verify from the screen whether records were actually loaded into the staging tables?

The statistics on the import screen and the list of errors will help identify whether any records were not processed.

How do I verify if records are actually loaded into the main Merchandising tables?

The "View Uploaded Data" screen can be used to validate the data loaded. Additionally, the APEX Data viewer can also be used.

How do I revert/modify the records loaded in the staging tables from the UI?

You need to correct the data in the source files and re-run them as a fresh upload with the modified data, rather than processing the earlier loaded data through to the end of the process.

How do I revert/modify the records loaded into the main tables from the UI?

Once data is loaded into the main Merchandising tables in production, the only way to modify it is via Merchandising-supported methods, such as the UIs and spreadsheet uploads. If the data is still in a pre-production environment, the APEX Data Viewer can also be used to update it.

If a Merchandising application is running on the same data conversion environment, can I open and work in the application?

There is nothing stopping users from opening the Merchandising applications connected to the same Merchandising schema where conversion is being done. However, if any data is modified by users, the sequences may fall out of sync with the loaded data, which can lead to errors. It is recommended that all users be kept out of the Merchandising screens while conversion is running.

How can transactions (transfers, purchase orders, and so on) be loaded?

At this time, purchase orders and customer orders are the only transactions that can be converted using this application. It is recommended that other transactions be closed in your legacy applications prior to conversion and that new transactions be created in Merchandising when needed. See also the "Data Entities" section.

How can flex attributes be converted?

To convert the data, the attributes will first need to be configured in both your stage and production environments. For more information on this functionality, see the Customization and Extension Guide.

Does the offline validator verify file name patterns?

No.

What is the maximum file size supported for upload?

Importing files of up to 1 GB in size is supported. However, processing large volumes takes up a lot of memory and reduces performance, especially where complex business validations are executed. Therefore, for large files, it is recommended that the data be broken up into multiple files.

The maximum number of records supported per file depends on the entity being loaded.
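One way to break up a large file is a small splitter script such as the sketch below. The record threshold and the deps.dat example name are assumptions; keep the output names consistent with the entity's file pattern.

    # Minimal sketch: split a large .dat file into smaller files of at most
    # max_lines records each, named to preserve the entity pattern
    # (deps.dat -> deps1.dat, deps2.dat, ...).
    from pathlib import Path

    def split_dat(source: Path, max_lines: int = 500_000) -> None:
        part, out, count = 0, None, 0
        with source.open("r", encoding="utf-8") as src:
            for line in src:
                if out is None or count >= max_lines:
                    if out:
                        out.close()
                    part += 1
                    out = (source.parent / f"{source.stem}{part}.dat").open("w", encoding="utf-8")
                    count = 0
                out.write(line)
                count += 1
        if out:
            out.close()

    split_dat(Path("deps.dat"))  # assumed file name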

If we have multiple files for an entity, could the files be loaded together?

Yes. Place only these files in the upload folder and use the Mass Upload screen to trigger an upload at the entity group level; this processes all the files placed in the upload folder.

Can a zip file have two .dat files for the same table?

No. If you have multiple files for the same table and are zipping files to load dependent tables together, you need one zip file per table/.dat file for the dependent tables.

What will happen if I have two files with the same name?

While uploading files, the second file will overwrite the first.

What happens if the filename does not match the pattern?

Such files will not be listed in the import screen, and the mass upload will skip them.

How long does it take to download the templates?

This usually takes about 60 seconds, but could be longer based on network latency.

Are there performance issues during import?

If there are too many errors, data processing may slow down. It is recommended to truncate the error table, DC_FILE_ERRORS, at regular intervals. Also, the offline validator should be used to verify the input files and identify errors related to data type mismatches, size mismatches, and check constraint violations before starting the conversion process.
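If you have sufficient privileges, the clear-down can be scripted as in the sketch below; the connection details are placeholders, and the same statement can equally be run from the APEX Data Viewer.

    # Minimal sketch: truncate the error table at a convenient interval.
    import oracledb

    conn = oracledb.connect(user="dc_user", password="...", dsn="host/service")  # placeholders
    with conn.cursor() as cur:
        cur.execute("TRUNCATE TABLE DC_FILE_ERRORS")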

Why does the file processing freeze at import?

Files getting stuck at the Import stage is usually due to an issue with the file, not with the performance of the data migration tool. To avoid this issue, ensure the following points (a pre-check sketch follows this list):

  • Avoid zero byte or empty files in the zip file.

  • The zip file should contain only .dat files with filenames expected for the entity.

  • The zip file should not contain a folder inside it.

  • For example, the naming pattern for the DEPS table is defined as deps*.dat. The * in the name can be replaced by a number (such as deps1.dat, deps2.dat, and so on) or just left off, depending on your conversion plans.

  • Name each zip file uniquely using the pattern ITEM*.zip, where the * in the name can be replaced by a number or just left off, depending on your conversion plans. For example, use ITEM12.zip rather than ITEM.zip for all the files. This helps with debugging if one or more files are corrupted.

  • If the file has a large number of duplicates, then it could lead to locks as the records are processed in parallel.

If you still run into this issue, attach the affected file to the SR to enable further investigation.
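The first three points in the list can be pre-checked before upload with a sketch like the one below. The zip name is illustrative, and *.dat is used as a catch-all pattern; substitute the entity's expected pattern.

    # Minimal sketch: flag folders, zero-byte entries, and non-.dat files
    # inside a zip before it is uploaded.
    import zipfile
    from fnmatch import fnmatch

    def check_zip(path, pattern="*.dat"):
        problems = []
        with zipfile.ZipFile(path) as zf:
            for info in zf.infolist():
                if info.is_dir():
                    problems.append(f"{info.filename}: folders are not allowed")
                elif info.file_size == 0:
                    problems.append(f"{info.filename}: zero-byte file")
                elif not fnmatch(info.filename, pattern):
                    problems.append(f"{info.filename}: does not match {pattern}")
        return problems

    for issue in check_zip("ITEM12.zip"):  # assumed zip name
        print(issue)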

Why is the first field of the first line rejected as INVALID NUMBER?

This happens if the encoding of the .dat file is UTF-8 with BOM instead of plain UTF-8. The BOM adds a sequence of bytes at the start of the text stream. The offline validator and the data conversion tool will reject such a starting line during import if the first field is expected to be a number. If the first field is a VARCHAR2, the junk characters are accepted and included in the field's data, which is caught only if business validations that check dependencies run on that field.

Ensure that the data file encoding is UTF-8. You can change the file encoding either manually, through an application such as Notepad++, or in the source file generation code.
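If regenerating the files is not practical, the BOM can also be stripped in place, as in the sketch below (the file name is illustrative).

    # Minimal sketch: detect a UTF-8 BOM at the start of a .dat file and
    # rewrite the file without it, leaving the rest of the bytes unchanged.
    import codecs
    from pathlib import Path

    def strip_bom(path: Path) -> bool:
        raw = path.read_bytes()
        if raw.startswith(codecs.BOM_UTF8):
            path.write_bytes(raw[len(codecs.BOM_UTF8):])
            return True   # BOM found and removed
        return False      # already plain UTF-8

    if strip_bom(Path("deps1.dat")):  # assumed file name
        print("UTF-8 BOM removed")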

What is the time benchmark for each entity?

Environment factors, including sizing and data setup, will impact the timings, so no fixed benchmark can be given.

How do I monitor the progress of large volume entities?

You can monitor the DB sessions for the data conversion tool using the view V_DC_SESSION_INFO.

For item and purchase order entities, you can also query the following tables from the APEX Data Viewer (a query sketch follows this list):

  • SVC_PROCESS_CHUNKS

  • CORESVC_PO_CHUNKS
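A simple way to poll these from outside the screen is sketched below, again with python-oracledb as an illustration and placeholder connection details; the same SELECT statements can be run from the APEX Data Viewer. The column lists are release specific, so only row counts are shown here.

    # Minimal sketch: poll the monitoring view and chunk tables for row counts.
    import time
    import oracledb

    conn = oracledb.connect(user="dc_user", password="...", dsn="host/service")  # placeholders
    with conn.cursor() as cur:
        for _ in range(3):  # poll a few times
            for table in ("V_DC_SESSION_INFO", "SVC_PROCESS_CHUNKS", "CORESVC_PO_CHUNKS"):
                cur.execute(f"SELECT COUNT(*) FROM {table}")
                print(table, "rows:", cur.fetchone()[0])
            time.sleep(30)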

How do I check the time taken for file processing?

The DC_PROCESS_TRACKER table holds the details of the time taken during each stage of the data conversion. Refer to the data conversion data model to understand which tables can be queried for analysis.
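For a quick look at the tracker rows, a sketch like the one below can be used (placeholder connection details; the query can also be run from the APEX Data Viewer). Column names are release specific, so SELECT * is used and the column list is printed first.

    # Minimal sketch: fetch a sample of DC_PROCESS_TRACKER rows for analysis.
    import oracledb

    conn = oracledb.connect(user="dc_user", password="...", dsn="host/service")  # placeholders
    with conn.cursor() as cur:
        cur.execute("SELECT * FROM DC_PROCESS_TRACKER")
        print([d[0] for d in cur.description])  # column names
        for row in cur.fetchmany(20):           # first 20 rows
            print(row)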

What values need to be set for thread count and chunk size on entities?

This would be based on:

  • The number of threads your environment can manage based on your machine size. See the "Appendix: Best Practices" section above for more details.

  • The data volume in the import files also plays a role in defining the optimal thread count and chunk size. You may need to do a couple of mock runs to arrive at the numbers that best suit your data load.

How do I identify and correct data when an entire chunk is rejected?

There are a few validations that can cause an entire chunk to be rejected due to errors in one or more records. To isolate and correct these records, execute the entity with a chunk size of 1.

Can I perform database operational tasks like enable or disable a trigger, kill a DB session, etc.?

The task execution engine in the data conversion tool invokes supported database operations that would otherwise require assistance from the Oracle Cloud Operations team. See the "Task Execution Engine" section above for more details.