11 Exporting and Importing OSM Schema Data

This chapter provides information about how to selectively export and import schema data, which includes order and model data (cartridge data), from an Oracle Communications Order and Service Management (OSM) database.

About Exporting and Importing OSM Schema Data

You can export an OSM schema using standard Oracle Database tools. Exporting OSM schema data is useful for providing support personnel with the information that they require to troubleshoot a failed order or range of orders. You can export an entire OSM schema; however, a schema is typically very large, with hundreds of thousands of orders. Exporting an entire schema is rarely practical because of its size, the resources the export consumes, and the time it takes. It is usually more feasible to selectively export specific orders or a range of orders.

This chapter provides information about exporting and importing order and model data from one database to another using the OSM export/import utility package. Exporting only the data that is causing an issue allows support to more quickly locate and resolve issues. You can use the utilities in the package to dynamically generate the files required for exporting data, and to prepare and finalize the schema for import.

Note:

The utilities do not provide an effective means of backing up and restoring database data. For more information about how to do this, see "Backing Up and Restoring OSM Files and Data".

You can follow several scenarios to export and import OSM data, depending on the reason and the type of data that you need.

Exporting and Importing the OSM Model Data Only

You can export and then import only the OSM model data (cartridge data). To do this, you use the export/import utilities to generate a PAR file with options for exporting the OSM model.

Note:

Model tables that are system-managed, or that contain system-managed parameters, are excluded from the export because these tables are created by the database installer and already exist in the target schema.

Exporting OSM Model Data

To export model data only:

  1. Open a terminal and log in to SQL*Plus as the OSM core schema user.

  2. Verify that the export/import utility package (om_exp_imp_util_pkg) exists by running the following command:

    select count(*) from user_objects where object_type = 'PACKAGE' and OBJECT_NAME = 'OM_EXP_IMP_UTIL_PKG';
     
      COUNT(*)
    ----------
             1

    If the count that is returned is 1, the package exists and you do not need to create it. If the count is zero, you must create the package.

    Note:

    If the export/import utility package does not exist in the database, you can run the installer to create it. For information about running the installer, see OSM Installation Guide.

    When you run the OSM installer, make sure that you select the "Database Schema" component to create the export/import utility package.

  3. Open another terminal and take the OSM server offline.

  4. Return to the original terminal and allow the output to be generated by running the following command:

    set serveroutput on
  5. Prevent extra white space in the generated PAR file by running the following command:

    set trimspool on

    Extra white space in the PAR file causes the export to fail.

  6. Specify the file where you want to capture the generated output by running the following command. The database creates this file in the directory from which you started the SQL*Plus session:

    spool osm_expdp_model_parfile.txt
  7. Export the model by running the om_exp_imp_util_pkg.generate_expdp_model_parfile procedure:

    exec om_exp_imp_util_pkg.generate_expdp_model_parfile
  8. Stop capturing the generated output by running the following command:

    spool off
  9. Log off the SQL*Plus session. For example:

    exit
  10. Using a text editor, remove the following lines from the osm_expdp_model_parfile.txt file:

    • exec om_exp_imp_util_pkg.generate_expdp_model_parfile

    • PL/SQL procedure successfully completed.

    • spool off

  11. (Optional) Modify PAR file parameters as needed in osm_expdp_model_parfile.txt file. For more information, see "Changing PAR File Parameters".

  12. Export the model tables by running the following command:

    expdp SourceOSMCoreSchemaUserName PARFILE=osm_expdp_model_parfile.txt
  13. Print the schema data to a text file by doing the following:

    1. Log in to SQL*Plus as the OSM core schema user.

    2. Allow the output to be generated by running the following command:

      set serveroutput on
    3. Specify the file where you want to capture the generated output by running the following command:

      spool osm_expdp_schema_info.txt
    4. Print the schema by running the om_exp_imp_util_pkg.print_schema_info procedure:

      exec om_exp_imp_util_pkg.print_schema_info;
    5. Stop capturing the generated output by running the following command:

      spool off

      Note:

      Keep this text file with the OSM database dump (dmp) files because the file contains information that will be used when you import the dmp files.

      The Data Pump Export utility unloads data and metadata into a set of operating system files called a dump file set, which is then imported using the Data Pump Import utility. For more information about the Oracle Data Pump utility and dmp files, see "Oracle Data Pump Export" in Oracle Database Utilities.

  14. Restart the OSM server you exported the data from.
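Removing the SQL*Plus echo lines from the spooled file (step 10) can be scripted. The following is a minimal sed-based sketch; the function name is illustrative, and it assumes the unwanted lines appear exactly as listed in step 10:

```shell
# clean_parfile: strip the SQL*Plus echo lines that spool captures
# alongside the generated PAR file content (see step 10).
clean_parfile() {
  tmp=$(mktemp)
  sed -e '/^exec om_exp_imp_util_pkg\.generate_expdp_model_parfile/d' \
      -e '/^PL\/SQL procedure successfully completed\./d' \
      -e '/^spool off/d' \
      "$1" > "$tmp" && mv "$tmp" "$1"
}

# Usage: clean_parfile osm_expdp_model_parfile.txt
```

Writing to a temporary file and moving it back avoids the non-portable `sed -i` flag.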

Preparing the Target OSM Schema Before Import

Before you import the OSM model data that you exported from a source schema, you must prepare the OSM target schema to which you want to import that data.

Creating the Target OSM Schema

If the target schema does not already exist, run the OSM installer (of the same OSM version as the source schema) to create it. If you require a fully functional environment, you must also manually create any WebLogic resources, such as users, groups, custom queues, and so on.

Adding Partitions

If the target schema is partitioned and you are importing order data, you must ensure that the order IDs that you want to import map to existing partitions. If any order ID is greater than or equal to the greatest partition upper bound, add one or more partitions as needed.

If the source schema uses partitioning realms (for example, for short-lived orders), the size and order of the partitions, and the partitioning realms that they are associated with on the source system, must be duplicated on the target system. Partitions associated with partitioning realms must be added after the OSM model data is imported, because the partitioning realms are imported with the model data. See "Partitioning Realms" for more information.

If, for example, the source schema has the partitions and partitioning realms that are shown in Table 11-1, partitions of the same sizes, associated with the same realms, must be created on the target schema. For this example, you run the following commands after the partitioning realms are created in the target schema by the OSM model import:

exec om_part_maintain.add_partition(a_tablespace, 'default_order');
exec om_part_maintain.add_partition(a_tablespace, 'short_lived_orders');

where a_tablespace is the tablespace for new partitions. This procedure modifies the default tablespace attribute of partitioned tables with the specified tablespace before adding partitions. If you do not specify the tablespace, or if the a_tablespace value is null, each partition is created on the default tablespace of the partitioned table (for example, on the same tablespace as the most recently added partition).

Table 11-1 shows an example of a set of partitions that use the partitioning realms feature.

Table 11-1 Example Set of Partitions That Use Partitioning Realms

Partition Name        Partition Size  order_seq_id Lower Bound  order_seq_id Upper Bound  Partitioning Realm

P_000000000000100001  100000          1                         100000                    default_order

P_000000000010600001  500000          10100001                  10600000                  short_lived_orders

For information about using partitioning realms, see "Partitioning Realms". For more information about adding partitions, see "Managing the OSM Database Schema".

To add partitions to a schema:

  1. Log in to SQL*Plus as the OSM core schema user.

  2. Add partitions by running the following command:

    exec om_part_maintain.add_partitions(n, tablespace, realm_mnemonic)

    where:

    • n is the number of partitions that you want to add.

    • tablespace is the tablespace for the new partitions.

    • realm_mnemonic is the mnemonic of the partitioning realm associated with these partitions. This value is used only for schemas that use partitioning realms. If this value is not provided, the default realm, default_order, is used.

  3. Query the user_tab_partitions table to view the newly added partitions.

    select * from user_tab_partitions where table_name = 'OM_ORDER_HEADER';

Note:

If you are not on the latest version of your OSM release, the add_partitions procedure might not work. In this case, see "Adding Partitions (Online or Offline)" for information about how to add partitions.
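As a quick sanity check before adding partitions, you can compute how many fixed-size partitions are required so that the largest order ID you plan to import falls below the greatest upper bound. This is an illustrative shell sketch, not an OSM utility, and it assumes uniformly sized partitions:

```shell
# partitions_needed: given the current greatest partition upper bound,
# the partition size, and the largest order ID to import, print how many
# partitions must be added so that every order ID maps to a partition.
# An order ID equal to the upper bound needs a new partition (the bound
# is exclusive).
partitions_needed() {
  upper=$1; size=$2; max_id=$3
  n=0
  while [ "$max_id" -ge "$upper" ]; do
    upper=$((upper + size))
    n=$((n + 1))
  done
  echo "$n"
}

# Example: greatest upper bound 100000, partition size 100000,
# largest order ID to import 250000 -> prints 2
partitions_needed 100000 100000 250000
```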

Importing OSM Model Data

After you prepare the target schema, import the OSM model data and finalize the target schema by running the target schema setup script.

For information about Oracle Data Pump and the dmp files, see Oracle Database documentation.

To import OSM model data:

  1. Open a terminal and take the OSM server offline.

  2. (Optional) Purge the existing OSM model by running the following command:

    exec om_exp_imp_util_pkg.purge_model

    Note:

    Purging the data before importing it to the target schema ensures there are no constraint violations when importing duplicate data.

  3. Open another terminal, log in to SQL*Plus as the OSM core schema user.

  4. Disable constraints and triggers, and stop jobs by running the following command:

    exec om_exp_imp_util_pkg.pre_impdp_setup
    

    This command ensures that there are no errors when importing OSM data.

  5. From another terminal, import the model tables. For example:

    impdp TargetOSMCoreSchemaUserName DIRECTORY=DATA_PUMP_DIR DUMPFILE=osm_expdp_model%U.dmp LOGFILE=osm_impdp_model.log REMAP_SCHEMA=SourceOSMCoreSchemaUserName:TargetOSMCoreSchemaUserName REMAP_TABLESPACE=SourceOSMTablespace:TargetOSMTablespace
    
  6. Finalize the target OSM schema by running the following command:

    exec om_exp_imp_util_pkg.post_impdp_setup
    

    This enables the constraints and triggers, and resubmits the jobs that were disabled and stopped before importing the OSM data.

  7. Restart the OSM server.
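When scripting the import, it helps to compose the impdp command line from the schema and tablespace names. The sketch below only prints the command (a dry run); all names are placeholders:

```shell
# build_impdp_cmd: compose the Data Pump import command from step 5
# above. All argument values are placeholders; the function only prints
# the command so that it can be reviewed before execution.
build_impdp_cmd() {
  src_user=$1; tgt_user=$2; src_ts=$3; tgt_ts=$4
  echo "impdp $tgt_user DIRECTORY=DATA_PUMP_DIR" \
       "DUMPFILE=osm_expdp_model%U.dmp LOGFILE=osm_impdp_model.log" \
       "REMAP_SCHEMA=$src_user:$tgt_user" \
       "REMAP_TABLESPACE=$src_ts:$tgt_ts"
}

# Example (prints the command for review):
# build_impdp_cmd osm_src osm_tgt OSM_TS_SRC OSM_TS_TGT
```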

Exporting and Importing the OSM Model and a Single Order

In some cases, you might want to analyze a single known order from a large schema, and you can selectively export that specific order. To extract a single order for analysis in another environment, you must export both the order and the model: first the order, and then the model.

Exporting OSM Order Data

Order data is saved in order tables. To select the order data that you want to export, query the order ID. Most tables store the order ID in the order_seq_id column; the few exceptions, which use a different name for the order ID column, must be exported and imported separately.

Preparing to Export Order Tables for a Single Order

To prepare to export order tables:

  1. Log in to SQL*Plus as the OSM core schema user.

  2. Verify that the export/import utility package (om_exp_imp_util_pkg) exists by running the following command:

    select count(*) from user_objects where object_type = 'PACKAGE' and OBJECT_NAME = 'OM_EXP_IMP_UTIL_PKG';
     
      COUNT(*)
    ----------
             1

    If the count that is returned is 1, the package exists and you do not need to create it.

    Note:

    If the export/import utility package does not exist in the database, you can run the installer to create it. For information about running the installer, see OSM Installation Guide.

    When you run the OSM installer, make sure that you select the "Database Schema" component to create the export/import utility package.

  3. Verify that the order that you want to export is not open by running the following SQL commands:

    set serveroutput on
    exec om_exp_imp_util_pkg.print_open_order_count(a_min_order_id => orderid_min, a_max_order_id => orderid_max);

    where both orderid_min and orderid_max are the ID of the order that you want to export. For example: a_min_order_id => 123456, a_max_order_id => 123456. For more information, see "About Order Export Queries".

    If the order is not open, the following message is displayed:

    There are no open orders

    If the specified order is open, the following message is displayed, indicating that you must take the OSM server offline:

    The open order count is: 1

    Note:

    Oracle recommends that you always check for open orders before you export order data.

  4. If the order that you want to export is open, take the OSM server offline.

  5. Allow the output to be generated by running the following command:

    set serveroutput on
  6. Prevent extra white space in the generated PAR file by running the following command:

    set trimspool on

    Extra white space in the PAR file causes the export to fail.

Exporting Order Tables That Define an Order Sequence ID

To export order tables that define an ID in the order_seq_id column:

  1. Follow all the steps of the procedure in "Preparing to Export Order Tables for a Single Order".

  2. Specify the file where you want to capture the generated output by running the following command:

    spool osm_expdp_orders_parfile.txt
    
  3. Export orders by running the om_exp_imp_util_pkg.generate_expdp_order_parfile procedure.

    exec om_exp_imp_util_pkg.generate_expdp_order_parfile;
    
  4. Stop capturing the generated output by running the following command:

    spool off
    
  5. Modify the PAR file option query in osm_expdp_orders_parfile.txt to select the single order that you want to export.

    For example, modify where order_seq_id > 0 to where order_seq_id = 123456; where 123456 is the ID of the order that you want to export. For more information, see "About Order Export Queries".

  6. (Optional) Modify PAR file parameters as needed in osm_expdp_orders_parfile.txt. For more information, see "Changing PAR File Parameters".

  7. Export the order tables by running the following command:

    expdp SourceOSMCoreSchemaUserName PARFILE=osm_expdp_orders_parfile.txt
    
  8. Restart the OSM server.
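The PAR file edit in step 5 can also be scripted. The following is a minimal sketch; the function name is illustrative, and it assumes the generated file contains a single QUERY line of the form shown in this chapter:

```shell
# set_order_query: replace the generated QUERY option so that only the
# given order is exported (see step 5). The order ID 123456 used in the
# usage line mirrors the example in the text.
set_order_query() {
  parfile=$1; order_id=$2
  tmp=$(mktemp)
  sed "s/^QUERY=.*/QUERY=\" where order_seq_id = $order_id\"/" "$parfile" > "$tmp" \
    && mv "$tmp" "$parfile"
}

# Usage: set_order_query osm_expdp_orders_parfile.txt 123456
```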

Exporting the OSM Model Data

After you export the single order, export the OSM model data. For more information, follow the procedure to export the model data in the topic "Exporting and Importing the OSM Model Data Only".

Note:

Model tables that are system-managed, or contain system-managed parameters, are excluded from the export because these tables are created by the database installer and already exist in the target schema.

Importing the OSM Model and Order Data

Before you import the OSM model data, prepare the target schema. For more information, see "Preparing the Target OSM Schema Before Import".

After you prepare the target schema, you import first the OSM model data, and then the OSM order data that you previously exported. Most tables store the order ID in the order_seq_id column. There are a few exceptions that you must export and import separately.

To import the OSM data:

  1. Stop the OSM server that you want to import to.

  2. (Optional) Purge existing OSM orders by running the following command:

    exec om_exp_imp_util_pkg.purge_orders

    Note:

    The orders must match the model. Purging the data before importing it to the target schema ensures there are no constraint violations when importing duplicate data.

  3. (Optional) Purge the existing OSM model by running the following command:

    exec om_exp_imp_util_pkg.purge_model

    Note:

    Purging the data before importing it to the target schema ensures there are no constraint violations when importing duplicate data.

  4. Log in to SQL*Plus as the OSM core schema user.

  5. Disable constraints and triggers, and stop jobs by running the following command:

    exec om_exp_imp_util_pkg.pre_impdp_setup
    

    Running this command ensures that there are no errors when importing OSM data.

  6. Import the model data by running the following command:

    impdp TargetOSMCoreSchemaUserName DIRECTORY=DATA_PUMP_DIR DUMPFILE=osm_expdp_model%U.dmp LOGFILE=osm_impdp_model.log REMAP_SCHEMA=SourceOSMCoreSchemaUserName:TargetOSMCoreSchemaUserName REMAP_TABLESPACE=SourceOSMTablespace:TargetOSMTablespace
    

    For more information about these parameters, see "About Import Parameters".

  7. Import order tables that define an order sequence ID by running the following command:

    impdp TargetOSMCoreSchemaUserName DIRECTORY=DATA_PUMP_DIR DUMPFILE=osm_expdp_orders%U.dmp LOGFILE=osm_impdp_orders.log REMAP_SCHEMA=SourceOSMCoreSchemaUserName:TargetOSMCoreSchemaUserName REMAP_TABLESPACE=SourceOSMTablespace:TargetOSMTablespace TRANSFORM=oid:n
    

    For more information about these parameters, see "About Import Parameters".

  8. Finalize the OSM target schema by running the following command:

    exec om_exp_imp_util_pkg.post_impdp_setup
    

    This enables the constraints and triggers, and resubmits the jobs that were disabled and stopped before importing the OSM data.

  9. Restart the OSM server.

Exporting and Importing a Range of Orders and the OSM Model

You can replicate an OSM schema using a subset, or range, of orders from a large schema.

Exporting the OSM Order Data Range

Order data is saved in order tables. To select the order data that you want to export, query the order ID. Most tables store the order ID in the order_seq_id column. There are a few exceptions, which must be exported and imported separately.

Print the schema data. For more information, see "Exporting OSM Order Data".

Preparing to Export Order Tables for a Range of Orders

To prepare to export order tables:

  1. Log in to SQL*Plus as the OSM core schema user.

  2. Verify that the export/import utility package (om_exp_imp_util_pkg) exists by running the following command:

    select count(*) from user_objects where object_type = 'PACKAGE' and OBJECT_NAME = 'OM_EXP_IMP_UTIL_PKG';
     
      COUNT(*)
    ----------
             1

    If the count that is returned is 1, the package exists and you do not need to create it.

    Note:

    If the export/import utility package does not exist in the database, you can run the installer to create it. For information about running the installer, see OSM Installation Guide.

    When you run the OSM installer, make sure that you select the Database Schema component to create the export/import utility package.

  3. Verify that none of the orders that you want to export are open by running the following SQL commands:

    set serveroutput on
    exec om_exp_imp_util_pkg.print_open_order_count(a_min_order_id => orderid_min);
    

    where orderid_min is the minimum bound value of a range of order IDs. For more information, see "About Order Export Queries".

    If none of the orders in the specified range are open, the following message is displayed:

    There are no open orders

    If any of the orders within the specified range are open, the following message is displayed, indicating that you must take the OSM server offline:

    The open order count is: n
    

    where n is the number of open orders.

    Note:

    Oracle recommends that you always check for open orders before you export data.

  4. If any of the orders that you want to export are open, take the OSM server offline.

  5. Allow the output to be generated by running the following command:

    set serveroutput on
    
  6. Prevent extra white space in the generated PAR file by running the following command:

    set trimspool on
    

    Extra white space in the PAR file causes the export to fail.

Exporting Order Tables That Define an Order Sequence ID

To export order tables that define an order sequence ID:

  1. Follow all the steps of the procedure in "Preparing to Export Order Tables for a Single Order".

  2. Specify the file where you want to capture the generated output by running the following command:

    spool osm_expdp_orders_parfile.txt
    
  3. Export the orders by running the om_exp_imp_util_pkg.generate_expdp_order_parfile procedure. For example:

    exec om_exp_imp_util_pkg.generate_expdp_order_parfile(a_min_order_id => 100000);
     
    DIRECTORY=DATA_PUMP_DIR
    DUMPFILE=osm_expdp_orders%U.dmp
    FILESIZE=1GB
    LOGFILE=osm_expdp_orders.log
    CONTENT=DATA_ONLY
    PARALLEL=1
    QUERY=" where order_seq_id >= 100000"
    TABLES=(
    OM_ATTACHMENT,
    OM_HIST$DATA_CHANGE_NOTIF,
    OM_HIST$FALLOUT,
    OM_HIST$FALLOUT_NODE_INSTANCE,
    OM_HIST$FLOW,
    OM_HIST$NOTIFICATION,
    OM_HIST$ORDER_HEADER,
    OM_HIST$ORDER_INSTANCE,
    OM_HIST$ORDER_STATE
    ...
    OM_JMS_THREAD,
    OM_SYSTEM_EVENT
    )
     
    PL/SQL procedure successfully completed.
    
  4. Stop capturing the generated output by running the following command:

    spool off
    
  5. (Optional) Modify PAR file parameters as needed in osm_expdp_orders_parfile.txt. For more information, see "Changing PAR File Parameters".

  6. Export the order tables by running the following command:

    expdp SourceOSMCoreSchemaUserName PARFILE=osm_expdp_orders_parfile.txt
    
  7. Restart the OSM server.

Exporting the OSM Model Data

After you export the orders, you can export the OSM model data. For information about exporting the model data, follow the procedure in the topic "Exporting and Importing the OSM Model Data Only".

Note:

Model tables that are system-managed, or contain system-managed parameters, are excluded from the export because these tables are created by the database installer and already exist in the target schema.

Importing OSM Model and Order Data

Before you import the OSM model data, prepare the target schema. For more information, see "Preparing the Target OSM Schema Before Import".

After you prepare the target schema, you import first the OSM model data, and then the OSM order data that you previously exported. As with the export procedure, most tables store the order ID in the order_seq_id column. There are a few exceptions that must be exported and imported separately. Use the order table import procedure that corresponds to the table in which the order ID that you want to import is stored.

To import the OSM data:

  1. Stop the OSM server that you want to import to.

  2. (Optional) Purge existing OSM orders by running the following command:

    exec om_exp_imp_util_pkg.purge_orders

    Note:

    The orders must match the model. Purging the data before importing it to the target schema ensures there are no constraint violations when importing duplicate data.

  3. (Optional) Purge the existing OSM model by running the following command:

    exec om_exp_imp_util_pkg.purge_model

    Note:

    Purging the data before importing it to the target schema ensures there are no constraint violations when importing duplicate data.

  4. Log in to SQL*Plus as the OSM core schema user.

  5. Disable constraints and triggers, and stop jobs by running the following command:

    exec om_exp_imp_util_pkg.pre_impdp_setup
    

    Running this command ensures that there are no errors when importing OSM data.

  6. Import the model data by running the following command:

    impdp TargetOSMCoreSchemaUserName DIRECTORY=DATA_PUMP_DIR DUMPFILE=osm_expdp_model%U.dmp LOGFILE=osm_impdp_model.log REMAP_SCHEMA=SourceOSMCoreSchemaUserName:TargetOSMCoreSchemaUserName REMAP_TABLESPACE=SourceOSMTablespace:TargetOSMTablespace
    

    For more information about these parameters, see "About Import Parameters".

  7. Import order tables that define an order sequence ID by running the following command:

    impdp TargetOSMCoreSchemaUserName DIRECTORY=DATA_PUMP_DIR DUMPFILE=osm_expdp_orders%U.dmp LOGFILE=osm_impdp_orders.log REMAP_SCHEMA=SourceOSMCoreSchemaUserName:TargetOSMCoreSchemaUserName REMAP_TABLESPACE=SourceOSMTablespace:TargetOSMTablespace TRANSFORM=oid:n
    

    For more information about these parameters, see "About Import Parameters".

  8. Finalize the OSM target schema by running the following command:

    exec om_exp_imp_util_pkg.post_impdp_setup
    

    This enables the constraints and triggers, and resubmits the jobs that were disabled and stopped before importing the OSM data.

  9. Restart the OSM server.

Exporting and Importing a Range of OSM Orders Only

If you have a target schema that already contains a subset of orders and the OSM model data, you can use the information in this section to export and import a range of OSM orders only.

Note:

This section does not provide information about importing OSM model data to a target schema. If you want to do that, see "Importing OSM Model Data."

Exporting an Additional Range of Orders

Order data is saved in order tables. To select the range of order data that you want to export, query the order IDs at each end of the range. Most tables store the order ID in the order_seq_id column. There are a few exceptions, which must be exported and imported separately.

Preparing to Export Order Tables for a Range of Orders

To prepare to export order tables for a range of orders:

  1. Log in to SQL*Plus as the OSM core schema user.

  2. Verify that the export/import utility package (om_exp_imp_util_pkg) exists by running the following command:

    select count(*) from user_objects where object_type = 'PACKAGE' and OBJECT_NAME = 'OM_EXP_IMP_UTIL_PKG';
     
      COUNT(*)
    ----------
             1

    If the count that is returned is 1, the package exists and you do not need to create it.

    Note:

    If the export/import utility package does not exist in the database, you can run the installer to create it. For information about running the installer, see OSM Installation Guide.

    When you run the OSM installer, make sure that you select the Database Schema component to create the export/import utility package.

  3. Verify that the orders that you want to export are not open by running the following SQL commands:

    set serveroutput on
    exec om_exp_imp_util_pkg.print_open_order_count(a_min_order_id => orderid_min);

    where orderid_min is the minimum bound value of a range of order IDs. For more information, see "About Order Export Queries".

    If none of the orders within the specified range are open, the following message is displayed:

    There are no open orders

    If any of the orders within the specified range are open, the following message is displayed, indicating that you must take the OSM server offline:

    The open order count is: n

    where n is the number of open orders.

    Note:

    Oracle recommends that you always check for open orders before you export data.

  4. If any of the orders that you want to export are open, take the OSM server offline.

  5. Allow the output to be generated by running the following command:

    set serveroutput on
  6. Prevent extra white space in the generated PAR file by running the following command:

    set trimspool on

    Extra white space in the PAR file causes the export to fail.

Exporting a Range of Orders from Order Tables That Define an Order Sequence ID

To export a range from order tables that define an order sequence ID:

  1. Follow all the steps of the procedure in "Preparing to Export Order Tables for a Range of Orders".

  2. Specify the file where you want to capture the generated output by running the following command:

    spool osm_expdp_orders_parfile.txt
    
  3. Export orders by running the om_exp_imp_util_pkg.generate_expdp_order_parfile procedure. For example:

    exec om_exp_imp_util_pkg.generate_expdp_order_parfile(a_min_order_id => 100000, a_max_order_id => 200000);
     
    DIRECTORY=DATA_PUMP_DIR
    DUMPFILE=osm_expdp_orders%U.dmp
    FILESIZE=1GB
    LOGFILE=osm_expdp_orders.log
    CONTENT=DATA_ONLY
    PARALLEL=1
    QUERY=" where order_seq_id between 100000 and 200000"
    TABLES=(
    OM_ATTACHMENT,
    OM_HIST$DATA_CHANGE_NOTIF,
    OM_HIST$FALLOUT,
    OM_HIST$FALLOUT_NODE_INSTANCE,
    OM_HIST$FLOW,
    OM_HIST$NOTIFICATION,
    OM_HIST$ORDER_HEADER,
    OM_HIST$ORDER_INSTANCE,
    OM_HIST$ORDER_STATE
    ...
    OM_JMS_THREAD,
    OM_SYSTEM_EVENT
    )
     
    PL/SQL procedure successfully completed.
    
  4. Stop capturing the generated output by running the following command:

    spool off
    
  5. (Optional) Modify PAR file parameters as needed in osm_expdp_orders_parfile.txt. For more information, see "Changing PAR File Parameters".

  6. Export the order tables by running the following command:

    expdp SourceOSMCoreSchemaUserName PARFILE=osm_expdp_orders_parfile.txt
    
  7. Restart the OSM server.

You can use a variety of queries to selectively export the orders that you want. For more information, see "About Order Export Queries".

Importing an Additional Range of Orders

Before you import the OSM order data, prepare the target schema. For more information, see "Preparing the Target OSM Schema Before Import."

After you prepare the target schema, you import first the OSM order data that you previously exported. As with the export procedure, most tables store the order ID in the order_seq_id column. There are a few exceptions, which must be exported and imported separately. Use the order table import procedure that corresponds to the table in which the order ID that you want to import is stored.

To import order data:

  1. Stop the target OSM server.

  2. Log in to SQL*Plus as the OSM core schema user.

  3. Disable constraints and triggers, and stop jobs by running the following command:

    exec om_exp_imp_util_pkg.pre_impdp_setup

    Running this command ensures that there are no errors when importing OSM data.

  4. Import order tables that use a different name for the order ID column by running the following command:

    impdp TargetOSMCoreSchemaUserName DIRECTORY=DATA_PUMP_DIR DUMPFILE=osm_expdp_orders%U.dmp LOGFILE=osm_impdp_orders.log REMAP_SCHEMA=SourceOSMCoreSchemaUserName:TargetOSMCoreSchemaUserName REMAP_TABLESPACE=SourceOSMTablespace:TargetOSMTablespace TRANSFORM=oid:n

    For more information about these parameters, see "About Import Parameters".

  5. Import order tables that define an order sequence ID by running the following command:

    impdp TargetOSMCoreSchemaUserName DIRECTORY=DATA_PUMP_DIR DUMPFILE=osm_expdp_orders%U.dmp LOGFILE=osm_impdp_orders.log REMAP_SCHEMA=SourceOSMCoreSchemaUserName:TargetOSMCoreSchemaUserName REMAP_TABLESPACE=SourceOSMTablespace:TargetOSMTablespace TRANSFORM=oid:n

    For more information about these parameters, see "About Import Parameters".

  6. Finalize the target OSM schema by running the following command:

    exec om_exp_imp_util_pkg.post_impdp_setup

    This enables the constraints and triggers, and resubmits the jobs that were disabled and stopped before importing the OSM data.

  7. Restart the OSM server.

About Order Export Queries

You can use a variety of queries to select the orders that you want to export. Table 11-2 provides examples of queries that you can use to select the orders to export.

Table 11-2 Example Queries for Selecting Orders to Export

Example Query Data to Export

exec om_exp_imp_util_pkg.generate_expdp_order_parfile;

Exports all orders when exporting order tables that define an order sequence ID.

You can also use this example and edit the PAR file to customize the QUERY option.

exec om_exp_imp_util_pkg.generate_expdp_order_parfile(a_max_order_id => 2000);

This example exports all orders below order ID 2000 for order tables that define an order sequence ID.

exec om_exp_imp_util_pkg.generate_expdp_order_parfile(a_min_order_id => 1000);

This example exports all orders above order ID 1000 for order tables that define an order sequence ID.

exec om_exp_imp_util_pkg.generate_expdp_order_parfile(a_min_order_id => 1000, a_max_order_id => 2000);

This example exports all orders between order ID 1000 and order ID 2000 for order tables that define an order sequence ID.
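On the export side, the queries above are run in SQL*Plus to generate the PAR file, which is then passed to expdp. The following is a sketch under assumptions: gen_order_parfile.sql is a hypothetical one-line wrapper for the generate call, and osm_expdp_orders.par is an assumed file name; use the name of the PAR file that the utility actually writes to your SQL*Plus working directory.

```shell
#!/bin/sh
DRY_RUN=${DRY_RUN:-1}   # set DRY_RUN=0 to actually execute the commands
run() { if [ "$DRY_RUN" = "1" ]; then echo "WOULD RUN: $*"; else "$@"; fi; }

# gen_order_parfile.sql is a hypothetical one-line script containing, e.g.:
#   exec om_exp_imp_util_pkg.generate_expdp_order_parfile(a_min_order_id => 1000, a_max_order_id => 2000);
run sqlplus SourceOSMCoreSchemaUserName @gen_order_parfile.sql

# Pass the generated PAR file to expdp; osm_expdp_orders.par is an assumed name.
run expdp SourceOSMCoreSchemaUserName PARFILE=osm_expdp_orders.par
```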

Changing PAR File Parameters

You can change the parameters in the generated export parameter file (PAR file). The PAR file is generated in the directory where you start SQL*Plus.

Table 11-3 describes the parameters in the PAR file.

Note:

The PAR file contains other parameters; if you change them, the export will not be successful.

For more information about the parameters that are available in the command line mode of the data pump export, see "Oracle Data Pump Export" in Oracle Database Utilities.

Table 11-3 PAR File Parameters

PAR File Parameter | Default | Description

CONTENT

ALL

Enables you to filter what the export unloads: data only, metadata only, or both.

Note: If you change this parameter in any of the export scenarios in this chapter, the export will not be successful.

DIRECTORY

DATA_PUMP_DIR

Specifies the directory to which the export writes the dump file and the log file.

Note: For exports or imports performed in an Oracle RAC environment using Automatic Storage Management, change this parameter to point to the shared location. For more information, see "Oracle RAC Considerations" in the Oracle Database Data Pump documentation.

Note: If you are using a pluggable database (PDB), you cannot use the default DATA_PUMP_DIR directory. You must specify a different variable and directory for the PDB instance. Each PDB instance that you want to use expdp and impdp with must have its own data pump directory.

The following SQL*Plus commands create a new PDB directory variable and set permissions on it:

create directory pdb_variable_name as 'path';
grant read, write on directory pdb_variable_name to osm_core_schema;

where:

  • pdb_variable_name is the name of the PDB directory variable

  • path is the path to the data pump directory (for example /samplepath/pdbdatapumpdir)

  • osm_core_schema is the core OSM schema (for example ordermgmt)

DUMPFILE

osm_expdp_model%U.dmp

Specifies the names, and optionally, the directory objects of dump files for an export job.

FILESIZE

1 GB

Specifies the maximum size of each dump file.

INCLUDE

N/A

Enables you to filter the metadata that is exported by specifying objects and object types for the current export mode.

Note: If you change this parameter in any of the export scenarios in this chapter, the export will not be successful.

LOGFILE

osm_expdp_model.log

Specifies the name and, optionally, the directory for the log file of the export job.

PARALLEL

1

Specifies the maximum number of processes of active execution operating on behalf of the export job.

CLUSTER

Y

This parameter is not available in the generated PAR file. It determines whether Data Pump can use Oracle Real Application Clusters (Oracle RAC) resources and start workers on other Oracle RAC instances. To force Data Pump to use only the instance where the job is started, add CLUSTER=N to the PAR file. Otherwise, ignore this parameter. The default value is Y.

About Import Parameters

You can add parameters in line to the data pump import command (impdp).

Table 11-4 describes the parameters that you can use with impdp.

For more information about the parameters that are available in the command line mode of the data pump import, see "Oracle Data Pump Import" in Oracle Database Utilities.

Table 11-4 Import In-line Parameters

Parameter | Description

DIRECTORY

Specifies the directory in which the import finds the dump file generated by the export and to which it writes the log file.

Note: For imports performed in an Oracle RAC environment using Automatic Storage Management, change this parameter to point to the shared location. For more information, see "Oracle RAC Considerations" in Oracle Database Utilities.

Note: If you are using a pluggable database (PDB), you cannot use the default DATA_PUMP_DIR directory. You must specify a different variable and directory for the PDB instance. Each PDB instance that you want to use impdp with must have its own data pump directory.

The following SQL*Plus commands create a new PDB directory variable and set permissions on it:

create directory pdb_variable_name as 'path';
grant read, write on directory pdb_variable_name to osm_core_schema;

where:

  • pdb_variable_name is the name of the PDB directory variable

  • path is the path to the data pump directory (for example /samplepath/pdbdatapumpdir)

  • osm_core_schema is the core OSM schema (for example ordermgmt)

DUMPFILE

Specifies the names, and optionally, the directory objects of dump files for an import job.

LOGFILE

Specifies the name and, optionally, the directory for the log file of the import job.

TRANSFORM

Specifies whether the types being created should be assigned a new object identifier (OID). For example:

TRANSFORM=oid:n

REMAP_SCHEMA

This parameter specifies the source schema from which objects are loaded and the target schema into which they are loaded.

When importing, if the source and target schema are the same, the REMAP_SCHEMA option does not need to be specified.

REMAP_TABLESPACE

This parameter remaps all objects selected for import with persistent data in the source tablespace to be created in the target tablespace.

When importing, if the source and target schema are the same, the REMAP_TABLESPACE option does not need to be specified.

CLUSTER

This parameter is not available in the generated PAR file. It determines whether Data Pump can use Oracle Real Application Clusters (Oracle RAC) resources and start workers on other Oracle RAC instances. To force Data Pump Import to use only the instance where the job is started, add CLUSTER=N to the import command. Otherwise, ignore this parameter. The default value is Y.

Troubleshooting Export/Import

Errors might occur during the process of exporting or importing data.

Table 11-5 lists some common export errors and their solutions. For more information about troubleshooting errors that might occur when exporting or importing data, see the troubleshooting section of the knowledge article about exporting and importing data [Doc ID 1594152.1], available from the Oracle support website:

https://support.oracle.com

For information about Oracle RAC and data pump import, see "Oracle RAC Considerations" and "Oracle Data Pump Import" in Oracle Database Utilities.

For information about predefined roles in an Oracle Database installation, and about guidelines for securing user and accounts privileges, see Oracle Database Security Guide.

Table 11-5 Common Export Errors

Error | Cause | Solution

ORA-39097: Data Pump job encountered unexpected error -12801

ORA-39065: unexpected master process exception in MAIN

ORA-12801: error signaled in parallel query server PZ99, instance <instanceDetails> (4)

ORA-01460: unimplemented or unreasonable conversion requested

There is an issue with the Oracle Data Pump export tool and Oracle RAC databases.

For more information, see the knowledge article about the issue [Doc ID 13099577.8], available from the Oracle support website:

https://support.oracle.com

Update the following database parameters:

ALTER SYSTEM set parallel_force_local=true;
ALTER SYSTEM set parallel_max_servers=0;
ALTER SYSTEM set parallel_min_servers=0;

UDE-00014: invalid value for parameter, 'include'

The include parameter used in the export options PAR file contains more than 4,000 characters. This is normally due to extra white space at the end of each line when the file is spooled.

As a workaround, run the following command in SQL*Plus before generating the options PAR files:

SQL> SET TRIMSPOOL ON

ORA-39002: invalid operation

ORA-39070: Unable to open the log file.

ORA-29283: invalid file operation

ORA-06512: at "SYS.UTL_FILE", line 536

ORA-29283: invalid file operation

The location specified for the export DIRECTORY option is not accessible.

Update the DIRECTORY option specified in the export command.

For exports or imports performed in an Oracle RAC environment using Automatic Storage Management, the DIRECTORY option should be updated to point to the shared location.

For more information, see "Oracle RAC Considerations" in Oracle Database Utilities.

ORA-39001: invalid argument value

ORA-39000: bad dump file specification

ORA-31641: unable to create dump file "+DATA/osm_expdp_model.dmp"

ORA-17502: ksfdcre:4 Failed to create file +DATA/osm_expdp_model.dmp

ORA-15005: name "osm_expdp_model.dmp" is already used by an existing alias

A previously generated version of the dump file already exists.

Remove the previously generated dump file.

or

Specify the following option in the export PAR file to overwrite it:

REUSE_DUMPFILES=YES

ORA-31693: Table data object "<OSMCoreSchemaUserName>"."OM_ORDER_HEADER":"P_000000000000005001"."SYS_SUBP612617" failed to load/unload and is being skipped due to error:

ORA-06502: PL/SQL: numeric or value error: character string buffer too small

ORA-01403: no data found

ORA-01403: no data found

The data pump export job name is too long.

If no job name is specified when doing the export, the job name is automatically generated from the schema name. If the schema name is long, the job name can exceed the limit of 26 characters, which causes this error.

For more information, see the knowledge article about the issue [Doc ID 1502119.1], available from the Oracle support website:

https://support.oracle.com

Specify the JOB_NAME option in the export PAR file. Make sure the value specified for the option is less than 26 characters long.

JOB_NAME=osm_expdp_job 
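A quick pre-flight check avoids hitting this error at export time. The helper below is hypothetical; the 26-character limit is the one stated above.

```shell
#!/bin/sh
# Hypothetical pre-flight check for the JOB_NAME PAR file option: the value
# must be less than 26 characters, per the guidance above.
JOB_NAME=${JOB_NAME:-osm_expdp_job}
if [ "${#JOB_NAME}" -ge 26 ]; then
  echo "JOB_NAME '$JOB_NAME' is ${#JOB_NAME} characters; it must be less than 26" >&2
  exit 1
fi
echo "JOB_NAME '$JOB_NAME' is OK (${#JOB_NAME} characters)"
```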

Table 11-6 lists some common import errors and their solutions.

Table 11-6 Common Import Errors

Error | Cause | Solution

ORA-31693: Table data object "OSMCoreSchemaUserName"."OM_ORCH_PLAN":"SYS_SUBP44607" failed to load/unload and is being skipped due to error:

ORA-29913: error in executing ODCIEXTTABLEOPEN callout

ORA-29400: data cartridge error

ORA-39779: type "<OSMCoreSchemaUserName>"."OM_T_ORCH_PROCESS" not found or conversion to latest version is not possible

The import did not assign new object identifiers (OIDs) to the types being created.

For more information, see the knowledge article about the issue [Doc ID 351519.1], available from the Oracle support website:

https://support.oracle.com

Make sure the following option is specified when importing. This option assigns new OIDs.

TRANSFORM=oid:n

For more information, see the TRANSFORM parameter in "Oracle Data Pump Import" in Oracle Database Utilities.

ORA-31693: Table data object "<OSMCoreSchemaUserName>"."<OSMTableName>":"SYS_P44941" failed to load/unload and is being skipped due to error:

ORA-29913: error in executing ODCIEXTTABLEFETCH callout

ORA-14400: inserted partition key does not map to any partition

The order IDs being imported are greater than the greatest partition upper bound.

Resolve this error by adding partitions.

For more information, see "Adding Partitions".

ORA-31693: Table data object "<OSMCoreSchemaUserName>"."<OSMTableName>" failed to load/unload and is being skipped due to error:

ORA-00001: unique constraint (<OSMCoreSchemaUserName>.<OSMTableName>) violated

The table already contains the data that is being imported.

Before you import, clean up the table using one or more of the following purge commands:

SQL> exec om_exp_imp_util_pkg.purge_orders

SQL> exec om_exp_imp_util_pkg.purge_model

SQL> exec om_exp_imp_util_pkg.purge_schema

ORA-39001: invalid argument value

ORA-39046: Metadata remap REMAP_TABLESPACE has already been specified.

The same source tablespace has been specified more than once for the REMAP_TABLESPACE option.

This might happen when the OSM Core and Rule Engine schemas use the same tablespace. In this case, specify REMAP_TABLESPACE for this tablespace only once.

ORA-00932: inconsistent datatypes: expected OM_T_ORCH_PROCESS got OM_T_ORCH_PROCESS

There is a known issue with data pump import that causes imports with REMAP_SCHEMA and TYPE definitions to generate this error.

Follow the steps outlined in the scenario "Exporting and Importing a Range of Orders and the OSM Model". When generating the order PAR files, select the option to export all orders, that is, order_target_seq_id > 0.

UDI-31626: operation generated ORACLE error 31626

ORA-31626: job does not exist

ORA-39086: cannot retrieve job information

ORA-06512: at "SYS.DBMS_DATAPUMP", line 3326

ORA-06512: at "SYS.DBMS_DATAPUMP", line 4551

ORA-06512: at line 1

This is an issue with data pump import privileges.

For more information, see the knowledge article about the issue [Doc ID 1459430.1], available from the Oracle support website:

https://support.oracle.com

Apply Patch 13715680, or follow the workaround in the notes of the associated bug to add the missing privileges. For more information, see bug 13715680 on the Oracle support website:

https://support.oracle.com

The missing privileges are:

SQL> GRANT lock any table TO datapump_imp_full_database;
SQL> GRANT alter any index TO datapump_imp_full_database;

ORA-31693: Table data object "<InvalidSourceOSMCoreSchemaUserName>"."OM_SQL_LOG" failed to load/unload and is being skipped due to error:

ORA-00001: unique constraint (<InvalidSourceOSMCoreSchemaUserName>.XPKOM_SQL_LOG) violated

ORA-31693: Table data object "<InvalidSourceOSMCoreSchemaUserName>"."OM_MODEL_CLOB" failed to load/unload and is being skipped due to error:

ORA-29913: error in executing ODCIEXTTABLEFETCH callout

ORA-00001: unique constraint (<InvalidSourceOSMCoreSchemaUserName>.XPKOM_MODEL_CLOB) violated

Import fails if the wrong source schema name is specified.

If the wrong source schema user name is specified in the REMAP_SCHEMA option, data pump tries to import the data to the actual source schema. If the source schema exists in the target instance, the constraint violations are shown.

Verify that the correct source OSM core schema user name is specified in the import command:

impdp TargetOSMCoreSchemaUserName DIRECTORY=DATA_PUMP_DIR DUMPFILE=osm_expdp_orders%U.dmp LOGFILE=osm_impdp_orders.log REMAP_SCHEMA=SourceOSMCoreSchemaUserName:TargetOSMCoreSchemaUserName REMAP_TABLESPACE=SourceOSMTablespace:TargetOSMTablespace TRANSFORM=oid:n

ORA-31631: privileges are required

ORA-39122: Unprivileged users may not perform REMAP_SCHEMA remappings.

The schema user being used for import does not have the imp_full_database role.

This role is required for imports only if you cannot use the system user because of the REMAP_SCHEMA bug.

For more information, see the knowledge article about the issue [Doc ID 1367290.1], available from the Oracle support website:

https://support.oracle.com

REMAP_SCHEMA is not required if you do the import on a different database instance and the schema user is unchanged (uses the same ID and permissions as the source user).

Grant the imp_full_database role to the OSM core schema user. The following command grants this role:

SQL> GRANT imp_full_database TO <OSMCoreSchemaUserName>;

Caution: This role provides a user with access to any data in any schema in the database. Use caution when granting this role to users.

For more information, see "Predefined Roles in an Oracle Database Installation" in Oracle Database Security Guide.

Oracle recommends removing this role after the import is complete. For more information, see "Guidelines for Securing User Accounts and Privileges" in Oracle Database Security Guide.

The following command removes this role:

SQL> REVOKE imp_full_database FROM <OSMCoreSchemaUserName>;
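The grant, import, and revoke sequence above can be wrapped so that the role is held only for the duration of the import, in line with the recommendation to remove it afterward. This is a sketch: grant.sql and revoke.sql are hypothetical one-line scripts containing the GRANT and REVOKE statements shown above, and DRY_RUN=1 only prints the commands instead of executing them.

```shell
#!/bin/sh
DRY_RUN=${DRY_RUN:-1}   # set DRY_RUN=0 to actually execute the commands
run() { if [ "$DRY_RUN" = "1" ]; then echo "WOULD RUN: $*"; else "$@"; fi; }

# grant.sql (hypothetical): GRANT imp_full_database TO <OSMCoreSchemaUserName>;
run sqlplus system @grant.sql

# The import command from the troubleshooting entry above;
# schema and tablespace names are placeholders.
run impdp TargetOSMCoreSchemaUserName DIRECTORY=DATA_PUMP_DIR \
    DUMPFILE=osm_expdp_orders%U.dmp LOGFILE=osm_impdp_orders.log \
    REMAP_SCHEMA=SourceOSMCoreSchemaUserName:TargetOSMCoreSchemaUserName \
    REMAP_TABLESPACE=SourceOSMTablespace:TargetOSMTablespace TRANSFORM=oid:n

# revoke.sql (hypothetical): REVOKE imp_full_database FROM <OSMCoreSchemaUserName>;
run sqlplus system @revoke.sql
```

Keeping the grant and revoke in the same script makes it harder to forget the revoke step after a successful import.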