Running Multidimensional Warehouse Setup Jobs

This chapter provides prerequisites for running multidimensional warehouse (MDW) setup jobs and discusses how to run the setup jobs for the CRM, FMS, HCM, SCM, and Campus Solutions warehouses.

Prerequisites

DataStage folders such as Global_Dimensions_E, Local_Dimensions, OWE_E, and OWS_E1 contain CRM, FMS, HCM, SCM, and CS Warehouse setup jobs.

Because each of these folders contains jobs corresponding to all functional warehouses, you must identify the jobs that relate to your warehouse and delete the unwanted ones. Alternatively, you can create your own master sequencer, drag into it only those jobs that relate to your warehouse, and then run that master sequencer.
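If you want to inventory the delivered jobs before deleting any, you can also list every job in a project from the command line with the dsjob client instead of browsing the Director tree. The following Python sketch is illustrative only; the project name is a placeholder, and dsjob is assumed to be on the path. Note that dsjob addresses jobs by project and job name, so the Director folder (category) a job sits in does not change the command.

    import subprocess

    PROJECT = "EPM_Project"  # placeholder; substitute your DataStage project name

    def list_jobs(project: str) -> list[str]:
        """Return all job names in a project, via 'dsjob -ljobs'."""
        result = subprocess.run(
            ["dsjob", "-ljobs", project],
            capture_output=True, text=True, check=True,
        )
        return [line.strip() for line in result.stdout.splitlines() if line.strip()]

    if __name__ == "__main__":
        # Print only the jobs that appear to belong to one warehouse,
        # for example a CRM-only implementation.
        for job in list_jobs(PROJECT):
            if "CRM" in job.upper():
                print(job)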

Running Customer Relationship Management (CRM) Warehouse Setup Jobs

This section discusses how to run all ETL setup jobs required to implement the CRM Warehouse, in the following order:

  1. CRM - OWS jobs.

  2. Global dimension jobs for CRM.

  3. Local dimension jobs for CRM.

  4. SKU jobs for CRM.

  5. Global - OWE jobs for CRM.

  6. CRM - OWE jobs.

Running CRM - OWS Jobs

The first step in implementing the CRM Warehouse is to run the CRM - OWS jobs. These jobs consist of CRM-specific hash file jobs and OWS jobs. Run the hash file jobs first, as the tables that they load are required to run your standard OWS jobs.

As with most prepackaged jobs, you can use the Master Run Utility to automatically run a set of jobs listed in a flat file on the DataStage Server. The Master Run Utility reads the list of jobs in the specified flat file and triggers them in serial mode, using the dependency logic specified in the input flat file.

See Using the Master Run Utility to Automatically Run Your ETL Jobs.
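The Master Run Utility itself is delivered with the product, but the serial pattern it applies is easy to picture: read the job list from the flat file, run each job to completion, and start the next only if the previous one succeeded. The following Python sketch illustrates that pattern with the dsjob command-line client; the project name and list file are placeholders, not delivered names.

    import subprocess
    import sys

    PROJECT = "EPM_Project"        # placeholder project name
    JOB_LIST = "ows_job_list.txt"  # placeholder flat file, one job name per line

    def run_job(project: str, job: str) -> int:
        # With -wait and -jobstatus, dsjob blocks until the job finishes and
        # its exit code reflects the job's finishing status
        # (typically 1 = finished OK, 2 = finished with warnings).
        return subprocess.run(
            ["dsjob", "-run", "-wait", "-jobstatus", project, job]
        ).returncode

    with open(JOB_LIST) as fh:
        jobs = [line.strip() for line in fh if line.strip()]

    for job in jobs:  # serial mode: one job at a time, in file order
        status = run_job(PROJECT, job)
        print(f"{job} finished with status {status}")
        if status not in (1, 2):
            sys.exit(f"Stopping: {job} did not finish cleanly")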

CRM - OWS Hash File Jobs

Perform the following steps to run the CRM - OWS hash file jobs:

  1. In DataStage Director, navigate to the hash file jobs by expanding the nodes in the left navigation panel using the following path: CRM_E, OWS, Base, Load_Hash_Files, Server.

  2. Select each hash file job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Note. For Enterprise One sources, run the jobs from the OWS_E1, Base, Load_Hash_Files, Server folder.

CRM - OWS Jobs

Perform the following steps to run the CRM - OWS jobs:

  1. In DataStage Director, navigate to the CRM - OWS jobs by expanding the nodes in the left navigation panel using the following path: CRM_E, OWS, Base, Load_Tables, Sequence.

  2. Select each CRM - OWS job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.
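Steps 2 and 3 above can also be performed non-interactively: dsjob -run accepts the same job parameters that the Job Run Options box prompts for. A minimal Python sketch, in which the project, job, and parameter names are placeholders rather than delivered names:

    import subprocess

    PROJECT = "EPM_Project"     # placeholder
    JOB = "J_Stage_PS_EXAMPLE"  # placeholder job name
    PARAMS = {                  # placeholder parameters; use those your job defines
        "FromDate": "2005-01-01",
        "ToDate": "2005-12-31",
    }

    args = ["dsjob", "-run"]
    for name, value in PARAMS.items():
        args += ["-param", f"{name}={value}"]      # one -param flag per parameter
    args += ["-wait", "-jobstatus", PROJECT, JOB]  # block until the job finishes

    print(subprocess.run(args).returncode)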

Running Global Dimension Jobs for CRM

The second step in implementing the CRM Warehouse is to run the global dimension jobs for CRM. These jobs consist of global dimension hash file jobs and global dimension jobs. Run the hash file jobs first, as the tables that they load are required to run your standard global dimension jobs.

Note. You can run global dimension jobs individually or together using the master sequence job.

Global Dimension Hash File Jobs

Perform the following steps to run the global dimension hash file jobs individually:

  1. In DataStage Director, navigate to the global dimension hash file jobs by expanding the nodes in the left navigation panel using the following path: Global_Dimensions_E, OWS_To_MDW, Base, Load_Hash_Files, Server.

  2. Select each global dimension hash file job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Perform the following steps to run the global dimension hash file jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: Global_Dimensions_E, Master_Sequence.

  2. Select the global dimension master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Global Dimension Jobs

Perform the following steps to run the global dimension jobs individually:

  1. In DataStage Director, navigate to the global dimension jobs by expanding the nodes in the left navigation panel using the following path: Global_Dimensions_E, OWS_To_MDW, Base, Load_Tables, Sequence.

  2. Select a global dimension job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Perform the following steps to run the global dimension jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: Global_Dimensions_E, Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.
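Once a master sequence is running, Director shows its status as Running; the same status can be polled from the command line with dsjob -jobinfo. A sketch of that polling loop, with placeholder project and job names:

    import subprocess
    import time

    PROJECT = "EPM_Project"              # placeholder
    MASTER_SEQ = "Global_Dimensions_MS"  # placeholder master sequence job name

    # Start the master sequence without waiting, as Run Now does.
    subprocess.run(["dsjob", "-run", PROJECT, MASTER_SEQ], check=True)

    # Poll until the status reported by -jobinfo is no longer RUNNING.
    while True:
        info = subprocess.run(
            ["dsjob", "-jobinfo", PROJECT, MASTER_SEQ],
            capture_output=True, text=True,
        ).stdout
        status_line = info.splitlines()[0] if info else "(no status returned)"
        print(status_line)
        if "RUNNING" not in status_line.upper():
            break
        time.sleep(30)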

Running Local Dimension Jobs for CRM

The third step in implementing the CRM Warehouse is to run the local dimension jobs for CRM. These jobs consist of local dimension hash file jobs and local dimension jobs. Run the hash file jobs first, as the tables that they load are required to run your standard local dimension jobs.

Note. You can run local dimension jobs individually or together using the master sequence job.

Local Dimension Hash File Jobs

Perform the following steps to run the local dimension hash file jobs individually:

  1. In DataStage Director, navigate to the local dimension hash file jobs by expanding the nodes in the left navigation panel using the following path: CRM_E, Local_Dimensions, OWS_To_MDW, Base, Load_Hash_Files, Server.

  2. Select each local dimension hash file job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Perform the following steps to run the local dimension hash file jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: CRM_E, Local_Dimensions, Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Local Dimension Jobs

Perform the following steps to run the local dimension jobs individually:

  1. In DataStage Director, navigate to the local dimension jobs by expanding the nodes in the left navigation panel using the following path: CRM_E, Local_Dimensions, OWS_To_MDW, Base, Load_Tables, Sequence.

  2. Select each local dimension job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Perform the following steps to run the local dimension jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: CRM_E, Local_Dimensions, Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Running CRM SKU Jobs

The fourth step in implementing the CRM Warehouse is to run the CRM SKU jobs. These jobs consist of hash file jobs, dimension jobs, and fact jobs. Run the hash file jobs first, as the tables that they load are required to run your dimension and fact jobs.

Note. You can run CRM SKU jobs individually or together using the master sequence job.

CRM SKU Hash File Jobs

Perform the following steps to run the CRM SKU hash file jobs individually:

  1. In DataStage Director, navigate to the CRM SKU hash file jobs by expanding the nodes in the left navigation panel using the following path: CRM_E, [SKU/Data Mart Name], [Business Process], OWS_To_MDW, Dimensions, Base, Load_Hash_Files, Server.

  2. Select each hash file job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Perform the following steps to run the CRM SKU hash file jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: CRM_E, [SKU/Data Mart Name], [Business Process], Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

CRM Dimension Jobs

Perform the following steps to run the CRM dimension jobs individually:

  1. In DataStage Director, navigate to the CRM dimension jobs by expanding the nodes in the left navigation panel using the following path: CRM_E, [SKU/Data Mart Name], [Business Process], OWS_To_MDW, Dimensions, Base, Load_Tables, Sequence.

  2. Select each dimension job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Perform the following steps to run the CRM dimension jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: CRM_E, [SKU/Data Mart Name], [Business Process], Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

CRM Fact Jobs

Perform the following steps to run the CRM fact jobs individually:

  1. In DataStage Director, navigate to the CRM fact jobs by expanding the nodes in the left navigation panel using the following path: CRM_E, [SKU/Data Mart Name], [Business Process], OWS_To_MDW, Facts, Base, Load_Tables, Sequence.

  2. Select each fact job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Perform the following steps to run the fact jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: CRM_E, [SKU/Data Mart Name], [Business Process], Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Running Global - OWE Jobs for CRM

The fifth step in implementing the CRM Warehouse is to run the Global - OWE jobs for CRM. These jobs consist of CRM Global - OWE hash file jobs and standard Global - OWE jobs. Run the hash file jobs first, as the tables that they load are required to run your standard CRM Global - OWE jobs.

Global - OWE Hash File Jobs

Perform the following steps to run the Global - OWE hash file jobs individually:

  1. In DataStage Director, navigate to the Global - OWE hash file jobs by expanding the nodes in the left navigation panel using the following path: OWE_E, Global_D00, Base, Load_Hash_Files, Server.

  2. Select each Global - OWE hash file job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Global - OWE Dimension Jobs

Perform the following steps to run the Global - OWE dimension jobs individually:

  1. In DataStage Director, navigate to the Global - OWE dimension jobs by expanding the nodes in the left navigation panel using the following path: OWE_E, Global_D00, Base, Load_Tables, Sequence.

  2. Select each Global - OWE dimension job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Running CRM - OWE Jobs

The final step in implementing the CRM Warehouse is to run the CRM - OWE jobs. These jobs consist of CRM - OWE hash file jobs and standard CRM - OWE jobs. Run the hash file jobs first, as the tables that they load are required to run your standard CRM - OWE jobs.

CRM - OWE Hash File Jobs

Perform the following steps to run the CRM - OWE hash file jobs individually:

  1. In DataStage Director, navigate to the CRM - OWE hash file jobs by expanding the nodes in the left navigation panel using the following path: OWE_E, CRM, Base, Load_Hash_Files, Server.

  2. Select each CRM - OWE hash file job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

CRM - OWE Jobs

Perform the following steps to run the CRM - OWE jobs individually:

  1. In DataStage Director, navigate to the CRM - OWE jobs by expanding the nodes in the left navigation panel using the following path: OWE_E, CRM, Base, Load_Tables, Sequence.

  2. Select each CRM - OWE job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.
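If any job in these sequences aborts, it generally must be reset before it can be rerun. From the command line, dsjob supports this with -mode RESET, the counterpart of resetting the job in Director. A sketch with placeholder names:

    import subprocess

    PROJECT = "EPM_Project"   # placeholder
    JOB = "J_Dim_PS_EXAMPLE"  # placeholder job name

    # Reset the aborted job (equivalent to resetting it in Director)...
    subprocess.run(
        ["dsjob", "-run", "-mode", "RESET", "-wait", PROJECT, JOB], check=True
    )

    # ...then rerun it normally.
    subprocess.run(["dsjob", "-run", "-wait", "-jobstatus", PROJECT, JOB])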

Running Financial Management Solutions (FMS) Warehouse Setup Jobs

This section discusses how to run all ETL setup jobs required to implement the FMS Warehouse, in the following order:

  1. FMS - OWS jobs.

  2. Global dimension jobs for FMS.

  3. Local dimension jobs for FMS.

  4. FMS - SKU jobs.

  5. Global - OWE jobs for FMS.

  6. FMS - OWE jobs.

Running FMS - OWS Jobs

The first step in implementing the FMS Warehouse is to run the FMS - OWS jobs. These jobs consist of FMS-specific hash file jobs and OWS jobs. Run the hash file jobs first, as the tables that they load are required to run your standard OWS jobs.

As with most prepackaged jobs, you can use the Master Run Utility to automatically run a set of jobs listed in a flat file on the DataStage Server. The Master Run Utility reads the list of jobs in the specified flat file and triggers them in serial mode, using the dependency logic specified in the input flat file.

See Using the Master Run Utility to Automatically Run Your ETL Jobs.

FMS - OWS Hash File Jobs

Perform the following steps to run the FMS - OWS hash file jobs:

  1. In DataStage Director, navigate to the hash file jobs by expanding the nodes in the left navigation panel using the following path: FMS_E, OWS, Base, Load_Hash_Files, Server.

  2. Select each FMS - OWS hash file job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Note. For Enterprise One sources, run the jobs from the OWS_E1, Base, Load_Hash_Files, Server folder.

FMS - OWS Jobs

Perform the following steps to run the FMS - OWS jobs:

  1. In DataStage Director, navigate to the FMS - OWS jobs by expanding the nodes in the left navigation panel using the following path: FMS_E, OWS, Base, Load_Tables, Sequence.

  2. Select each FMS - OWS job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Running Global Dimension Jobs for FMS

The second step in implementing the FMS Warehouse is to run the global dimension jobs for FMS. These jobs consist of global dimension hash file jobs and global dimension jobs. Run the hash file jobs first, as the tables that they load are required to run your standard global dimension jobs.

Note. The steps required to run global dimension jobs for FMS are identical to the steps for running global dimension jobs for CRM, described earlier in this chapter.

See Running Global Dimension Jobs for CRM.

Running Local Dimension Jobs for FMS

The third step in implementing the FMS Warehouse is to run the local dimension jobs for FMS. These jobs consist of local dimension hash file jobs and local dimension jobs. Run the hash file jobs first, as the tables that they load are required to run your standard local dimension jobs.

Note. You can run local dimension jobs individually or together using the master sequence job.

Local Dimension Hash File Jobs

Perform the following steps to run the local dimension hash file jobs individually:

  1. In DataStage Director, navigate to the local dimension hash file jobs by expanding the nodes in the left navigation panel using the following path: FMS_E, Local_Dimensions, OWS_To_MDW, Base, Load_Hash_Files, Server.

  2. Select each local dimension hash file job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Perform the following steps to run the local dimension hash file jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: FMS_E, Local_Dimensions, Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Local Dimension Jobs

Perform the following steps to run the local dimension jobs individually:

  1. In DataStage Director, navigate to the local dimension jobs by expanding the nodes in the left navigation panel using the following path: FMS_E, Local_Dimensions, OWS_To_MDW, Base, Load_Tables, Sequence.

  2. Select each local dimension job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Perform the following steps to run the local dimension jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: FMS_E, Local_Dimensions, Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Running FMS SKU Jobs

The fourth step in implementing the FMS Warehouse is to run the FMS SKU jobs. These jobs consist of hash file jobs, dimension jobs, and fact jobs. Run the hash file jobs first, as the tables that they load are required to run your dimension and fact jobs.

Note. You can run FMS SKU jobs individually or together using the master sequence job.

FMS SKU Hash File Jobs

Perform the following steps to run the FMS SKU hash file jobs individually:

  1. In DataStage Director, navigate to the FMS SKU hash file jobs by expanding the nodes in the left navigation panel using the following path: FMS_E, [SKU/Data Mart Name], [Business Process], OWS_To_MDW, Dimensions, Base, Load_Hash_Files, Server.

  2. Select each hash file job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Perform the following steps to run the FMS SKU hash file jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: FMS_E, [SKU/Data Mart Name], [Business Process], Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

FMS Dimension Jobs

Perform the following steps to run the FMS dimension jobs individually:

  1. In DataStage Director, navigate to the FMS dimension jobs by expanding the nodes in the left navigation panel using the following path: FMS_E, [SKU/Data Mart Name], [Business Process], OWS_To_MDW, Dimensions, Base, Load_Tables, Sequence.

  2. Select each dimension job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Perform the following steps to run the FMS dimension jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: FMS_E, [SKU/Data Mart Name], [Business Process], Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

FMS Fact Jobs

Perform the following steps to run the FMS fact jobs individually:

  1. In DataStage Director, navigate to the FMS fact jobs by expanding the nodes in the left navigation panel using the following path: FMS_E, [SKU/Data Mart Name], [Business Process], OWS_To_MDW, Facts, Base, Load_Tables, Sequence.

  2. Select each fact job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Perform the following steps to run the fact jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: FMS_E, [SKU/Data Mart Name], [Business Process], Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Running Global - OWE Jobs for FMS

The fifth step in implementing the FMS Warehouse is to run the Global - OWE jobs for FMS. These jobs consist of FMS Global - OWE hash file jobs and standard Global - OWE jobs. Run the hash file jobs first, as the tables that they load are required to run your standard FMS Global - OWE jobs.

Note. The steps required to run Global - OWE jobs for FMS are identical to the steps for running Global - OWE jobs for CRM, described earlier in this chapter.

See Running Global - OWE Jobs for CRM.

Running FMS - OWE Jobs

The final step in implementing the FMS Warehouse is to run the FMS - OWE jobs. These jobs consist of FMS - OWE hash file jobs and standard FMS - OWE jobs. Run the hash file jobs first, as the tables that they load are required to run your standard FMS - OWE jobs.

FMS - OWE Hash File Jobs

Perform the following steps to run the FMS - OWE hash file jobs individually:

  1. In DataStage Director, navigate to the FMS - OWE hash file jobs by expanding the nodes in the left navigation panel using the following path: OWE_E, FMS, Base, Load_Hash_Files, Server.

  2. Select each FMS - OWE hash file job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

FMS - OWE Jobs

Perform the following steps to run the FMS - OWE jobs individually:

  1. In DataStage Director, navigate to the FMS - OWE jobs by expanding the nodes in the left navigation panel using the following path: OWE_E, FMS, Base, Load_Tables, Sequence.

  2. Select each FMS - OWE job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.
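Before committing to a long load, a job can also be validated without processing data; dsjob -run -mode VALIDATE performs the same check as validating the job in Director (verifying connections and file access). A sketch with placeholder names:

    import subprocess

    PROJECT = "EPM_Project"     # placeholder
    JOB = "J_Stage_PS_EXAMPLE"  # placeholder job name

    # Validate the job: checks run-time prerequisites without loading data.
    rc = subprocess.run(
        ["dsjob", "-run", "-mode", "VALIDATE", "-wait", "-jobstatus", PROJECT, JOB]
    ).returncode
    print(f"validation finished with status {rc}")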

Running Human Capital Management (HCM) Warehouse Setup Jobs

This section discusses how to run all ETL setup jobs required to implement the HCM Warehouse, in the following order:

  1. HCM - OWS jobs.

  2. Global dimension jobs for HCM.

  3. Local dimension jobs for HCM.

  4. HCM - SKU jobs.

  5. Global - OWE jobs for HCM.

  6. HCM - OWE jobs.

Running HCM - OWS Jobs

The first step in implementing the HCM Warehouse is to run the HCM - OWS jobs. These jobs consist of HCM-specific hash file jobs and OWS jobs. Run the hash file jobs first, as the tables that they load are required to run your standard OWS jobs.

As with most prepackaged jobs, you can use the Master Run Utility to automatically run a set of jobs listed in a flat file on the DataStage Server. The Master Run Utility reads the list of jobs in the specified flat file and triggers them in serial mode, using the dependency logic specified in the input flat file.

See Using the Master Run Utility to Automatically Run Your ETL Jobs.

HCM - OWS Hash File Jobs

Perform the following steps to run the HCM - OWS hash file jobs:

  1. In DataStage Director, navigate to the hash file jobs by expanding the nodes in the left navigation panel using the following path: HCM_E, OWS, Base, Load_Hash_Files, Server.

  2. Select each HCM - OWS hash file job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Note. For Enterprise One sources, run the jobs from the OWS_E1, Base, Load_Hash_Files, Server folder.

HCM - OWS Jobs

Perform the following steps to run the HCM - OWS jobs:

  1. In DataStage Director, navigate to the HCM - OWS jobs by expanding the nodes in the left navigation panel using the following path: HCM_E, OWS, Base, Load_Tables, Sequence.

  2. Select each HCM - OWS job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Running Global Dimension Jobs for HCM

The second step in implementing the HCM Warehouse is to run the global dimension jobs for HCM. These jobs consist of global dimension hash file jobs and global dimension jobs. Run the hash file jobs first, as the tables that they load are required to run your standard global dimension jobs.

Note. The steps required to run global dimension jobs for HCM are identical to the steps for running global dimension jobs for CRM, described earlier in this chapter.

See Running Global Dimension Jobs for CRM.

Running Local Dimension Jobs for HCM

The third step in implementing the HCM Warehouse is to run the local dimension jobs for HCM. These jobs consist of local dimension hash file jobs and local dimension jobs. Run the hash file jobs first, as the tables that they load are required to run your standard local dimension jobs.

Note. You can run local dimension jobs individually or together using the master sequence job.

Local Dimension Hash File Jobs

Perform the following steps to run the local dimension hash file jobs individually:

  1. In DataStage Director, navigate to the local dimension hash file jobs by expanding the nodes in the left navigation panel using the following path: HCM_E, Local_Dimensions, OWS_To_MDW, Base, Load_Hash_Files, Server.

  2. Select each local dimension hash file job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Perform the following steps to run the local dimension hash file jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: HCM_E, Local_Dimensions, Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Local Dimension Jobs

Perform the following steps to run the local dimension jobs individually:

  1. In DataStage Director, navigate to the local dimension jobs by expanding the nodes in the left navigation panel using the following path: HCM_E, Local_Dimensions, OWS_To_MDW, Base, Load_Tables, Sequence.

  2. Select each local dimension job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Perform the following steps to run the local dimension jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: HCM_E, Local_Dimensions, Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Running HCM SKU Jobs

The fourth step in implementing the HCM Warehouse is to run the HCM SKU jobs. These jobs consist of hash file jobs, dimension jobs, and fact jobs. Run the hash file jobs first, as the tables that they load are required to run your dimension and fact jobs.

Note. You can run HCM SKU jobs individually or together using the master sequence job.

HCM SKU Hash File Jobs

Perform the following steps to run the HCM SKU hash file jobs individually:

  1. In DataStage Director, navigate to the HCM SKU hash file jobs by expanding the nodes in the left navigation panel using the following path: HCM_E, [SKU/Data Mart Name], [Business Process], OWS_To_MDW, Dimensions, Base, Load_Hash_Files, Server.

  2. Select each hash file job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Perform the following steps to run the HCM SKU hash file jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: HCM_E, [SKU/Data Mart Name], [Business Process], Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

HCM Dimension Jobs

Perform the following steps to run the HCM dimension jobs individually:

  1. In DataStage Director, navigate to the HCM dimension jobs by expanding the nodes in the left navigation panel using the following path: HCM_E, [SKU/Data Mart Name], [Business Process], OWS_To_MDW, Dimensions, Base, Load_Tables, Sequence.

  2. Select each dimension job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Perform the following steps to run the HCM dimension jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: HCM_E, [SKU/Data Mart Name], [Business Process], Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

HCM Fact Jobs

Perform the following steps to run the HCM fact jobs individually:

  1. In DataStage Director, navigate to the HCM fact jobs by expanding the nodes in the left navigation panel using the following path: HCM_E, [SKU/Data Mart Name], [Business Process], OWS_To_MDW, Facts, Base, Load_Tables, Sequence.

  2. Select each fact job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Perform the following steps to run the fact jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: HCM_E, [SKU/Data Mart Name], [Business Process], Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Running Global - OWE Jobs for HCM

The fifth step in implementing the HCM Warehouse is to run the Global - OWE jobs for HCM. These jobs consist of HCM Global - OWE hash file jobs and standard Global - OWE jobs. Run the hash file jobs first, as the tables that they load are required to run your standard HCM Global - OWE jobs.

Note. The steps required to run Global - OWE jobs for HCM are identical to the steps for running Global - OWE jobs for CRM, described earlier in this chapter.

See Running Global - OWE Jobs for CRM.

Running HCM - OWE Jobs

The final step in implementing the HCM Warehouse is to run the HCM - OWE jobs. These jobs consist of HCM - OWE hash file jobs and standard HCM - OWE jobs. Run the hash file jobs first, as the tables that they load are required to run your standard HCM - OWE jobs.

HCM - OWE Hash File Jobs

Perform the following steps to run the HCM - OWE hash file jobs individually:

  1. In DataStage Director, navigate to the HCM - OWE hash file jobs by expanding the nodes in the left navigation panel using the following path: OWE_E, HCM, Base, Load_Hash_Files, Server.

  2. Select each HCM - OWE hash file job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

HCM - OWE Jobs

Perform the following steps to run the HCM - OWE jobs individually:

  1. In DataStage Director, navigate to the HCM - OWE jobs by expanding the nodes in the left navigation panel using the following path: OWE_E, HCM, Base, Load_Tables, Sequence.

  2. Select each HCM - OWE job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.
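After a batch of warehouse jobs completes, the run logs can be reviewed outside Director as well; dsjob -logsum prints a summary of a job's log entries. A sketch with placeholder names:

    import subprocess

    PROJECT = "EPM_Project"    # placeholder
    JOB = "J_Fact_PS_EXAMPLE"  # placeholder job name

    # Print a summary of the job's log entries from its most recent runs.
    summary = subprocess.run(
        ["dsjob", "-logsum", PROJECT, JOB],
        capture_output=True, text=True,
    ).stdout
    for line in summary.splitlines():
        print(line)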

Running Supply Chain Management (SCM) Warehouse Setup Jobs

This section discusses how to run all ETL setup jobs required to implement the SCM Warehouse, in the following order:

  1. SCM - OWS jobs.

  2. Global dimension jobs for SCM.

  3. Local dimension jobs for SCM.

  4. SCM - SKU jobs.

  5. Global - OWE jobs for SCM.

  6. SCM - OWE jobs.

Running SCM - OWS Jobs

The first step in implementing the SCM Warehouse is to run the SCM - OWS jobs. These jobs consist of SCM-specific hash file jobs and OWS jobs. Run the hash file jobs first, as the tables that they load are required to run your standard OWS jobs.

As with most prepackaged jobs, you can use the Master Run Utility to automatically run a set of jobs listed in a flat file on the DataStage Server. The Master Run Utility reads the list of jobs in the specified flat file and triggers them in serial mode, using the dependency logic specified in the input flat file.

See Using the Master Run Utility to Automatically Run Your ETL Jobs.

SCM - OWS Hash File Jobs

Perform the following steps to run the SCM - OWS hash file jobs:

  1. In DataStage Director, navigate to the hash file jobs by expanding the nodes in the left navigation panel using the following path: SCM_E, OWS, Base, Load_Hash_Files, Server.

  2. Select each SCM - OWS hash file job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Note. For Enterprise One sources, run the jobs from the OWS_E1, Base, Load_Hash_Files, Server folder.

SCM - OWS Jobs

Perform the following steps to run the SCM - OWS jobs:

  1. In DataStage Director, navigate to the SCM - OWS jobs by expanding the nodes in the left navigation panel using the following path: SCM_E, OWS, Base, Load_Tables, Sequence.

  2. Select each SCM - OWS job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Running Global Dimension Jobs for SCM

The second step in implementing the SCM Warehouse is to run the global dimension jobs for SCM. These jobs consist of global dimension hash file jobs and global dimension jobs. Run the hash file jobs first, as the tables that they load are required to run your standard global dimension jobs.

Note. The steps required to run global dimension jobs for SCM are identical to the steps for running global dimension jobs for CRM, described earlier in this chapter.

See Running Global Dimension Jobs for CRM.

Running Local Dimension Jobs for SCM

The third step in implementing the SCM Warehouse is to run the local dimension jobs for SCM. These jobs consist of local dimension hash file jobs and local dimension jobs. Run the hash file jobs first, as the tables they load are required to run your standard local dimension jobs.

Note. You can run local dimension jobs individually or together using the master sequence job.

Local Dimension Hash File Jobs

Perform the following steps to run the local dimension hash file jobs individually:

  1. In DataStage Director, navigate to the local dimension hash file jobs by expanding the nodes in the left navigation panel using the following path: SCM_E, Local_Dimensions, OWS_To_MDW, Base, Load_Hash_Files, Server.

  2. Select each local dimension hash file job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Perform the following steps to run the local dimension hash file jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: SCM_E, Local_Dimensions, Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Local Dimension Jobs

Perform the following steps to run the local dimension jobs individually:

  1. In DataStage Director, navigate to the local dimension jobs by expanding the nodes in the left navigation panel using the following path: SCM_E, Local_Dimensions, OWS_To_MDW, Base, Load_Tables, Sequence.

  2. Select each local dimension job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Perform the following steps to run the local dimension jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: SCM_E, Local_Dimensions, Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Running SCM SKU Jobs

The fourth step in implementing the SCM Warehouse is to run the SCM SKU jobs. These jobs consist of hash file jobs, dimension jobs, and fact jobs. Run the hash file jobs first, as the tables that they load are required to run your dimension and fact jobs.

Note. You can run SCM SKU jobs individually or together using the master sequence job.

SCM SKU Hash File Jobs

Perform the following steps to run the SCM SKU hash file jobs individually:

  1. In DataStage Director, navigate to the SCM SKU hash file jobs by expanding the nodes in the left navigation panel using the following path: SCM_E, [SKU/Data Mart Name], [Business Process], OWS_To_MDW, Dimensions, Base, Load_Hash_Files, Server.

  2. Select each hash file job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Perform the following steps to run the SCM SKU hash file jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: SCM_E, [SKU/Data Mart Name], [Business Process], Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

SCM Dimension Jobs

Perform the following steps to run the SCM dimension jobs individually:

  1. In DataStage Director, navigate to the SCM dimension jobs by expanding the nodes in the left navigation panel using the following path: SCM_E, [SKU/Data Mart Name], [Business Process], OWS_To_MDW, Dimensions, Base, Load_Tables, Sequence.

  2. Select each dimension job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Perform the following steps to run the SCM dimension jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: SCM_E, [SKU/Data Mart Name], [Business Process], Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

SCM Fact Jobs

Perform the following steps to run the SCM fact jobs individually:

  1. In DataStage Director, navigate to the SCM fact jobs by expanding the nodes in the left navigation panel using the following path: SCM_E, [SKU/Data Mart Name], [Business Process], OWS_To_MDW, Facts, Base, Load_Tables, Sequence.

  2. Select each fact job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Perform the following steps to run the fact jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: SCM_E, [SKU/Data Mart Name], [Business Process], Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Running Global - OWE Jobs for SCM

The fifth step in implementing the SCM Warehouse is to run the Global - OWE jobs for SCM. These jobs consist of SCM Global - OWE hash file jobs and standard Global - OWE jobs. Run the hash file jobs first, as the tables that they load are required to run your standard SCM Global - OWE jobs.

Note. The steps required to run Global - OWE jobs for SCM are identical to the steps for running Global - OWE jobs for CRM, described earlier in this chapter.

See Running Global - OWE Jobs for CRM.

Running SCM - OWE Jobs

The final step in implementing the SCM Warehouse is to run the SCM - OWE jobs. These jobs consist of SCM - OWE hash file jobs and standard SCM - OWE jobs. Run the hash file jobs first, as the tables that they load are required to run your standard SCM - OWE jobs.

SCM - OWE Hash File Jobs

Perform the following steps to run the SCM - OWE hash file jobs individually:

  1. In DataStage Director, navigate to the SCM - OWE hash file jobs by expanding the nodes in the left navigation panel using the following path: OWE_E, SCM, Base, Load_Hash_Files, Server.

  2. Select each SCM - OWE hash file job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

SCM - OWE Jobs

Perform the following steps to run the SCM - OWE jobs individually:

  1. In DataStage Director, navigate to the SCM - OWE jobs by expanding the nodes in the left navigation panel using the following path: OWE_E, SCM, Base, Load_Tables, Sequence.

  2. Select each SCM - OWE job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.
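If you are unsure which parameters a job will prompt for in the Job Run Options box, dsjob -lparams lists the parameters a job defines. A sketch with placeholder names:

    import subprocess

    PROJECT = "EPM_Project"    # placeholder
    JOB = "J_Fact_PS_EXAMPLE"  # placeholder job name

    # List the job's parameters (the same ones the Job Run Options
    # box prompts for when you run the job interactively).
    params = subprocess.run(
        ["dsjob", "-lparams", PROJECT, JOB],
        capture_output=True, text=True,
    ).stdout
    print(params)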

Running Campus Solutions Warehouse (CSW) Setup Jobs

This section discusses how to run all ETL setup jobs required to implement the CS Warehouse, in the following order:

  1. CSW - OWS jobs.

  2. Global dimension jobs for CSW.

  3. Local dimension jobs for CSW.

  4. CSW - SKU jobs.

  5. Global - OWE jobs for CSW.

  6. CSW - OWE jobs.

Running CSW - OWS Jobs

The first step in implementing the Campus Solutions Warehouse is to run the CSW - OWS jobs. These jobs consist of CSW-specific hash file jobs and OWS jobs. Run the hash file jobs first, as the tables that they load are required to run your standard OWS jobs.

As with most prepackaged jobs, you can use the Master Run Utility to automatically run a set of jobs listed in a flat file on the DataStage Server. The Master Run Utility reads the list of jobs in the specified flat file and triggers them in serial mode, using the dependency logic specified in the input flat file.

See Using the Master Run Utility to Automatically Run Your ETL Jobs.

CSW - OWS Hash File Jobs

Perform the following steps to run the CSW - OWS hash file jobs:

  1. In DataStage Director, navigate to the hash file jobs by expanding the nodes in the left navigation panel using the following path: CS_E, OWS, Base, Load_Hash_Files, Server.

  2. Select each CSW - OWS hash file job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Note. For Enterprise One sources, run the jobs from the OWS_E1, Base, Load_Hash_Files, Server folder.

CSW - OWS Jobs

Perform the following steps to run the CSW - OWS jobs:

  1. In DataStage Director, navigate to the CSW - OWS jobs by expanding the nodes in the left navigation panel using the following path: CS_E, OWS, Base, Load_Tables, Sequence.

  2. Select each CSW - OWS job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Running Global Dimension Jobs for CSW

The second step in implementing the Campus Solutions Warehouse is to run the global dimension jobs for CSW. These jobs consist of global dimension hash file jobs and global dimension jobs. Run the hash file jobs first, as the tables that they load are required to run your standard global dimension jobs.

Note. The steps required to run global dimension jobs for CSW are identical to the steps for running global dimension jobs for CRM, described earlier in this chapter.

See Running Global Dimension Jobs for CRM.

Running Local Dimension Jobs for CSW

The third step in implementing the Campus Solutions Warehouse is to run the local dimension jobs for CSW. These jobs consist of local dimension hash file jobs and local dimension jobs. Run the hash file jobs first, as the tables they load are required to run your standard local dimension jobs.

Note. You can run local dimension jobs individually or together using the master sequence job.

Local Dimension Hash File Jobs

Perform the following steps to run the local dimension hash file jobs individually:

  1. In DataStage Director, navigate to the local dimension hash file jobs by expanding the nodes in the left navigation panel using the following path: CS_E, Local_Dimensions, OWS_To_MDW, Base, Load_Hash_Files, Server.

  2. Select each local dimension hash file job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Perform the following steps to run the local dimension hash file jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: CS_E, Local_Dimensions, Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Local Dimension Jobs

Perform the following steps to run the local dimension jobs individually:

  1. In DataStage Director, navigate to the local dimension jobs by expanding the nodes in the left navigation panel using the following path: CS_E, Local_Dimensions, OWS_To_MDW, Base, Load_Tables, Sequence.

  2. Select each local dimension job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Perform the following steps to run the local dimension jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: CS_E, Local_Dimensions, Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

Running CSW SKU Jobs

The fourth step in implementing the Campus Solutions Warehouse is to run the CSW SKU jobs. These jobs consist of hash file jobs, dimension jobs, and fact jobs. Run the hash file jobs first, as the tables that they load are required to run your dimension and fact jobs.

Note. You can run CSW SKU jobs individually or together using the master sequence job.

CSW SKU Hash File Jobs

Perform the following steps to run the CSW SKU hash file jobs individually:

  1. In DataStage Director, navigate to the CSW SKU hash file jobs by expanding the nodes in the left navigation panel using the following path: CS_E, [SKU/Data Mart Name], [Business Process], OWS_To_MDW, Dimensions, Base, Load_Hash_Files, Server.

  2. Select each hash file job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Perform the following steps to run the CSW SKU hash file jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: CS_E, [SKU/Data Mart Name], [Business Process], Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

CSW Dimension Jobs

Perform the following steps to run the CSW dimension jobs individually:

  1. In DataStage Director, navigate to the CSW dimension jobs by expanding the nodes in the left navigation panel using the following path: CS_E, [SKU/Data Mart Name], [Business Process], OWS_To_MDW, Dimensions, Base, Load_Tables, Sequence.

  2. Select each dimension job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Perform the following steps to run the CSW dimension jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: CS_E, [SKU/Data Mart Name], [Business Process], Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time, and the job’s status is updated to Running.

CSW Fact Jobs

Perform the following steps to run the CSW fact jobs individually:

  1. In DataStage Director, navigate to the CSW fact jobs by expanding the nodes in the left navigation panel using the following path: CS_E, [SKU/Data Mart Name], [Business Process], OWS_To_MDW, Facts, Base, Load_Tables, Sequence.

  2. Select each fact job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.

Perform the following steps to run the fact jobs together using the master sequence job:

  1. In DataStage Director, navigate to the master sequence job by expanding the nodes in the left navigation panel using the following path: CS_E, [SKU/Data Mart Name], [Business Process], Master_Sequence.

  2. Select the master sequence job in the Job Status view and select Job, Run Now... from the menu.

    The Job Run Options box appears.

  3. Update the job parameters if necessary and click Run.

    The job is scheduled to run with the current date and time and the job’s status is updated to Running.