Oracle® Fusion Middleware User's Guide for Oracle Business Intelligence Data Warehouse Administration Console 11g Release 1 (11.1.1) Part Number E14849-06
This chapter describes the functionality available in the DAC tabs, menu commands, and right-click commands.
This chapter contains the following main topics:
This section provides a description of the DAC menu bar commands. It includes the following topics:
Table 16-1 provides a description of the File menu commands.
Table 16-1 File Menu Commands
| Command | Description |
| --- | --- |
| DAC User Management | Enables users with the Administrator role to create, delete, and inactivate user accounts. For more information, see "Managing DAC User Accounts" and "Creating, Deleting, Inactivating User Accounts". |
| Change DAC User Password | Enables users who are currently logged into DAC to change their password. |
| New Source System Container | Enables you to create a new, empty source system container or to make a copy of an existing container. Note: You cannot make any changes to a predefined source system container. You must make a copy of an existing container in order to make changes to it. For more information about managing source system containers, see "About Source System Containers". For instructions on creating or copying a container, see "Creating or Copying a Source System Container". |
| Rename Source System Container | Enables you to rename a source system container. |
| Delete Source System Container | Enables you to delete an existing source system container. |
| Close | Closes the DAC Client. |
Table 16-2 provides a description of the Views menu commands.
Table 16-2 Views Menu Commands
| Command | Description |
| --- | --- |
| Design | The Design view provides access to functionality related to creating and managing subject areas. For more information, see "Design View Tabs". When the Design view is active, the Source System Container drop-down list appears to the right of the View buttons. It enables you to select the source system container that holds the metadata corresponding to a source system. |
| Setup | The Setup view provides access to functionality related to setting up DAC system properties, Informatica services, database connections, email notification, external executors, and patches. For more information, see "Setup View Tabs". |
| Execute | The Execute view provides access to functionality related to setting up, running, monitoring, and scheduling execution plans. For more information, see "Execute View Tabs". |
This section provides a description of the Tools menu commands. It includes the following topics:
This section provides a description of the DAC Repository Management menu commands on the Tools menu.
Enables you to export the DAC metadata, in XML format, based on the source system container, in order to back up the metadata or to reproduce the environment elsewhere. In the Export dialog, you can specify a directory in which to store the XML file or accept the default directory, which is dac\export.
In the Export dialog, you can select the following category options:
Logical. Exports all information contained in the Design view and metadata defined in the Seed Data menu.
Run Time. Exports information about ETL runs and schedules (contained in the Execute view).
System. Exports all information contained in the Setup view, except passwords for servers and database connections.
Update existing records. (Applicable to DAC standalone authentication only) Exports the users, roles, and passwords.
Note: You can move small sets of data, without exporting the entire DAC repository, by using the DAC patching functionality. For more information, see Chapter 12, "Working With DAC Metadata Patches."
Enables you to import the DAC metadata for the source system containers you specify. In the Import dialog, you can specify the following:
Import/Export folder. A directory from which to import the data. The default directory is DAC\export.
Truncate repository tables. Indicates whether you want to truncate the repository tables. If you select this option, the existing metadata is overwritten.
Enable bulk mode. Indicates whether bulk mode is enabled. In bulk mode the imported metadata is inserted into the repository as an array insert. Using this option improves performance. If you do not select this option, DAC upserts records in the repository tables one row at a time.
Update existing records. (Applicable to DAC standalone authentication only) Imports the users, roles, and passwords.
In the Import dialog, you can select the following category options:
Logical. Imports all information contained in the Design view and metadata defined in the Seed Data menu.
Run Time. Imports information about ETL runs and schedules (contained in the Execute view).
System. Imports all information contained in the Setup view, except passwords for servers and database connections.
Update existing records. (Applicable to DAC standalone authentication only) Imports the users, roles, and passwords.
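The performance difference between bulk mode and row-at-a-time upserts can be illustrated with a short sketch. This is not DAC code; the table and column names are invented, and SQLite stands in for the repository database purely to show the two insert strategies.

```python
import sqlite3

# Hypothetical repository table; names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE repo (obj_id INTEGER PRIMARY KEY, name TEXT)")

records = [(1, "Task A"), (2, "Task B"), (3, "Task C")]

def import_bulk(rows):
    # Bulk mode: the whole batch goes in as one array insert, which is
    # fast but assumes the target rows do not already exist (for example,
    # after truncating the repository tables).
    conn.executemany("INSERT INTO repo (obj_id, name) VALUES (?, ?)", rows)

def import_upsert(rows):
    # Non-bulk mode: each record is upserted one row at a time, which is
    # slower but safely updates rows that already exist.
    for obj_id, name in rows:
        conn.execute(
            "INSERT INTO repo (obj_id, name) VALUES (?, ?) "
            "ON CONFLICT(obj_id) DO UPDATE SET name = excluded.name",
            (obj_id, name),
        )

import_bulk(records)
import_upsert([(2, "Task B (renamed)")])
print(conn.execute("SELECT name FROM repo WHERE obj_id = 2").fetchone()[0])
```

The upsert path leaves records 1 and 3 untouched while updating record 2, which is why the non-bulk path is the safe choice when the repository tables were not truncated first.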
Enables you to generate a DAC repository report based on the following criteria:
Table Row Counts
Object References by Entity
Ownerless Objects
Unreferenced Objects
Dead References
The Clean Up command removes unused referenced objects.
The Upgrade/Merge Wizard enables you to upgrade and merge the content of DAC repositories. For more information, see "Upgrading, Comparing and Merging DAC Repositories".
Enables you to apply a DAC metadata patch to the DAC repository. For more information, see "Working With DAC Metadata Patches".
Enables you to purge completed runs from the run history. The last run cannot be purged.
In the Purging Run History... dialog, the following options are available:
Delete all completed runs. Purges all completed runs except for the last run.
Delete completed runs before specified date. Enables you to select a date before which all runs except the last run will be purged.
Keep run definition. Purges all related information about a run but leaves the run header information.
Enables you to run analyze table commands for all the DAC repository tables.
Enables you to specify which databases will be associated with newly created indexes.
Enables you to access the Repository Audit Trail, which stores information about actions performed on the repository, users, machine IP addresses, and repository timestamps. You can also add user-defined audit trail records.
Enables you to drop all the DAC repository tables. This action deletes all data in the repository.
Enables you to change the DAC repository encryption key. See "Changing the DAC Repository Encryption Key" for instructions.
Table 16-3 provides a description of the DAC Server Management menu commands on the Tools menu.
Table 16-3 DAC Server Management Menu Commands
| Command | Description |
| --- | --- |
| DAC Server Setup | Enables you to configure the DAC Server connections and server email settings. This action should be performed on the machine where the DAC Server is running. |
| Start DAC Server | When running the DAC Server in Web mode, allows you to start the DAC Server. |
| Restart DAC Server | When running the DAC Server in Web mode, allows you to restart the DAC Server. |
| Stop DAC Server | When running the DAC Server in Web mode, allows you to stop the DAC Server. |
Table 16-4 provides a description of the ETL Management menu commands on the Tools menu.
Table 16-4 ETL Management Menu Commands
| Command | Description |
| --- | --- |
| Configure | Opens the Data Warehouse Configuration wizard, which enables you to create, upgrade and drop data warehouse tables and to create delete triggers. See Chapter 10, "Managing Data Warehouse Schemas," for more information. |
| Reset Data Sources | Clears the refresh dates for all source and target tables. This action forces a full load to occur during the next ETL process. |
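The rule behind Reset Data Sources can be sketched in a few lines: a null refresh date means a table has never been extracted (or was reset), so the next ETL performs a full load; any other value triggers an incremental load from that date. The function names below are illustrative, not DAC APIs.

```python
from datetime import datetime
from typing import Dict, Optional

def choose_load_type(refresh_date: Optional[datetime]) -> str:
    # A null refresh date forces a full load; otherwise DAC extracts
    # incrementally from the last refresh date.
    return "full" if refresh_date is None else "incremental"

def reset_data_sources(refresh_dates: Dict[str, Optional[datetime]]) -> Dict[str, Optional[datetime]]:
    # Clearing every table's refresh date makes the next ETL a full load.
    return {table: None for table in refresh_dates}

# Hypothetical warehouse tables with and without prior refresh dates.
dates = {"W_ORG_D": datetime(2011, 5, 1), "W_PARTY_D": None}
print(choose_load_type(dates["W_ORG_D"]))    # incremental
dates = reset_data_sources(dates)
print(choose_load_type(dates["W_ORG_D"]))    # full
```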
This section provides a description of the Seed Data menu commands on the Tools menu.
Actions, Index Actions
Enables you to set up index actions in order to trigger SQL scripts to create or drop indexes. See "Using Actions to Optimize Indexes and Collect Statistics on Tables" for more information.
Actions, Table Actions
Enables you to set up table actions in order to trigger SQL scripts to analyze and truncate tables. See "Using Actions to Optimize Indexes and Collect Statistics on Tables" for more information.
Actions, Task Actions
Enables you to set up task actions in order to trigger SQL scripts to perform various actions related to task behavior. See "Using Actions to Optimize Indexes and Collect Statistics on Tables" for more information.
Global External Parameters
Enables you to define a global external parameter. See "Overview of Parameters" and "Defining a Global External Parameter" for more information.
Heuristics
Enables you to define a heuristics rule. See "Using Heuristics to Manage Tasks, Tables and Indexes" for more information.
Logical Data Sources
Enables you to add, edit, or delete logical data sources.
Task Logical Folders
Enables you to add, edit, or delete task logical folders, which are used in the task definition (in the Tasks tab) so that task definitions do not have to be cloned.
Task Phases
Enables you to add, edit, or delete task phases, and to assign a priority to the task phase. A task phase is the phase of the ETL process with which a task is associated. DAC uses the task phase primarily for task dependency generation. The DAC Server uses the task phase to prioritize tasks and to generate a summary of the time taken for each of the phases.
Task Physical Folders
Enables you to add, edit, or delete task physical folders, which correspond to the actual folder in the Informatica repository. The task physical folder name is used in the Ordered Tasks subtab of the Execution Plans tab.
Table 16-5 provides a description of the UI Styles menu commands on the Tools menu.
Table 16-5 UI Styles Menu Commands
| Command | Description |
| --- | --- |
| Windows (MFS) | Changes the user interface to the Windows style. |
| UNIX (MOTIF) | Changes the user interface to the UNIX style. |
| Java (METAL) | Changes the user interface to the Java style. |
The UI Preferences menu command displays the UI Preferences dialog, which contains the following options:
Always Show Last Updated. Displays the Last Updated column in all tabs and subtabs. This column shows the timestamp for when the repository object was last updated.
Always show system columns. Displays system columns in all tabs and subtabs.
Start in Query Mode. Indicates a tab or subtab will start in Query Mode. Select a tab or subtab from the navigation window, and then select the Start in Query Mode check box.
Accessibility Mode Enabled. Enables accessible keyboard shortcuts. For a description of the shortcuts, see "DAC Accessibility Keyboard Options".
Table 16-6 provides a description of the Help menu commands.
Table 16-6 Help Menu Commands
| Command | Description |
| --- | --- |
| Login Details | Provides login and connectivity information for the current session. |
| System Information | Provides system information for the DAC installation. |
| DAC Help | Opens the DAC online help. |
| About DAC | Provides DAC build, schema version, and repository version information. |
Table 16-7 describes the commands available in the top pane toolbar that are common to all views.
Table 16-7 Tools in the Top Pane Toolbar Common to All Views
| Tool | Description |
| --- | --- |
| New | Creates a placeholder for a new record in the selected list. |
| Save | Saves the current record. |
| Undo | Undoes changes made to the current record after the last save. |
| Delete | Deletes the selected record. If you delete a parent record, the child records are also deleted. When you delete a column from a table, the column is not automatically deleted from the index. The DAC does not display deleted objects. You must look at the database to figure out what objects were deleted. |
| Query | Opens a blank query field where you can enter query criteria. |
| Refresh | Retrieves the data from the repository with the last used query. |
| Help | Opens the context-sensitive online help system. |
Table 16-8 describes the commands available in the top pane toolbar when the Design view is active.
Table 16-8 Tools in the Top Pane Toolbar Specific to the Design View
| Command | Description |
| --- | --- |
| Reference | (Design view) Opens the Reference dialog, which enables you to copy objects from one container to another. See "About Object Ownership in DAC" for information about referencing objects. |
| Assemble | (Design view, Subject Areas tab) Assembles a subject area, with dimension and related tables as well as tasks. See Chapter 9, "Designing Subject Areas," for more information. |
| Drop-down list | (Design view) Enables you to filter the source system container objects that appear in the top pane list. |
Table 16-9 describes the commands available in the top pane toolbar when the Setup view is active.
Table 16-9 Tools in the Top Pane Toolbar Specific to the Setup View
| Tool | Description |
| --- | --- |
| Generate | (External Executors tab) Generates the external executor properties required for configuration. See Chapter 14, "Integrating DAC With Other ETL Tools," for more information. |
| Test | (External Executors tab) Tests the configuration of the external executor properties. |
Table 16-10 describes the commands available in the top pane toolbar when the Execute view is active.
Table 16-10 Tools in the Top Pane Toolbar Specific to the Execute View
| Tool | Description |
| --- | --- |
| Build | (Execution Plans tab) Builds the execution plan by assembling subject areas, tasks, indexes, tags, parameters, source system folders, and phases. See "Building and Running Execution Plans" for instructions. |
| Run Now | (Execution Plans tab) Starts a new ETL process. |
| Stop | (Current Runs and Run History tabs) Stops an ETL in progress. All currently running tasks will complete, and queued tasks will stop. The status of the ETL changes to Stopped. |
| Abort | (Current Runs and Run History tabs) Causes an ETL in progress to abort. All currently running tasks will be aborted. The status of queued tasks and the ETL itself will change to Stopped. |
| Restart | (Current Runs and Run History tabs) Restarts the selected ETL after the ETL has failed, stopped, or been aborted. |
| Auto Refresh | (Current Runs tab) Enables you to turn on and off the automatic screen refresh functionality and set the refresh interval. |
The commands available in the right-click menus depend on the tab that is active. For descriptions of the commands, see the following topics:
Table 16-11 describes right-click menu commands common to all tabs.
Table 16-11 Common Right-Click Menu Commands
| Command | Description |
| --- | --- |
| Copy String | Copies the contents of a cell (editable and read-only) to the clipboard. |
| Paste String | Pastes a string from the clipboard into a selected cell that supports a string data type. |
| New | Creates a placeholder for a new record in the selected list. |
| Copy Record | Creates a copy of the selected record, with a unique record ID. The new record is committed to the DAC repository when you click the Save button or click outside the cell. In the Design view tabs (except for the Indices tab), Copy Record copies the selected record and the record's child records. When you copy a subject area, the tables are also copied but the tasks are not copied. You need to use the Assemble command to reassemble the subject area and add tasks to it. In the Design view Indices tab and Setup and Execute views, Copy Record copies only the selected record. |
| Save | Saves the current record. |
| Undo | Undoes the changes made to the current record after the last save. |
| Delete | Deletes the selected record. If you delete a parent record, the child records are also deleted. When you delete a column from a table, the column is not automatically deleted from the index. You must manually delete columns from indexes that were deleted from a table or else the ETL process will fail. The DAC does not display deleted objects. You must look at the database to figure out what objects were deleted. |
| Query | Opens a blank query. |
| Refresh | Retrieves the data from the repository with the last used query. |
| UI Preferences | Enables you to configure the active tab to start in query mode. |
| Output to File | Outputs the contents of the current tab's record list to a text file in the DAC root directory. |
| Record Info | Displays the record's unique ID, object type, current source system, owner source system, and the timestamp for when it was last updated. It also displays the source system lineage and the source systems that reference the object. |
| Update Records | For some columns, enables you to update the column value for each row to a single value. |
This section provides descriptions of the Design view right-click menu commands.
Ownership Right-Click Commands
The Ownership right-click menu contains the following commands:
Reference. Opens the Reference dialog, which enables you to reference objects from one container to another. The reference function works like a symbolic link or shortcut.
Re-Reference. If an object is a referenced object, that is, a reference to an object in another container, and a change is made to the original object's child objects, use this command to import the changes to the referenced object.
Push to References. If an original object is changed, you can use this command to export the changes to all referenced objects' child objects.
De-Clone. When you make changes to a referenced object, the new object is called a clone. This command enables you to revert a cloned object back to its state as a reference.
Compare Clone with Original. Shows the difference between the clone and the base object that was modified.
Re-Assign Record. This command enables you to reassign an object's ownership.
For more information about the ownership of objects, see "About Object Ownership in DAC".
Add Object(s) to Patch
Enables you to add the selected object or all objects in the list to a patch. You specify the patch to which you want to add the object or objects, and then specify whether you want to add to the patch only the parent object or the parent object, child object, and extended properties. For more information about patches, see "Working With DAC Metadata Patches".
Assemble
(Subject Areas tab) Assembles a specified subject area by adding to the subject area the dimension and related tables as well as the associated tasks. For more information, see "Creating a Subject Area".
Generate Index Scripts
(Tables and Indices tabs) Generates drop and create index scripts and analyze tables scripts for all tables that participate in the ETL process. The results are stored in the log\scripts directory.
Generate DW Table Scripts for Oracle
(Tables tab and Target For Tasks (RO) subtabs in the Tables tab) Enables you to generate scripts for creating, upgrading, and dropping data warehouse tables for Oracle database types. For more information, see "Creating, Upgrading or Dropping Subsets of Tables in the Schema".
Related Tasks
(Tables tab) Shows all tasks that directly or indirectly participate in populating the selected table.
Change Capture Scripts
(Tables tab) For Siebel sources only.
Image and Trigger Scripts. Generates change capture scripts for tables with defined image suffixes. The scripts may include delete triggers, create and drop statements for delete triggers, and image tables and their indexes.
View Scripts. Generates change capture view scripts for full or incremental mode for tables that participate in the change capture process. This command can be used for unit testing.
Change Capture SQL. Generates change capture SQL scripts for full or incremental mode for tables that participate in the change capture process. This command can be used for unit testing.
Import from Database
(Tables tab)
Import Database Tables. Imports table definitions from a selected database. This action does not import columns.
Import Indices. Imports index definitions from a selected database for one or more tables as listed in the result of the query.
Import Database Columns. Imports column definitions from a selected database.
See "Adding a New Table and Columns to the Data Warehouse" for more information.
Add Actions
(Tables, Indices, and Tasks tabs) Enables you to add previously defined actions to tables, indices, or tasks. For more information about actions, see "Using Actions to Optimize Indexes and Collect Statistics on Tables".
Add Tables
(Tasks tab)
Source Tables. Enables you to add source tables to a task or tasks, by table name; table name and type; or task name, table name and type.
Target Tables. Enables you to add target tables to a task or tasks, and to specify Truncate Always, Truncate on Full Load, and Analyze Tables properties.
Read from Executor. Enables you to add source and target tables to a task or tasks by reading the tables from an external executor. You can also modify the Type for source tables and the Truncate Always, Truncate for Full Load, and Analyze Tables properties for target tables after they are read and before you save the new records.
The right-click dialogs contain additional instructions for adding source and target tables to tasks.
Import Foreign Keys
(Tables tab) Enables you to import foreign key columns from the data warehouse and associate the columns with foreign key tables. You can enter the table and column information in the text box or import the information from a file. If you select the Clone Referenced Records check box, DAC will make clones of any records that are referenced objects. If you do not select Clone Referenced Records, DAC will not import foreign keys for any referenced object.
Output Task Description
(Tasks tab) Saves to an HTML file the description for a selected task or for all tasks.
Synchronize Tasks
(Tasks tab) Imports the source and target table definitions for a task from Informatica into the DAC repository. You can also choose to activate tables that had previously been inactive or inactivate tables in DAC that are associated with the task but do not appear in Informatica.
Flat Views
Opens a dialog that enables you to query for various objects, modify data, and do mass updates.
Table 16-12 describes the Setup view right-click menu commands.
Table 16-12 Setup View Right-Click Menu Commands
| Command | Description |
| --- | --- |
| Test Connection | In the Physical Data Sources tab, this command tests the database connection. In the Informatica Servers tab, this command tests the connection to the Repository Service and Integration Service. The DAC Server performs this command if the DAC Client is connected to a server. If the DAC Client is not connected to a DAC Server, then the DAC Client performs the command. |
| Patches | (Working Patches tab) |
Table 16-13 describes the Execute view right-click menu commands.
Table 16-13 Execute View Right-Click Menu Commands
| Command | Description |
| --- | --- |
| Run Now | (Execution Plans tab) Runs the selected execution plan. |
| Add Refresh Dates | (Execution Plans tab) Prepopulates tables associated with the selected execution plan in the Refresh Dates subtab of the Physical Data Sources tab. This feature enables you to set or reset refresh dates manually in the Refresh Dates subtab of the Physical Data Sources tab before running an execution plan. Note: The Micro ETL Refresh Dates subtab of the Execution Plans tab is reserved for micro ETL processes. |
| Reset source(s) | (Execution Plans tab) Resets the refresh dates to null for all tables relevant to the execution plan. You can choose to reset refresh dates for tables in one or more source and target data sources. |
| Add Object(s) to Patch | (Execution Plans tab) Enables you to add the selected execution plan or all execution plans in the list to a patch. When you add an execution plan to a patch, only the execution plan parent object is listed in the Contents subtab of the Working Patches and Applied Patches tabs. However, the execution plan child objects will be exported to the patch XML file even though they do not appear in the Contents subtab. For information about which child objects are added to the patch XML file, see "About Patch Contents". |
| Number of Tasks by Depth | (Execution Plans tab) Shows how many tasks are at each depth level for each task execution graph. For more information, see "Performance Tuning the ETL Process Using Tasks by Depth Command". |
| Show Concurrent ETL Dependency | (Execution Plans tab) Shows if two execution plans are sharing any common source or target tables. When there are such instances, DAC will serialize their execution. If there is no commonality, they can be run in parallel. |
| Build | (Execution Plans tab) Builds the execution plan by assembling subject areas, tasks, indexes, tags, parameters, source system folders, and phases. |
| Auto Refresh | (Current Runs tab) Enables you to set an automatic refresh frequency for the selected execution plan. |
| Restart | (Current Runs and Run History tabs) Restarts the selected execution plan. |
| Abort | (Current Runs and Run History tabs) Stops the execution of the selected execution plan and changes the status to Failed. |
| Mark as Completed | (Current Runs and Run History tabs) Changes the status of a stopped or failed ETL to Completed. In the audit trail for this ETL, the status is Marked as Completed. Use this command with caution. It can cause the data warehouse to be inconsistent. |
| Get Run Information, Get Log File | (Current Runs and Run History tabs) Fetches the log file for this run from the DAC Server and saves it in the \ServerLog directory. |
| Get Run Information, Analyze Run | (Current Runs and Run History tabs) Saves a description of the run as an HTML file in the Log\Statistics directory. |
| Get Run Information, Get Chart | (Current Runs and Run History tabs) Displays a chart showing changes in task statuses over time in a separate window. |
| Get Run Information, Get Graph | (Current Runs and Run History tabs) Displays a graph showing changes in task statuses over time in a separate window. |
| Flat Views | Opens a dialog that enables you to query for various objects, modify data, and do mass updates. |
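The rule behind Show Concurrent ETL Dependency reduces to a set-intersection check: two execution plans that touch any common source or target table must be serialized, and otherwise may run in parallel. A minimal sketch, with made-up plan and table names:

```python
# Illustrative only; DAC's actual dependency check is internal.
def can_run_in_parallel(plan_a_tables: set, plan_b_tables: set) -> bool:
    # Plans with no common source or target tables are safe to run
    # concurrently; any shared table forces serialization.
    return plan_a_tables.isdisjoint(plan_b_tables)

plan1 = {"W_ORG_D", "W_DAY_D"}
plan2 = {"W_DAY_D", "W_REVN_F"}
plan3 = {"W_EMPLOYEE_D"}

print(can_run_in_parallel(plan1, plan2))  # False: both touch W_DAY_D
print(can_run_in_parallel(plan1, plan3))  # True: no common tables
```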
Some of the DAC interface tabs have common elements, which are displayed as columns or subtabs. Descriptions of the common elements follow.
Name
The Name column in a tab specifies the name of the database object.
Inactive
The Inactive column indicates whether a database object is active or inactive. You can inactivate a repository object by selecting the Inactive check box. Inactive objects do not participate in the ETL process.
Owner
The Owner column specifies the source system container in which the database object was created. For more information about object ownership, see "About Object Ownership in DAC".
Edit
The Edit subtab enables you to enter definition information for a new object or to edit the existing definition of an object that is selected in the top pane.
Description
The Description subtab displays a description of the object selected in the top pane. You can also edit the existing description of an object or add a new description.
The Design view provides access to functionality related to creating and managing subject areas.
This section describes the Design view interface tabs.
A configuration tag is an object that controls the inclusion of tasks in subject areas. When a task is "tagged" with a configuration tag, it is not eligible to be included in the collection of tasks for any subject area, unless the tag is part of the subject area definition "Include Tasks" property.
For more information, see:
Include Tasks Column
If this check box is selected, the tasks that are assigned to a configuration tag will participate in the ETL process for the subject area to which this configuration tag is assigned.
For example, suppose Configuration Tag 1 is made up of Task 1 and Task 2, and Configuration Tag 1 is assigned to Subject Area 1. Task 1 and Task 2 will be executed when the execution plan for Subject Area 1 is executed, whether or not Task 1 and Task 2 relate to the tables that make up the subject area.
Lists the subject areas that belong to a configuration tag and enables you to add subject areas to a configuration tag.
Configuration Tag Tasks Only Column
This read-only field indicates whether configuration tag tasks are the only tasks associated with this subject area that will participate in the ETL process. If this check box is selected, only the tasks associated with the configuration tag will be chosen by DAC when the subject area is assembled.
See "Working with Configuration Tags" for more information.
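The selection rules above can be summarized as simple set logic. The sketch below is illustrative only; the task names are hypothetical, and DAC's actual assembly logic is internal.

```python
def tasks_for_subject_area(area_tasks, tag_tasks, include_tasks, tag_tasks_only):
    """area_tasks: tasks derived from the subject area's tables;
    tag_tasks: tasks assigned to a configuration tag on the subject area."""
    if tag_tasks_only:
        # "Configuration Tag Tasks Only": only the tagged tasks are chosen
        # when the subject area is assembled.
        return set(tag_tasks)
    if include_tasks:
        # "Include Tasks": tagged tasks participate in addition to the
        # tasks that relate to the subject area's tables.
        return set(area_tasks) | set(tag_tasks)
    # Default: tagged tasks are excluded from the subject area's tasks.
    return set(area_tasks) - set(tag_tasks)

print(sorted(tasks_for_subject_area({"T1", "T2"}, {"T2", "T3"}, True, False)))
# ['T1', 'T2', 'T3']
```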
Enables you to add or remove tasks from the configuration tag selected in the top pane.
For more information, see:
The Indices tab lists all the indexes associated with the selected source system container.
For more information, see:
Note:
It is recommended that you do not register any indexes for source tables.
For Teradata databases, only secondary indexes should be registered in DAC. You should not register primary indexes or the more complex indexes, such as single- and multi-table indexes, because they cannot be dropped and recreated. You can use SQL commands to drop and create such indexes in DAC.
Table Name Column
The table for which an index is created.
Index Usage Column
Specifies the index usage. Possible values are:
ETL. An ETL index is used during the ETL process. DAC creates ETL indexes after the first task runs that writes to the table on which the indexes are created.
Query. A query index is used during the reporting process and not during the ETL process. The last task that reads from or writes to a table creates the query indexes.
During ETL. Like an ETL index, a During ETL index is used during the ETL process. The difference between an ETL index and a During ETL index is that with During ETL indexes you can override when the index is created by assigning it to a specific task. The During ETL index will be created after the task to which you assigned it runs.
If you define a During ETL index, you need to make sure that the index is assigned to a task, or else the index will not be created. You should only assign one During ETL index to one task per execution plan. Only advanced DAC users with a thorough understanding of the execution plans should define During ETL indexes.
For instructions on defining a During ETL Index, see "Defining a During ETL Index".
Databases Column
Lists the databases associated with the selected index.
# Unique Columns Column
For unique indexes, the number of columns that will be unique.
Is Unique Column
Indicates whether the index is unique.
Is Clustered Column
Indicates whether the index is clustered. There can be only one clustered index per table.
Is Bitmap Column
Indicates whether the index is of the bitmap type.
Is Hash Column
Indicates whether the index is of the hash type.
Allow Reverse Scan Column
Applicable only for DB2-UDB databases. The index will be created with the Allow Reverse Scan option.
Always Drop & Create Column
Indicates whether the index will be dropped and created regardless of whether the table is being loaded using a full load or incremental load.
Always Drop & Create Bitmap Column
Indicates whether indexes of the bitmap type will be dropped and created regardless of whether the table is being loaded using a full load or incremental load.
Lists the actions that have been set up for the selected index. For a description of index actions, see the following:
Index actions can override the default index behavior for dropping and creating indexes by mode type (full load, incremental load, or both).
Action Type Column
The index action types are the following:
Create Index
Drop Index
Load Type Column
The load type specifies whether the SQL script is to be called for incremental runs, full runs, or both.
Action Column
The SQL statement or stored procedure that you define. You can define one or more SQL statements or stored procedures for each action.
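Taken together, the Action Type and Load Type columns determine which of an index's action SQL statements run during a given ETL. The sketch below illustrates that gating rule; the action records, SQL text, and function names are invented for the example.

```python
def actions_to_run(actions, current_load):
    """actions: list of (action_type, load_type, sql) tuples;
    current_load: 'full' or 'incremental'."""
    # An action runs when its load type matches the current run, or when
    # it is defined for both load types.
    return [
        (action_type, sql)
        for action_type, load_type, sql in actions
        if load_type in (current_load, "both")
    ]

# Hypothetical index actions for a warehouse table.
actions = [
    ("Drop Index",   "full", "DROP INDEX W_ORG_D_U1"),
    ("Create Index", "full", "CREATE UNIQUE INDEX W_ORG_D_U1 ON W_ORG_D (ROW_WID)"),
    ("Create Index", "both", "CREATE INDEX W_ORG_D_M1 ON W_ORG_D (ORG_NAME)"),
]

print(len(actions_to_run(actions, "incremental")))  # 1: only the "both" action
print(len(actions_to_run(actions, "full")))         # 3: all actions apply
```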
Displays a list of the columns that make up the index.
Position Column
The ordinal position of the column in the index structure.
Sort Order Column
Indicates whether the index sort order is ascending or descending.
The Container Specific SQLs tab lists custom SQL scripts that have been defined for container-specific heuristics rules and task actions.
You also use this tab to define new custom SQL scripts for heuristics rules and task actions.
The drop-down list on the rightmost side of the top pane toolbar enables you to switch between Heuristic custom SQL and Task Action custom SQL.
Heuristic Custom SQL
When you write custom SQL for a heuristics rule, it applies only to the specified source system container. For instructions on defining custom SQL, see "Writing Custom SQL for a Heuristics Rule".
For an overview of DAC heuristics, see "About DAC Heuristics".
Task Action Custom SQL
You define custom SQL for a task action by clicking in the Value field of the Edit subtab. Then, follow steps 4 through 10 in the procedure, "Defining a SQL Script for an Action". After you define a Task Action custom SQL, you assign it to a task using the Edit subtab in the Tasks tab. Select the Execution Type, "Container Specific Task Action," and then click in either the "Command for Incremental Load" or "Command for Full Load" field to display a list of Task Action custom SQLs. Select the appropriate custom SQL, and save the record.
The Source System Folders tab enables you to associate the logical folder in DAC with the physical Informatica folder.
Logical Folder Column
The name of the logical folder in DAC. This name is used in the task definition so that task definitions do not have to be cloned.
Physical Folder Column
The name of the physical Informatica folder. This name represents the actual folder in the Informatica repository. This name is used in the Ordered Tasks subtab of the Execution Plans tab.
The Source Systems Parameters tab holds the source system parameters that apply to all tasks under a source system container. This tab enables you to view and edit existing source system parameters and to define new parameters.
For more information, see:
Data Type Column
The source system parameter data type. For a description of data types, see "Parameter Data Types".
Load Type Column
Specifies whether the parameter applies to full load commands, incremental load commands, or both.
Value Column
The parameter value.
A subject area is a logical grouping of tables related to a particular subject or application context. It also includes the tasks that are associated with the tables, as well as the tasks required to load the tables.
Subject areas are assigned to execution plans, which can be scheduled for full or incremental loads.
The Subject Areas tab lists all the subject areas associated with the selected source system container. It enables you to view and edit existing subject areas and to create new ones.
For more information, see:
Configuration Tag Tasks Only Column
This column indicates whether configuration tag tasks are the only tasks associated with this subject area that will participate in the ETL process. If this check box is selected, only the tasks associated with the configuration tag will be chosen by DAC when the subject area is assembled.
For more information, see:
Last Designed Column
The timestamp indicates when the subject area was last designed.
The Configuration Tags subtab lists the configuration tags that are associated with this subject area and enables you to add new configuration tags to the subject area.
For more information, see:
Include Tasks Column
This read-only field indicates whether the configuration tag tasks will be executed.
Context Disabled Column
When this read-only check box is selected, the configuration tag is globally disabled.
The Extended Tables (RO) subtab is a read-only tab that lists the extended tables associated with the selected subject area.
Included Column
Indicates whether the extended table is included in the selected subject area. If the check box is selected, the extended table will be included in the subject area assembly process.
Lists the tables that are associated with the selected subject area. It enables you to add tables to subject areas or to remove them.
For more information, see:
Name Column
The name of the table associated with the selected subject area.
Lists the tasks associated with the selected subject area. It enables you to add tasks to and remove tasks from a subject area and to inactivate tasks.
When you inactivate a task, it remains inactive even if you reassemble the subject area. When you remove a task from a subject area, it will be added back to the subject area upon reassembly.
For more information, see:
Parent Group Column
If the task belongs to a task group, this column displays the task group name.
Phase Column
The task phase of the ETL process.
Autogenerated Check Box
Indicates whether the task was automatically generated by the DAC's task generation process.
Is Group Check Box
Indicates whether the task belongs to a task group.
Primary Source Column
Connection for the primary source.
Primary Target Column
Connection for the primary target.
The Task Source Tables (RO) subtab opens in query mode and is read only. It enables you to query by task name, table name, table type, or data source for the source tables for the tasks associated with the selected subject area.
The Task Target Tables (RO) subtab opens in query mode and is read only. It enables you to query by task name, table name, table type, or data source for the target tables for the tasks associated with the selected subject area.
From this tab, you can access the Generate DW Table Scripts for Oracle right-click command, which enables you to update a subset of tables in the data warehouse schema for Oracle database types. For more information, see "Creating, Upgrading or Dropping Subsets of Tables in the Schema".
The Tables tab lists the physical database tables defined in the database schema that are associated with the selected source system container. It enables you to view and edit existing tables and to create new ones.
For more information, see:
Warehouse Column
Indicates whether the table is a warehouse table. If this option is not selected, the schema creation process will not include this table.
Image Suffix Column
Suffix for image tables. Applicable only to Siebel source tables. For more information about image tables, see the description of the Change Capture Scripts command in the section "Design View Right-Click Menu Commands".
Is MultiSet Column
Indicates whether the table is a MultiSet table. Applicable only to Teradata databases.
Has Unique Primary Index Column
Indicates whether the table has a Unique Primary Index. Applicable only to Teradata databases.
In Memory Column
Indicates whether the target database type is In Memory. This check box must be selected for tables to be created, updated, or dropped in a TimesTen database. For more information, see Appendix B, "Oracle TimesTen Database Guidelines."
Lists the actions that have been set up for the selected table. For a description of task actions, see the following:
The table action types enable you to trigger SQL scripts to analyze or truncate tables. Table actions for analyzing or truncating tables override all other table properties.
Action Type Column
The default actions on a table during an ETL process are truncating the table and analyzing the table. If you want to override either of these default behaviors, you can define actions by mode (full or incremental). Once an action is defined, it overrides the default behavior.
Note: You can associate an action to multiple objects at the same time. First, identify the objects that you want to associate an action with by using the query functionality. Then, right-click on the results displayed, and select Associate Actions. Next, select an action and the mode you want to associate it with.
The table action types are the following:
Analyze Table
Use this type to analyze tables.
Truncate Table
Use this type to truncate tables.
Load Type Column
The load type specifies whether the SQL script is to be called for incremental runs, full runs, or both.
Action Column
Actions are the SQL statement or stored procedure that you define. You can define one or more SQL statements or stored procedures for each action.
Displays a read-only list of tasks that have defined the selected table as one of their conditional tables.
Task Phase Column
Task phase of the ETL process. This information is primarily used for dependency generation. Certain phases, such as Change Capture and Query Index Creation, are not available for you to assign to a task. The DAC Server uses the task phase to prioritize tasks and to generate a summary of the time taken for each of the phases.
The Columns subtab displays the columns that belong to the selected table.
Position Column
An integer that represents the ordinal position of the column in the table structure.
Data Type Column
The data type of the column. Possible values are the following:
NUMBER. Indicates the data is an integer.
CHAR. Indicates a fixed-length character string. All values stored in a CHAR column have the length specified in the Length column (in the Columns subtab). If you insert a value that is shorter than the column length, blank spaces are added to the value up to the specified column length. If you try to insert a value that is too long for the column, an error is returned.
VARCHAR. Indicates the data is a variable-length string, which can be from 0 to 4000 characters long. You must specify a string length in the Length column.
DATE. A string that represents a date and time.
TIMESTAMP. A string that represents a date, time, and fractional seconds.
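The fixed-length CHAR behavior described above can be sketched in Python. This is a simplified illustration of the semantics, not DAC code; the function name is hypothetical.

```python
def store_char(value: str, length: int) -> str:
    """Model CHAR(length) semantics: pad short values with blanks up to
    the declared column length, and reject values that are too long."""
    if len(value) > length:
        raise ValueError("value too long for column")
    return value.ljust(length)  # pad with blank spaces to the column length

print(repr(store_char("AB", 5)))  # 'AB   '
```

A VARCHAR column, by contrast, stores the value as-is (no padding) up to the declared maximum length.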
Length Column
An integer value that indicates how many characters a column can hold.
Precision Column
An integer value that indicates the total number of digits to the left and right of the decimal point.
Nullable Column
Indicates the column can have a NULL value. "NULL" stands for the absence of a known value. If the Nullable check box is not selected, the column is declared NOT NULL, which means the column must contain a value. When you insert a row without providing an explicit value for a column, the column receives its default value if one is defined; otherwise, it receives NULL if the column is nullable, or an error is generated if the column is declared NOT NULL.
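These NULL, NOT NULL, and default-value rules can be demonstrated with a small sketch using SQLite for illustration. The table and column names are hypothetical; DAC itself targets warehouse databases such as Oracle and Teradata, which follow the same general semantics.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE W_SAMPLE (
    ROW_ID INTEGER NOT NULL,        -- must contain a value
    STATUS TEXT DEFAULT 'NEW',      -- receives 'NEW' when omitted
    NOTES  TEXT                     -- nullable, no default: receives NULL
)""")

# Omitted columns receive their default value, or NULL if nullable.
cur.execute("INSERT INTO W_SAMPLE (ROW_ID) VALUES (1)")
print(cur.execute("SELECT STATUS, NOTES FROM W_SAMPLE").fetchone())  # ('NEW', None)

# Omitting a NOT NULL column that has no default generates an error.
try:
    cur.execute("INSERT INTO W_SAMPLE (STATUS) VALUES ('X')")
except sqlite3.IntegrityError as exc:
    print("error:", exc)
```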
Unicode Column
Indicates whether the column is Unicode.
Foreign Key to Table Column
Indicates the table in which the column is a foreign key.
Foreign Key to Column
Specifies the column that is a foreign key.
Default Value Column
The default value must be the same data type as the column. If the value is a CHAR or VARCHAR type, the value must be enclosed within quotes ("<value>"). If the value is a number, it does not need to be enclosed within quotes.
Teradata Length Column
Applicable to Teradata databases only. Specifies the column length.
Teradata Precision Column
Applicable to Teradata databases only. Specifies the column precision.
Teradata Primary Index Order Column
Applicable to Teradata databases only. On Teradata databases, it is important to have a primary index for every table. This column indicates the position that a given column will have in the primary index. If the value is 0 or empty, the column is not part of a primary index.
For Teradata Statistics Column
Applicable to Teradata databases only. If the check box is selected, DAC will collect statistics on the specified column.
In Memory Column
Indicates whether the target database type is In Memory. This check box must be selected for tables to be created, updated, or dropped in a TimesTen database. For more information, see Appendix B, "Oracle TimesTen Database Guidelines."
In Memory Length Column
Specifies the column length for an In Memory (TimesTen database) column. A value in this field overrides the default column length.
In Memory Precision Column
Specifies the column precision for an In Memory (TimesTen database) column. A value in this field overrides the default column precision.
The Indices (RO) subtab displays a read-only list of indexes that belong to the selected table.
Index Usage Column
Specifies the index usage type as ETL, Query, or During ETL. An ETL index is typically used during the ETL process. A Query index is an index used only during the reporting process. It is recommended that you have a clear understanding of when and where the index will be used at the time you register the index. For a description of the possible values, see "Index Usage Column".
# Unique Columns Column
For unique indexes, the number of columns that will be unique.
Is Unique Column
Indicates whether the index is unique.
Is Clustered Column
Indicates whether the index is clustered. There can be only one clustered index for each table.
Is Bitmap Column
Indicates whether the index is of the bitmap type.
Allow Reverse Scan Column
Applicable only for DB2-UDB databases. The index will be created with the Allow Reverse Scan option.
Applicable to Teradata databases only. Lists the columns for which multi-column statistics are collected.
The Related Tables subtab lists tables that are related to the selected table. Related tables participate in the ETL process in addition to the tables that are associated with this table.
The Source for Tasks (RO) subtab displays a read-only list of tasks that use the selected table as a source.
Task Phase Column
Task phase of the ETL process. This information is primarily used for dependency generation. Certain phases, such as Change Capture and Query Index Creation, are not available for you to assign to a task. The DAC Server uses the task phase to prioritize tasks and to generate a summary of the time taken for each of the phases.
Build Image Column
Applicable for Siebel transactional sources only. Indicates change capture for the primary/auxiliary source tables will be executed.
Type Column
Table type.
Data Source Column
Data source for the task.
The Target for Tasks (RO) subtab displays a read-only list of tasks that use the selected table as a target.
Task Phase Column
Task phase of the ETL process. This information is primarily used for dependency generation. Certain phases, such as Change Capture and Query Index Creation, are not available for you to assign to a task. The DAC Server uses the task phase to prioritize tasks and to generate a summary of the time taken for each of the phases.
Data Source Column
Data source for the task.
Truncate Always Column
Indicates the target table will be truncated regardless of whether a full or incremental load is occurring.
Truncate for Full Load Column
Indicates the target tables will be truncated only when a full load is occurring.
Analyze Tables Column
Indicates the tables will be analyzed during the ETL process.
The Task Groups tab lists all the task groups associated with the selected source system container. It also enables you to create new task groups. Note: A task can belong to only one task group.
For more information, see:
Restart All on Failure Column
Indicates the tasks in this task group will be restarted if one or more tasks fail during an ETL process.
Execute Serially Column
Indicates the tasks in this task group will be executed sequentially. This property overrides the execution order.
Truncate Always Column
Indicates the target tables will be truncated regardless of whether a full or incremental load is occurring. Any indexes registered for the target table are dropped before the command is executed and recreated after the command completes successfully. When indexes are dropped and created, the table is analyzed so that the index statistics are up-to-date. If you select this option, make sure that all the tasks write to the same data source.
Truncate for Full Load Column
Indicates the target tables will be truncated only when a full load is occurring. Any indexes registered for the target table are dropped before the command is executed and recreated after the command completes successfully. When indexes are dropped and created, the table is analyzed so that the index statistics are up-to-date.
The Child Tasks subtab lists the tasks that belong to the selected task group. It also enables you to add child tasks to a task group selected in the top pane.
Task Phase Column
Task phase of the ETL process.
Primary Logical Source Column
Logical database connection for the primary source database.
Primary Logical Target Column
Logical database connection for the primary target database.
Dependency Order Column
The order in which this task will be executed among the tasks in the task group. If two or more tasks have the same execution order and the Execute Serially flag is not checked, DAC will run the tasks in parallel.
Heuristic Driver Column
Indicates this task determines whether the entire task group is executed. You can specify multiple tasks as heuristic drivers. For more information, see "DAC Heuristics and Task Groups".
The Source Tables (RO) subtab lists the tables used for extracting data by the selected task group.
Table Column
Name of source table.
Task Column
Task that extracts data from the table.
Type Column
Source table type. If a table is marked as Primary or Auxiliary and the Build Image property of the task is selected, the change capture process is invoked. There are special tasks that force the base table data to be extracted when data in auxiliary tables changes. A table can be neither Primary nor Auxiliary but still be used for getting some attributes to populate a dimension or fact table. The changes in these kinds of source tables are not reflected in the dimension or fact table once the data is populated.
The Target Tables (RO) subtab is a read-only tab that lists the tables into which the task group loads data.
Table Column
Name of the target table.
Task Column
Task that loads data into the target table.
The Tasks tab lists all the tasks associated with the selected source system container. For more information, see:
Parent Group Column
If the task is a member of a group, this field lists the task group name.
Group Order Column
The order in which the task is defined to execute in a certain group.
Command for Incremental Load Column
A table can be loaded in full mode or incremental mode. Full mode refers to data loaded for the first time or data that is truncated and then loaded. Incremental mode refers to new or changed data being added to the existing data.
DAC maintains a last refresh timestamp whenever a table is changed during the ETL process. (You can view this timestamp by selecting the Refresh Dates subtab on the Physical Data Sources tab.) If a table has a timestamp, the command appearing in this column is executed. If a table does not have a timestamp, the command for a full load is executed. If the execution type is Informatica, the workflow name is used as the command.
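The full-versus-incremental decision described above can be sketched as follows. This is a simplified model; the dictionary keys and field names are hypothetical, not DAC internals.

```python
def choose_command(task, refresh_dates):
    """Pick the full or incremental load command based on whether a
    last refresh timestamp exists for the task's target table."""
    key = (task["target_data_source"], task["target_table"])
    if refresh_dates.get(key) is None:
        return task["command_for_full_load"]      # no timestamp: full load
    return task["command_for_incremental_load"]   # timestamp exists: incremental

task = {
    "target_data_source": "DataWarehouse",
    "target_table": "W_ORDER_F",
    "command_for_full_load": "SDE_OrderFact_Full",
    "command_for_incremental_load": "SDE_OrderFact",
}
print(choose_command(task, {}))  # SDE_OrderFact_Full
print(choose_command(task, {("DataWarehouse", "W_ORDER_F"): "2011-06-01"}))  # SDE_OrderFact
```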
Command for Full Load Column
If a table has no last refresh timestamp, this command is executed.
Logical Folder Column
Only for execution type of Informatica. The folder in which the workflow resides. Note: The name cannot contain spaces.
Primary Source Column
Logical database connection for the primary source database.
Primary Target Column
Logical database connection for the primary target database.
Task Phase Column
Task phase of the ETL process. This information is primarily used for dependency generation. Certain phases, such as Change Capture and Query Index Creation, are not available for you to assign to a task. The DAC Server uses the task phase to prioritize tasks and to generate a summary of the time taken for each of the phases.
Execution Type Column
Tasks are executed based on their execution type. Possible values are the following:
Informatica. Task is invoked on an Informatica Integration Service using pmcmd.
External Program. Task is an operable program on the operating system where the DAC Server is running. This program can be a batch file, shell script, or any other program that can be run like a bulk loader.
SQL File. Task is a SQL script in XML or SQL format.
Stored Procedures. Task is a stored procedure that has been defined on the database.
In addition, there are several internal execution types that you will not be able to select when creating new tasks. Tasks of these types are categorized as either internal change capture tasks or internal data warehouse tasks. All of these tasks are color-coded in pink in the Tasks tab.
IMG_BUILD
Used for internal change capture. If you are using multiple Siebel transactional sources, you cannot change the behavior of the change capture process. This task requires change capture tables to be created on the other sources also. When adding additional Siebel sources, on the Tables tab in the Design view, right-click and select Change Capture Tasks. This action generates change capture tasks. Use this same action to disable or delete change capture tasks.
IMG_SYNC
Used for internal change capture. If you are using multiple Siebel transactional sources, you can create this task for the additional tasks for doing similar change capture sync processes. You cannot change the behavior of the change capture sync process. This task requires change capture tables to be created on the other sources also. This task should be used with discretion for Siebel sources only.
Execution Priority Column
Indicates the order in which the task is executed. If two or more similar tasks (tasks having the same phase, similar truncate properties, the same number of successors, and the same number of source tables) have the same priority, their order of execution is random.
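A sketch of this ordering rule follows. It is illustrative only; the task names are hypothetical, and DAC's actual ordering also weighs phase, truncate properties, successor counts, and source-table counts.

```python
tasks = [
    {"name": "SDE_Assets",   "priority": 10},
    {"name": "SDE_Invoices", "priority": 5},
    {"name": "SDE_Orders",   "priority": 10},
]
# Python's sorted() is stable, so equal-priority tasks keep their input
# order here; in DAC, the order among equal-priority similar tasks is
# effectively random.
ordered = [t["name"] for t in sorted(tasks, key=lambda t: t["priority"])]
print(ordered)  # ['SDE_Invoices', 'SDE_Assets', 'SDE_Orders']
```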
Build Image Column
Applicable for Siebel transactional sources only. Indicates the change capture for the primary/auxiliary source tables will be executed.
Continue on Error Column
When this check box is selected, if the command fails, the dependent tasks are not stopped. However, if any autogenerated tasks fail, the dependent tasks are stopped.
The Actions subtab lists the actions that have been set up for the selected task. For a description of task actions, see the following:
Action Type Column
Action types are predefined categories of task behaviors that trigger the execution of a SQL script. The following types are available:
Preceding Action. Use to execute a SQL script before a task runs.
Success Action. Use to execute a SQL script after a task runs successfully.
Failure Action. Use to execute a SQL script if a task fails during its execution.
Restart Action. Use to execute a SQL script when a task that previously failed is restarted.
Upon Failure Restart Action. Use to execute a SQL script to restart a task that fails.
Load Type Column
The load type specifies whether the SQL script is to be called for incremental runs, full runs, or both.
Action Column
You define the task action in the form of a SQL statement or stored procedure. You can define one or more SQL statements for each action. Double-click in the Action field to open the Choose Action dialog, where you can select the appropriate action.
You define the SQL statement or stored procedure for an action in the Task Actions dialog, which you access by selecting Tools, then Seed Data, then Actions, and then Task Actions. For instructions, see "Defining a SQL Script for an Action".
Lists the tables that, if included in an execution plan, cause the optional task selected in the top pane to be executed. For example, the Order Item fact table is a conditional table associated with the optional task called UpdateRecencyCat in Person Dimension. The UpdateRecencyCat in Person Dimension task is executed only when the Order Item fact table is included in an execution plan.
Lists the configuration tags to which the selected task belongs. It also enables you to associate the selected task with a configuration tag.
For more information, see:
Include Tasks Column
This read-only field indicates whether the configuration tag tasks will be executed.
Context Disabled Column
If this check box is selected, the configuration tag is globally disabled.
Enables you to specify the following extended properties for the task selected in the top pane:
Looping of workflows. Enables you to configure the full and incremental load commands for tasks to repeat (or loop) multiple times during the execution of an ETL process. For instructions, see "Looping of Workflows".
Heuristics. Enables you to assign a heuristics rule to a task. For instructions, see "Associating a Heuristics Rule With a Task".
Lists the During ETL index that is assigned to the selected task. Also enables you to assign a During ETL index to the selected task.
Lists the parameters associated with the selected task and enables you to configure task-level parameters. A task-level parameter takes precedence over a source system parameter with the same name. For more information about parameters, see "Defining and Managing Parameters".
Name Column
Name of the parameter.
Data Type Column
Parameter data type. For a description of data types, see "Parameter Data Types".
Load Type Column
Indicates whether the parameter applies to full load commands, incremental load commands, or both.
Value Column
The parameter value.
Comments Column
A text field in which you can add comments.
Lists the task phase dependencies for the selected task. Also enables you to define task phase dependencies. The DAC Server uses the task phase dependency to prioritize tasks. By changing the phase dependency properties of a task, you change the task's execution order. For instructions on setting a task phase dependency, see "Setting a Task Phase Dependency".
Action Column
The action to be taken in relation to the phase dependency. Possible values are the following:
Wait. Indicates the selected task will wait to be executed until the tasks of a specified phase have been executed.
Block. Indicates the selected task will block all tasks of the specified phase from being executed until it has been executed.
Grain Column
Applicable only for blocks. Enables you to specify whether the action you choose affects all tasks of a specified phase or related tasks. Possible values are the following:
All. Indicates the action will affect all tasks.
Related. Indicates the action will affect only related tasks. You can view a task's related tasks by navigating to the Execution Plans tab, All Dependencies subtab and viewing the specified task's predecessor tasks.
Scope Column
For multi-source execution plans only. Specifies how the Block action of the phase dependency behaves in relation to multi-source execution plans. Possible values are the following:
Both
Indicates the blocking action is active for tasks that have the same source and target physical data source connections.
Source
Indicates the blocking action is active for tasks that have the same source physical data source connection.
Target
Indicates the blocking action is active for tasks that have the same target physical data source connection.
None
Indicates the blocking action is active for all tasks regardless of the source and target physical data source connections.
Phase Column
The ETL phase that will apply to the Action and Grain properties.
Lists the refresh date tables for the selected task. Also enables you to add or remove refresh date tables.
When DAC executes a task, it determines the read and write mode based on the refresh dates that DAC stores for the database connection and the source and target table name combination of that task. However, if you want to determine the mode of both read and write operations based on a table that is not a source or target table, you can define the table as a refresh date table.
The Refresh Dates (RO) subtab is a read-only tab that lists the refresh dates by table for the selected task. For more information about refresh dates, see "About Refresh Dates and DAC's Incremental Load Strategy".
The Source Tables subtab lists the tables from which the selected task extracts data.
Type Column
Table type. Possible values are the following:
Primary
Indicates the table is a primary source of data.
Auxiliary
Indicates the table is a secondary source of data.
Lookup
Indicates the table is a lookup table.
Note:
If a table is marked as Primary or Auxiliary and the Build Image property of the task is selected, the change capture process is invoked. There are special tasks that force the base table data to be extracted when data in auxiliary tables changes.
A table can be neither Primary nor Auxiliary but still be used for getting some attributes to populate a dimension or fact table. The changes in these kinds of source tables are not reflected in the dimension or fact table once the data is populated.
Data Source Column
Data source for the table. When a data source is not specified, the default is the task's primary source.
Alias Column
An alternative name for a source table.
When multiple tasks from different execution plans read from the same source table and write to the same target table, it is necessary to define an alias for one of the source tables in order for DAC to track refresh dates accurately.
DAC stores refresh dates by the combination of data source and table name. If an alias is not defined in situations in which tasks from two different execution plans read from the same source table and write to the same target table, issues with data inconsistency can arise.
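Because refresh dates are keyed by the combination of data source and table name, the alias substitutes a distinct key for one of the tasks, as the following sketch shows. All names are hypothetical.

```python
refresh_dates = {}

def record_refresh(data_source, table, timestamp, alias=None):
    # DAC stores refresh dates by (data source, table name); an alias
    # stands in for the table name, giving each task its own entry.
    refresh_dates[(data_source, alias or table)] = timestamp

record_refresh("SEBL_SRC", "S_ORDER", "2011-01-15")                     # plan A
record_refresh("SEBL_SRC", "S_ORDER", "2011-06-30", alias="S_ORDER_B")  # plan B
print(refresh_dates[("SEBL_SRC", "S_ORDER")])    # 2011-01-15 (plan A preserved)
print(refresh_dates[("SEBL_SRC", "S_ORDER_B")])  # 2011-06-30
```

Without the alias, the second call would overwrite the first entry, and the two execution plans would report inconsistent refresh dates.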
This read-only tab lists the subject areas that are associated with the task selected in the top pane.
The Target Tables subtab lists the tables into which the selected task loads data.
Type Column
Table type.
Data Source Column
Data source for the target table. If no data source is specified, this value defaults to the task's primary target.
Truncate Always Column
Indicates the target tables will be truncated regardless of whether a full or incremental load is occurring. Any indexes registered for this table are dropped before the command is executed and recreated after the command completes successfully. When indexes are dropped and created, the table is analyzed so that the index statistics are up-to-date.
Truncate for Full Load Column
Indicates the target tables will be truncated only when a full load is occurring. Any indexes registered for this table are dropped before the command is executed and recreated after the command completes successfully. When indexes are dropped and created, the table is analyzed so that the index statistics are up-to-date. When the Truncate Always option is selected, this option is unnecessary.
Analyze Column
Indicates the tables will be analyzed during the ETL process.
The Setup View provides access to functionality related to setting up DAC system properties, Informatica servers, database connections, and email notification.
This section describes the Setup view interface tabs.
The Applied Patches tab lists patches that have been applied to the repository.
For more information, see:
Status Column
Status of the applied patch. Possible values are the following:
Completed. All objects in the patch were successfully applied to the DAC repository without errors.
Incomplete. The patch application process completed with one or more errors. Some objects may have been applied successfully to the DAC repository.
Patch Version Column
Patch version of the patch. The DAC automatically assigns the patch version value as 1 and the status as Open. When you change the status from Closed to Open, the patch version is automatically incremented by 1.
Repository Created Time Column
Local time of the machine that hosts the DAC repository when a patch is created.
Last Closed Column
Timestamp of when the patch was last closed.
Applied Time Column
Timestamp of when the patch was last applied.
Last Exported Column
Timestamp of when the patch was last exported.
Deleted From Column
Starting timestamp of the period for which you want to collect objects deleted from the source.
Deleted To Column
Ending timestamp of the period for which you want to collect objects deleted from the source.
Other Instructions Column
Provides a text box where you can enter information and instructions related to the patch.
Application Details Column
Information about the application of the patch to the target repository.
Lists the child patches of the applied patch that is selected in the top pane.
Added to Patch Version Column
Patch version of the patch to which the child patch was added.
Date Added Column
Timestamp of when the child patch was added to the patch.
Status Column
Status of the child patch. Possible values are:
Completed. The patch was imported without any errors.
Failed. An error occurred when the patch was imported, and the process failed.
Not Applied. Either the container does not exist or the user does not have the privileges to save the object in the container.
Comments Column
Provides a text box where you can enter comments related to the patch.
Lists the parent and child objects that make up the contents of the applied patch selected in the top pane.
Type Column
Type of object added as contents to the patch.
Container Column
Parent container of the object added to the patch.
Added to Patch Version Column
Patch version of the patch to which the object was added.
Date Added Column
Timestamp of when the object was added to the patch.
Comments Column
Provides a text box where you can enter comments related to the patch.
Source System Column
Name of the source system the object belongs to.
Lists the objects deleted from the source system that were added to the patch.
Type Column
Type of object deleted from the source system.
Container Column
Parent container of the object deleted from the source system.
Date Deleted Column
Timestamp of when the object was deleted from the source system.
Status Column
Status of the deleted object. Possible values include:
Succeeded. The object was found in the target repository and successfully deleted.
Failed. The object was found in the target repository but was not deleted from the target repository because of an exception or error.
Comments Column
Provides a text box where you can enter comments related to the patch.
Added to Patch Version Column
Patch version of the patch to which deleted objects were added.
Source System Column
Name of the source system the object was deleted from.
The DAC System Properties tab enables you to configure various properties that determine the behavior of the DAC Server.
Allow Clients to Remember User Password
When set to True, this property enables the DAC clients to remember the user's password when the user logs into the DAC repository. User passwords are encrypted and stored in the client machine's file system.
Analyze Frequency (in days)
For DAC repository tables, this property specifies the frequency (in days) at which the DAC Client automatically updates the table and index statistics for the DAC repository. The value must be numerical.
The DAC repository tables need to have table and index statistics kept up to date to maintain the health of the repository.
Analyze Table After Query Index Creation
Possible values are True and False.
If this property is set to True, tables will be analyzed after the Query type indexes have been created (even if the Analyze check box for the target table is deselected).
If this property is set to False, the table is not analyzed (as long as the Analyze check box for the target table is deselected).
Auto Restart ETL
Possible values are True and False.
When this property is set to True, an ETL process that is running when the DAC Server abnormally terminates will continue running when the DAC Server is restarted.
When set to False, an ETL process that is running when the DAC Server abnormally terminates will not automatically restart when the DAC Server restarts. The ETL status will be updated to Failed. An administrator will have to manually restart the ETL.
Concurrency Level
The maximum number of ETL processes that can run in parallel. When this property is set to 1 (the default), the ETL process runs in the same process space as the DAC Server. When this property is set to more than 1, the ETL processes run as separate OS processes. DAC interprets invalid values as 1.
Drop and Create Change Capture Views Always
Possible values are True and False.
When set to True (the default value), the DAC Server drops and creates change capture views every time it performs a change capture process, including for both full and incremental loads.
When set to False, the DAC Server will drop and create views selectively, using the following rules:
In full mode:
During the change capture phase, views will be dropped and created as full views.
During the change capture sync process, incremental views will be generated.
In incremental mode:
If the view exists, it will not be dropped and created.
If the view does not exist, the incremental view will be created.
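The selective rules above amount to a small decision table. The following sketch expresses them as a function; the function name, argument names, and return strings are illustrative only (this is not a DAC API), and the change capture sync step is omitted for simplicity:

```python
def view_action(drop_create_always, mode, view_exists):
    """Return the change capture view action implied by the rules above.

    drop_create_always -- the "Drop and Create Change Capture Views Always"
    system property; mode -- "full" or "incremental"; view_exists -- whether
    the change capture view already exists in the database.
    """
    if drop_create_always:
        # True (the default): views are dropped and created on every
        # change capture, for both full and incremental loads.
        return "drop and create"
    if mode == "full":
        # Change capture phase of a full load: drop and create full views.
        return "drop and create full view"
    # Incremental mode: reuse an existing view, otherwise create one.
    return "none" if view_exists else "create incremental view"
```

For example, with the property set to False during an incremental load, an existing view is left untouched, which avoids repeated DDL against the source database.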
Dryrun
Possible values are True and False.
Indicates whether tasks are executed without invoking Informatica workflows. The following processes are executed: change capture, truncation of tables, drop and creation of indexes, and analyze statements.
This option should be used for debugging purposes only and not used in a production environment.
Generic SQL Concurrency Limit
When the execution plan includes several tasks with the execution type of SQL or Stored Procedure, this property controls how many such tasks will be run concurrently at any given time. The value must be numerical. The changes are effective for the next ETL; restarting the DAC Server is not required.
Generic Task Concurrency Limit
Determines how many tasks with execution types other than Informatica can be run concurrently. The value must be numerical.
To set this value, you should consider what the external tasks do. For example, if the tasks open connections to a database, you should consider how this would affect the predefined tasks.
HeartBeatInterval
The frequency (in seconds) the DAC Server polls all subsystems, including database connections, to check whether they are in a consistent state. If the subsystems are not in a consistent state, recovery and correction mechanisms are triggered. The value must be numerical. For example, a value of 900 (the default value) indicates the system will perform subsystem diagnostics and recovery procedures every 900 seconds.
InformaticaFileParameterLocation
Directory where the Informatica parameter files are stored. This is the directory in which the DAC Server creates the Informatica parameter files and from which Informatica reads the files.
No Run
Possible values are True and False.
When this property is set to True, tasks are generated in the Task Details subtab of the Current Runs tab, but the tasks are not executed and the refresh dates are not updated. Use this property for debugging purposes only.
Output Redirect
Possible values are True and False.
When set to True, logging information and standard output and errors are redirected to files in the log directory. The file containing standard output starts with out_ and ends with the .log extension. The standard error messages are in the file starting with err_ and ending with the .log extension.
If this property is set to False, the logging information is directed to the machine's standard output and error files, which typically defaults to the console from which the DAC Server was launched if the server was launched in a visible console mode. If the server is launched as a Windows service, the logging information is directed to the service log. If the DAC Server was launched with the command shell not visible, all logging information is lost.
Repository DB Pool Size
Indicates the maximum number of connections to the DAC Repository that the DAC Server will maintain.
The idle DAC Server maintains a small pool of pre-existing database connections to the DAC Repository for efficiency and resource optimization. It increases the pool size to the value specified in this property when running a regular ETL process and reverts to the default on completion of the ETL process.
You should consult with the DBA for the DAC Repository database when setting this property.
Repository Name
Unique name for the DAC Repository. This name is displayed on the title bar.
Repository Upgrade Lockdown
Possible values are True and False.
When set to True, the DAC Repository cannot be upgraded to a newer version.
SQL Trace
Possible values are True and False.
When set to True, a hint is sent to the database connectivity layer of the DAC Server to enable SQL tracing. Thus, every SQL statement that is run by the DAC Server is spooled to the appropriate output log file.
It is recommended that you set this property to False, because enabling SQL tracing causes the log files to grow extremely large in a short period.
Scheduler.Poll.Interval
Frequency (in seconds) the DAC Server polls for changes in the schedule configuration.
Script After Every ETL
The name of the script or executable to be run after every execution plan. The file must be placed in the DAC\scripts directory. For example, after running an execution plan, you might want to run a process or perform certain tasks. These can be contained in a script or executable.
Script Before Every ETL
The name of the script or executable to be run before every execution plan. The file must be placed in the DAC\scripts directory. For example, before running an execution plan, you might want to run a process or perform certain tasks. These can be contained in a script or executable.
The execution plan will run only after the external process has finished. Therefore, it is important that the script or executable does not fail.
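Because the execution plan waits for the external process to finish, a pre-ETL script should run quickly and signal problems through its exit code rather than by failing. A minimal sketch of such a script follows; the file name, the `STAGING_DIR` environment variable, and the check it performs are all hypothetical, not part of the DAC product:

```python
# Hypothetical pre-ETL check script to be placed in the DAC\scripts
# directory and referenced by the "Script Before Every ETL" property.
import os

def staging_ready(staging_dir):
    """Return True if the staging directory exists and is writable."""
    return os.path.isdir(staging_dir) and os.access(staging_dir, os.W_OK)

def main():
    # DAC waits for this process to exit before starting the execution
    # plan, so return a nonzero exit code instead of raising an
    # uncaught exception when the precondition is not met.
    return 0 if staging_ready(os.environ.get("STAGING_DIR", ".")) else 1
```

When run as a standalone script, the entry point would be invoked as `sys.exit(main())`, so a failed check surfaces as a nonzero exit code that an operator can act on before restarting the ETL.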
Server Log Level
The output logging level. Possible values are the following:
FINEST
FINER
FINE
CONFIG
INFO
WARNING
SEVERE
The values are case sensitive. The values produce varying degrees of output reporting detail, with SEVERE producing minimal log details and FINEST producing the most extensive amount of reporting.
The recommended value for production environments is INFO.
Test Run
Possible values are True and False.
When set to True, the execution plan will not stop on errors.
Verify and Create Non-Existing Indices
Possible values are True and False.
When set to True, indexes defined in the DAC Repository will be automatically created in the data warehouse database during an incremental load.
Note: When this system property is set to True, the DAC Server verifies whether indexes defined in the DAC Repository are also defined in the data warehouse database. This verification process can delay the execution of an execution plan.
WebClient.Poll.Interval
Frequency (in seconds) the DAC Client polls the DAC Server for its status when the DAC Server is running in Web mode.
WorkerPoolSize
The number of worker threads available in the worker pool.
The worker threads perform operations such as:
drop and create indexes
truncate and analyze tables
execute SQL scripts and Informatica workflows
The value for this property should correspond to the number of task details that are anticipated to run in parallel.
This tab enables you to set up a list of email addresses that will be notified about the status of the ETL process. For instructions, see "Setting Up Email Notifications in the DAC Client and Server".
Name Column
Logical name of the user to be notified.
Email Address Column
Email address where the notification is sent.
Notification Level Column
The notification levels are as follows:
10
Notifies recipient of success or failure of each task.
5
Notifies recipient of success or failure of the entire ETL process.
1
Notifies recipient that ETL completed successfully.
You can configure DAC to integrate with external executors other than Informatica. This tab enables you to view, edit, and create external executor integrations with DAC. See Chapter 14, "Integrating DAC With Other ETL Tools" for more information.
Name Column
Name of the external executor you are integrating with DAC.
Type Column
Execution type for the tasks that will be executed by the external executor. You define the execution type by going to Tools, Seed Data, Execution Type. See "Registering an External Executor in DAC".
The Properties subtab contains the properties that you must configure to integrate DAC with other ETL tools.
For information about External Executor properties, see Chapter 14, "Integrating DAC With Other ETL Tools".
For information about External Executor properties specific to data copy processes, see "Creating an External Executor of Type Data Copy".
Property Order Column
Used to sort the records in the subtab. Does not affect the functionality of the external executor.
Name Column
Name of the property.
Value Column
The value that defines the property.
The Informatica Servers tab enables you to register one or more Informatica Integration Service services and one Informatica Repository Service and to specify how many workflows can be executed in parallel on each Integration Service. The DAC Server automatically load balances across the Integration Service services.
Note:
You can install multiple Informatica Integration Service services and point them to a single Informatica repository. You need to register each Integration Service in DAC and specify a unique machine name and service name. For instructions on registering an Informatica Integration Service and Repository Service in DAC, see "Registering Informatica Services in DAC".
Name Column
Name of the Informatica Integration Service or Repository Service.
Type Column
Type of server.
Informatica
Specifies the Informatica Integration Service.
Repository
Specifies the Informatica Repository Service.
Service Column
The Informatica Integration Service name.
Server Port Column
Port number used by the Informatica Integration Service or Repository Service to listen to requests.
Domain Column
Informatica domain name (as displayed in the Informatica Administrator).
Login Column
Informatica repository user name who has Administrator privileges for the Informatica repository. Note that DAC must log in to Informatica as an Informatica repository Administrator user that is configured in the native security domain.
Password Column
Password for the user specified in the Login field.
Num Parallel Workflows per EP Column
The maximum number of workflows that can be executed in parallel on the Informatica Integration Service.
Repository Name Column
Name of the Informatica Repository Service that manages the Informatica repository for Oracle BI Applications.
Note: You deploy only one Informatica Repository Service, but you can deploy multiple Informatica Integration Service services.
The Physical Data Sources tab lists the connection pool information for the transactional and data warehouse databases. In this tab, you can view and edit existing physical data source connections and create new ones. For instructions on setting up connection pools, see "Setting Up Physical Data Sources".
Name Column
Logical name for the physical data source.
Type Column
Physical data source type. Possible values are the following:
Source
Warehouse
Informatica Repository
DAC Repository
Other
Connection Type Column
Type of database connection. Possible values are the following:
BI Server
For Oracle Fusion Applications only. Indicates the BI Server is registered as a data source.
For instructions on registering the BI Server in DAC, see "Integrating DAC and Oracle BI Server".
Oracle (OCI8)
Connects to an Oracle database using the tnsnames entry.
Oracle (Thin)
Connects to an Oracle database using thin driver.
DB2
DB2 UDB database.
DB2-390
DB2 390 database.
MSSQL
Microsoft SQL Server database.
Teradata
Teradata database.
Flat File
Flat file data source.
Connection String Column
If you are using:
Oracle (OCI8): Use the tnsnames entry.
Oracle (Thin): Use the instance name.
SQL Server: Use the database name.
DB2-UDB/DB2-390: Use the connect string as defined in the DB2 configuration.
Teradata: Use the database name.
Table Owner Column
Name of the table owner.
Num Connections per EP Column
Maximum number of database connections this connection pool can contain.
DBHost Column
Host machine where the database resides. This field is mandatory if you are using Oracle (Thin), MSSQL, or Teradata, but is not required if you are using Oracle (OCI8), DB2, or DB2-390.
Port Column
Port where the database receives requests. Required for Oracle (Thin) and MSSQL databases. Not required for Oracle (OCI8), DB2, DB2-390, or Teradata databases.
Source Priority Column
Indicates the order in which DAC loads data from different sources participating in a multi-source execution plan. This property ensures that tasks attempting to write to the same target table will not be in conflict. If two sources are given the same priority, DAC will randomly stagger the tasks.
Data Source Number Column
User defined number of the data source.
Default Index Space Column
The Default Index Space property specifies the default index space for the physical data source. When indexes are dropped and created, they are created in this index space.
Num Parallel Indexes Per Table Column
Use this field to specify how many indexes are to be created in parallel for each table associated with the specified physical data source connection. For example, if a table with 30 indexes is the only task running in an execution plan at a given time, and you specify that 10 indexes are to be created in parallel, 10 of the 30 indexes will be created in parallel.
The number of indexes that can be created in parallel is limited by the value you set in the Num Connections per EP property.
The Num Parallel Indexes Per Table property specifies the number of parallel indexes DAC will create for all tables associated with a specified physical data source connection. In contrast, you can use the Parallel Indexes subtab of the Physical Data Sources tab to specify the number of parallel indexes DAC will create at the table level.
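The worked example above (30 indexes, 10 in parallel) can be generalized: the effective parallelism for a table is capped by the table's index count, the per-table setting, and the connection pool. The helper below is an illustrative simplification that assumes the whole connection pool is free for the one table; the function and parameter names are hypothetical, not DAC attributes:

```python
def effective_parallel_indexes(table_indexes, per_table_limit, connections_per_ep):
    """Indexes DAC could build concurrently for one table: limited by the
    number of indexes the table has, the Num Parallel Indexes Per Table
    setting, and the Num Connections per EP connection pool size."""
    return min(table_indexes, per_table_limit, connections_per_ep)
```

So with 30 indexes, a per-table setting of 10, and 12 available connections, 10 indexes are built in parallel; shrinking the pool to 6 connections caps the same table at 6.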
JDBC Driver (Optional)
Specifies a JDBC driver for the data source connection. The value in this field must conform to the database specifications.
URL (Optional)
Specifies a JDBC URL for the data source connection. The value in this field must conform to the database specifications.
Lists the values for the Analyze Frequencies attribute by table type.
Enables you to specify the following extended properties for the physical data source selected in the top pane:
Connection pool name is used to register the BI Server as a data source (applies only to Oracle Fusion Applications). For instructions on integrating DAC and the BI Server, see "Integrating DAC and Oracle BI Server".
Event delay is used to configure the extracts for the different data sources in a multi-source environment to occur independently. For instructions on setting up an event delay, see "Setting Up Extract Delays, Event Delays and Data Source Notifications".
Data source usage notifications is used to initiate email notifications about data source usage and to define custom SQL to be executed based on the usage. For instructions on setting up notifications, see "Setting Up Data Source Usage Notifications".
Time Difference Override is used to specify the time difference (in minutes) between the DAC Server machine and the physical data source database machine (if the two machines reside in different time zones). This value is required for certain data sources, such as Oracle Fusion Applications and other unknown data sources, in order for DAC to determine refresh dates.
DAC updates refresh dates for all the source (primary and auxiliary) and target tables after an ETL process runs successfully, based on the ETL start time of the transactional system's database (adjusted to the timestamp of that database). If the DAC Server machine and the transactional system's database are in different time zones, DAC automatically computes the time difference between the DAC Server machine and the transactional system database, except for Oracle Fusion Applications and other unknown data sources. For a detailed description of refresh dates, see "About Refresh Dates and DAC's Incremental Load Strategy".
For Oracle Fusion Applications and other unknown data sources, if the DAC Server machine and the transactional system's database are in different time zones, you must set the Time Difference Override extended property to specify the time difference between the DAC Server machine and the transactional system's database. Once the override is provided, DAC does not issue a SQL to figure out the current timestamp; instead, DAC uses the value for this override to compute the timestamp for refresh date purposes.
For instructions on setting the override extended property for an Oracle Fusion Applications source system, see "Integrating DAC and Oracle BI Server".
For unknown data sources (such as those for which DAC cannot figure out the current timestamp), in the Physical Data Sources tab of the Setup view, set the Connection Type to Flat File, and then define the Time Difference Override extended property in the Extended Properties subtab of the Physical Data Sources tab. Set the value for this property to specify the time difference between the DAC Server machine and the transactional system's database.
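The effect of the override can be sketched as simple timestamp arithmetic. The function below is illustrative only: the sign convention (a positive value meaning the source database is ahead of the DAC Server machine) is an assumption for the example, not documented DAC behavior:

```python
from datetime import datetime, timedelta

def source_timestamp(dac_server_time, override_minutes):
    """Approximate the source-side timestamp DAC would use for refresh
    dates, given the Time Difference Override (in minutes) between the
    DAC Server machine and the transactional database machine."""
    return dac_server_time + timedelta(minutes=override_minutes)
```

For example, an override of 90 shifts a DAC Server time of 12:00 to a source-side refresh timestamp of 13:30, with no SQL issued against the source to obtain its current time.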
Name Column
The name of the DAC extended property.
Value Column
The definition for the extended property.
Enables you to specify index spaces for indexes by table type. For instructions, see "Specifying Index Spaces for Indexes by Table Type".
Table Type Column
Table type for which you want to specify a tablespace.
Index Space Column
Specifies the name of the index space.
Note:
You must create the index space on the database before you specify an index space in DAC.
Enables you to specify how many indexes can be created in parallel for a given table.
Note: Use this property to specify the number of parallel indexes DAC will create for a specific table. Use the Num Parallel Indexes Per Table property in the Physical Data Sources tab to specify the number of parallel indexes DAC will create for all tables associated with a specified physical data source connection.
For more information, see "Specifying How Many Indexes Can Be Created in Parallel".
Name Column
Name of the table on which the indexes will be created.
Number of Parallel Indexes Column
Number of indexes that can be created in parallel for the specified table.
During an ETL process, this date is captured for all target tables and source tables of the type primary and auxiliary. DAC uses this date in the change capture process, during parameter generation, when choosing between full and incremental loads, and when deciding whether to truncate a table. (Does not apply to micro ETL processes.)
For more information about refresh dates, see "About Refresh Dates and DAC's Incremental Load Strategy".
Note:
Refresh dates for micro ETL processes are captured in the Micro ETL Refresh Dates subtab of the Execution Plans tab.
Name Column
Name of source or target table.
Execution Plan Column
The name of the execution plan to which the source or target table belongs.
Refresh Date Column
The refresh date for the source or target table.
Analyze Date Column
Indicates when the table was analyzed.
Number of Rows Column
Valid for target tables only. Indicates the total number of rows in the table after the table has been loaded.
The Working Patches tab lists patches that have the status Open and that have not been applied to the repository. For more information, see "Working With DAC Metadata Patches".
Status Column
Status of the patch listed in the top pane. Possible values are the following:
Open. A patch must have an Open status in order for objects to be added or removed. Open patches cannot be exported.
Closed. When the patch status is changed to Closed, the patch can no longer be edited. You can reopen a Closed patch in order to edit it. Closed patches can be exported.
Patch Version Column
Patch version of the patch. The DAC automatically assigns the patch version value as 1 and the status as Open. When you change the status from Closed to Open, the patch version is automatically incremented by 1.
Repository Created Time Column
Local time of the machine that hosts the DAC repository when a patch is created.
Last Closed Column
Timestamp of when the patch was last closed.
Last Exported Column
Timestamp of when the patch was last exported.
Other Instructions Column
Provides a text box where you can enter information and instructions related to the patch.
Deleted From Column
Starting timestamp of the period for which you want to collect objects deleted from the source.
Deleted To Column
Ending timestamp of the period for which you want to collect objects deleted from the source.
Lists the child patches that have been added to the patch selected in the top pane. For instructions on adding child patches to a patch, see "Creating a DAC Metadata Patch".
Added to Patch Version Column
Patch version of the patch to which the child patch was added.
Date Added Column
Timestamp of when the child patch was added to the patch.
Status Column
Status of the child patch. Possible values are:
Completed. The patch was imported without any errors.
Failed. An error occurred when the patch was imported, and the process failed.
Not Applied. Either the container does not exist or the user does not have the privileges to save the object in the container.
Comments Column
Provides a text box where you can enter comments related to the patch.
Lists the parent and child objects that have been added to the patch selected in the top pane. For instructions on adding contents to a patch, see "Creating a DAC Metadata Patch".
Type Column
Type of object added as contents to the patch.
Container Column
Parent container of the object added to the patch.
Added to Patch Version Column
The patch version of the patch to which the object was added.
Date Added Column
Timestamp of when the object was added to the patch.
Comments Column
Provides a text box where you can enter comments related to the patch.
Source System Column
Name of the source system the object belongs to.
Lists details of the history of the selected patch.
Patch Version Column
The patch version of the patch selected in the top pane.
User Column
The user who performed the action.
Client Start Time Column
Timestamp of the client machine when the action started.
Repository Start Time Column
Timestamp of the DAC repository when the action started.
Repository End Time Column
Timestamp of the DAC repository when the action ended.
Action Column
Action that was performed.
Client Host Column
Name of the client host machine.
Client IP Column
IP address of the client machine.
The Execute View provides access to functionality that enables you to run, schedule, and monitor execution plans.
This section describes the Execute view interface tabs.
The Current Runs tab displays a list of queued, running, and failed current ETL processes. This list includes comprehensive information about each process. Once an ETL process completes, it is accessible from the Run History tab.
Execution Plan Name Column
The execution plan whose runtime instance is this record. This field is read only.
Run Status Column
The status of the run. The possible values are the following:

Value | Description
---|---
Completed | All tasks have completed without errors.
Failed | Tasks were executed but encountered a problem or were killed or aborted.
Not Executed | Tasks were not able to run before the DAC Server stopped.
Paused | Task group members are waiting for the other tasks in the group to be executed.
Queued | Tasks for which the Depends On tasks are not yet completed.
Requeued | Tasks with a previous Failed status are submitted again. Tasks will start running as soon as an Informatica slot is available.
Runnable | Tasks for which the Depends On tasks have completed and are ready to be run but are waiting for an Informatica slot to be available.
Running | Tasks for which the Depends On tasks have been completed, have gotten an Informatica slot, and are being executed.
Stopped | Tasks for which one or more Depends On tasks have failed.
Start Timestamp Column
Start time of the ETL process. Reflects the start time of every ETL attempt. For example, if the ETL fails and is run again, it gets a new start timestamp. The history of attempted runs is maintained in the audit trail for the run. This field is read only.
End Timestamp Column
End time of the ETL process. Reflects the end time of every ETL attempt. For example, if the ETL fails and is run again, it gets a new end timestamp. The history of attempted runs is maintained in the audit trail for the run. This field is read only.
Duration Column
A calculated field that shows the difference between start and end time stamps.
Status Description Column
Displays messages generated during run time. You can add notes to this field for Completed runs.
Process ID Column
ID for the process. This value is generated by DAC and is calculated based on the timestamp of the DAC repository database. This field is read-only.
Total Number of Tasks Column
The total number of tasks for this run. This field is read only.
Number of Failed Tasks Column
The sum total of tasks that have failed and that have stopped. This field is read only.
Number of Successful Tasks Column
The number of tasks whose status is Completed. This field is read only.
Number of Tasks Still in Queue Column
The number of tasks whose prerequisite tasks have not completed, and the number of tasks whose prerequisite tasks are completed and are waiting for resources. This field is read only.
Schedule Name Column
The name of the scheduled ETL process.
OS Process Name Column
The system dependent process name for tasks that run as separate processes, such as occurs when execution plans run concurrently.
The Audit Trail (RO) subtab is a read-only tab that provides the history of the selected run.
Last Updated Column
The date the selected run was last updated.
Start Timestamp Column
Start time of the selected run.
End Timestamp Column
End time of the selected run.
Duration Column
The difference between the start timestamp and the end timestamp of the selected run.
Status Column
Status of the selected run.
The Summary (RO) subtab provides a summary (based on dynamic SQL) of the selected ETL run.
Task Phase Column
The task phase of the selected ETL run.
Start Time Column
Start time of the selected phase of the run.
End Time Column
End time of the selected phase of the run.
Duration Column
The difference between the start timestamp and the end timestamp of the selected phase of the run.
Source System Column
The source system associated with the ETL process.
The Run Type Summary (RO) subtab is a read-only tab that indicates the number of task details by the execution type.
The Tasks subtab displays runtime instances of the tasks. As the execution proceeds, the tasks are executed based on the dependency rules and some prioritization.
As tasks complete, the tasks that depend on the completed tasks are notified and once their dependencies are completed, they become eligible to run. If a task fails, the administrator can address the failure and then requeue the task or mark it as Completed. The DAC Server polls for any changes in the failed task's detail status. If a failed task detail is queued, the task itself gets back into the ready-to-run queue and all its dependent tasks get into the queued status.
The rules of the prioritization are as follows:
Tasks with no dependencies are executed first.
If a task has failed and has been requeued, it gets the maximum priority.
Tasks with greater phase priorities are executed next. When several tasks of the same phase are eligible to run, the tasks with greater task priorities are executed next. The prioritization is also based on the number of dependent tasks, the number of source tables, and the average time taken by a task.
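The prioritization rules above can be sketched as a sort key over runnable tasks. The field names used below (requeued, phase_priority, task_priority, num_dependents) are hypothetical illustrations, not actual DAC repository attributes, and the secondary tie-breakers (source table count, average run time) are omitted:

```python
def priority_key(task):
    """Sort key implementing the stated order: requeued failed tasks
    first, then greater phase priority, then greater task priority,
    then tasks with more dependents."""
    return (
        0 if task.get("requeued") else 1,    # requeued tasks get maximum priority
        -task.get("phase_priority", 0),      # greater phase priority runs sooner
        -task.get("task_priority", 0),       # then greater task priority
        -task.get("num_dependents", 0),      # then more dependent tasks
    )

runnable = [
    {"name": "B", "phase_priority": 2, "task_priority": 1},
    {"name": "A", "requeued": True, "phase_priority": 1, "task_priority": 1},
    {"name": "C", "phase_priority": 2, "task_priority": 5},
]
ordered = sorted(runnable, key=priority_key)  # A first, then C, then B
```

Here the requeued task A jumps ahead despite its lower phase priority, and C precedes B within the same phase because of its greater task priority.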
Depth Column
The level of the task's dependency. Tasks that have no dependencies are depth 0. Tasks that depend on other tasks of depth 0 are depth 1, and so on.
Name Column
The task name.
Mode Column
Indicates whether the task will run a full or incremental load.
Group Column
Indicates the name of the task group, if the task belongs to one.
Task Status Column
Possible task statuses are the following:
Queued. Task is waiting for one or more predecessor tasks to complete. Appears as light yellow.
Runnable. Task is waiting on a resource token to be available. All predecessor tasks have completed. Appears as yellow.
Waiting. Task is eligible to run but is waiting because a time delay for the source was defined in the connectivity parameters or because an event delay was defined at the data source level (see "Setting Up Extract Delays, Event Delays and Data Source Notifications"). Appears as white.
Running. Task obtained a resource token and has started running. Appears as blue.
Paused. Task is a member of a task group and has paused until the child tasks of the task group are completed. Appears as blue.
Completed. All task details of the task have executed successfully. Appears as green.
Failed. One or more of the task details of the task have failed. Appears as red.
Stopped. Task has stopped because one or more predecessor tasks failed. Appears as ochre.
Not Executed. Task has a heuristics definition that prevents it from running (see "Using Heuristics to Manage Tasks, Tables and Indexes"). Appears as white.
Start Timestamp Column
Timestamp of when the task started.
End Timestamp Column
Timestamp of when the task ended.
Duration Column
Indicates how long the task ran.
Status Description Column
Description of the status of the task. For example, if the task failed, this field will indicate the cause of the failure.
Source Database Column
Name of the source database.
Target Database Column
Name of the target database.
Folder Name Column
Name of the Informatica folder in which the task resides.
Task Phase Column
Task phase of the ETL process. The DAC Server uses the task phase to prioritize tasks and to generate a summary of the time taken for each of the phases.
Execution Type Column
Execution type of the task.
Rows Successfully Passed Column
Number of rows that were successfully entered into the data warehouse.
Failed Rows Column
Number of rows that failed.
Source System Column
Source system container from which the task extracts data.
The Task Details subtab opens in query mode. It enables you to query for tasks associated with the selected ETL run in order to view execution details.
The Execution Instances tab enables you to create multiple instances (or copies) of an execution plan. You can then run the parent execution plan and the additional child instances concurrently. For more information about execution instances, see "Running Multiple Instances of an Execution Plan".
Parent Execution Plan Column
Name of the parent execution plan that the execution instance belongs to.
Instance Code Column
An alphanumeric code of one to five characters that is used to identify each child instance. This code is used as the prefix for the Informatica run instance name. This unique identifier enables DAC and Informatica to distinguish between workflow run instances that are running concurrently.
Full Load Always Column
Indicates the specified execution plan will execute a full load regardless of whether the source and target tables have refresh dates.
Micro ETL Column
Indicates an execution plan is a micro ETL execution plan, which is an execution plan that you schedule at very frequent intervals, such as hourly or half-hourly. Refresh dates for micro ETL execution plans are kept separate from the refresh dates for regular execution plans. DAC saves refresh dates for micro ETL execution plans in the Micro ETL Refresh Dates subtab of the Execution Plans tab (Execute view) and refresh dates for regular execution plans in the Refresh Dates subtab of the Physical Data Sources tab (Setup view).
For more information about micro ETL processes, see "About Micro ETL Execution Plans".
Last Designed Column
Indicates the date and time this execution plan was last designed.
Analyze Column
Indicates the tables associated with this execution plan will be analyzed.
Analyze Truncated Tables Only Column
Indicates only truncated tables will be analyzed.
Drop/Create Indices Column
Indicates indexes of the tables associated with this execution plan will be dropped and created.
Overrides Column
Displays the properties that are different between the parent execution plan and the execution instance. The following properties can be customized when creating an execution instance:
Physical data source.
Physical folder.
Context connectivity parameter for external executors. For more information about external executors, see Chapter 14, "Integrating DAC With Other ETL Tools."
The All Dependencies subtab opens in query mode. It enables you to query for tasks that have a dependent relationship. The columns in this tab are the same as those in the Immediate Dependencies subtab.
Note: The All Dependencies information is not exported and, therefore, cannot be imported elsewhere. However, if you want to compute all dependencies, you can do so by right-clicking and selecting Compute All Dependencies.
Depth Column
The level of the task's dependency. Tasks that have no dependencies are depth 0. Tasks that depend on other tasks of depth 0 are depth 1, and so on.
Source Column
Source table for the task.
Target Column
Target table for the task.
Successor Depth Column
Depth of the successor task.
Successor Name Column
Name of the successor task.
Successor Source Column
Source table of the successor task.
Successor Target Column
Target table of the successor task.
You can configure DAC to run multiple execution plans concurrently if the execution plans are independent of one another, that is, as long as the execution plans do not load data into the same table on the same physical data source. To avoid data inconsistency, execution plans that have dependencies with other execution plans cannot run concurrently (unless you override the default behavior).
The Concurrent Dependency subtab enables you to explicitly define execution plans as dependent or independent in relation to other execution plans.
For more information, see "Explicitly Defining Execution Plans as Independent or Dependent".
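The independence check described above amounts to verifying that two execution plans load disjoint sets of (table, physical data source) pairs. The following sketch illustrates the rule; the plan structure shown is an illustrative assumption, not DAC's internal model:

```python
def load_targets(plan):
    """Set of (table, physical data source) pairs the execution plan loads into."""
    return {(t["table"], t["datasource"]) for t in plan["loads"]}

def can_run_concurrently(plan_a, plan_b):
    # Independent only if the plans never load the same table
    # on the same physical data source.
    return load_targets(plan_a).isdisjoint(load_targets(plan_b))
```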
The Connectivity Parameters subtab lists the parameters of the selected execution plan for database connections and Informatica folders.
Copy Number Column
Number of the data source.
Type Column
Possible values are the following:
Folder
Indicates an Informatica folder.
Datasource
Indicates a database connection parameter.
Name Column
Logical name of the folder or database connection.
Value Column
Physical name of the folder or database connection.
Delay Column
Indicates how many minutes an extract of a data source will be delayed after the first extract of a multiple source extract process has started. For more information about multi-source execution plans, see "Setting Up Extract Delays".
Prune Time Column
The Prune Time setting subtracts the number of prune minutes from the LAST_REFRESH_DATE and supplies this value as the value for the $$LAST_EXTRACT_DATE parameter. This property must be set based on experience with processes, such as remote sync, that potentially can cause records to be missed.
Note: Setting the Prune Time period too high can impact performance.
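The Prune Time arithmetic is straightforward date subtraction. The following sketch illustrates how the $$LAST_EXTRACT_DATE value is derived from LAST_REFRESH_DATE:

```python
from datetime import datetime, timedelta

def last_extract_date(last_refresh_date: datetime, prune_minutes: int) -> datetime:
    # $$LAST_EXTRACT_DATE = LAST_REFRESH_DATE minus the prune window, so that
    # records committed late (for example, by remote sync) are still extracted.
    return last_refresh_date - timedelta(minutes=prune_minutes)
```

A larger prune window re-extracts a wider slice of already-processed data on every incremental run, which is why setting it too high can impact performance.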
Source System Column
Name of the source system associated with the parameter.
The Execution Parameters subtab enables you to configure parameters at the execution plan level.
For information about how to configure parameters, see Chapter 8, "Defining and Managing Parameters."
Data Type Column
Parameter data type. See "Parameter Data Types" for a description of the different parameter data types.
Load Type Column
Type of load (full, incremental, or both) during which the execution instance will run.
Value Column
User-defined parameter value.
Comments Column
Text field for entering comments.
The Immediate Dependencies subtab opens in query mode. It enables you to query for immediate dependency relationships between tasks, which are generated during the automatic task generation process.
Depth Column
The level of the task's dependency. Tasks that have no dependencies are depth 0. Tasks that depend on other tasks of depth 0 are depth 1, and so on.
Task Name Column
Name of immediate dependent task.
Source Column
Source table from which the task extracts data.
Target Column
Target table into which the task loads data.
Successor Depth Column
Depth of the successor task.
Successor Name Column
Name of the successor task.
Successor Source Column
Source table of the successor task.
Successor Target Column
Target table of the successor task.
Displays refresh dates for micro ETL execution plans (indicated by selecting the Micro ETL check box in the Execution Plans tab). For more information about micro ETL execution plans, see "About Micro ETL Execution Plans".
Connection Column
Logical name for the database connection.
Refresh Date Column
Last refresh time of the execution plan. This applies only when separate refresh dates are maintained. Used for micro ETL processing.
The Ordered Tasks subtab lists tasks associated with the selected execution plan and the order in which they can be executed.
Depth Column
The level of the task's dependency. Tasks that have no dependencies are depth 0. Tasks that depend on other tasks of depth 0 are depth 1, and so on.
Group Column
Indicates the name of the task group, if the task belongs to one.
Primary Source Column
Primary source table from which the task extracts data.
Primary Target Column
Primary target table into which data is loaded.
Folder Name Column
Name of the Informatica folder in which the task resides.
Task Phase Column
Task phase of the ETL process. The DAC Server uses the task phase to prioritize tasks and to generate a summary of the time taken for each of the phases.
Execution Type Column
The execution type of the task. For more information about execution types, see "Execution Type Column".
Command for Full Load
Command for full load that is associated with the task.
Command for Incremental Load Column
Command for incremental load that is associated with the task.
Source System Column
Source system container from which the task extracts data.
The Subject Areas subtab lists the subject areas associated with the selected execution plan. You can also add subject areas to or remove subject areas from the selected execution plan.
For instructions on adding and removing subject areas, see:
Subject Area Column
Name of the subject area associated with the execution plan.
Source System Column
The source system container associated with the subject area.
The Tables (RO) subtab is a read-only tab that lists all the tables that are touched by the selected execution plan.
The Execution Plans tab enables you to view, edit, and create execution plans. For more information, see "Building and Running Execution Plans".
Full Load Always Column
Indicates the specified execution plan will execute a full load regardless of whether the source and target tables have refresh dates.
Micro ETL Column
Indicates an execution plan is a micro ETL execution plan, which is an execution plan that you schedule at very frequent intervals, such as hourly or half-hourly. Refresh dates for micro ETL execution plans are kept separate from the refresh dates for regular execution plans. DAC saves refresh dates for micro ETL execution plans in the Micro ETL Refresh Dates subtab of the Execution Plans tab (Execute view) and refresh dates for regular execution plans in the Refresh Dates subtab of the Physical Data Sources tab (Setup view).
For more information about micro ETL processes, see "About Micro ETL Execution Plans".
Last Designed Column
Indicates the date and time this execution plan was last designed.
Analyze Column
Indicates the tables associated with this execution plan will be analyzed.
Analyze Truncated Tables Only Column
Indicates only truncated tables will be analyzed.
Drop/Create Indices Column
Indicates indexes of the tables associated with this execution plan will be dropped and created.
Run Now Button Column
The Run Now button submits a request to the DAC Server to execute the execution plan.
Build Button
The Build button does the following:
Collects all the tasks from all the subject areas.
Substitutes the logical values with the physical values as defined in the Connectivity Parameters subtab (primary source/target connection and folder information).
Removes any redundancies based on the task name and primary source/target connection information.
Creates multiple instances of the same task to accommodate the number of copies of the parameters.
Produces the ordered list of tasks with the dependencies computed among them.
For more information about building an execution plan, see "Building and Running Execution Plans".
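The build steps above can be sketched as a small pipeline. The data shapes used here are illustrative assumptions, not DAC's actual repository schema:

```python
def build_execution_plan(subject_areas, logical_to_physical):
    # 1. Collect all the tasks from all the subject areas.
    tasks = [dict(t) for sa in subject_areas for t in sa["tasks"]]

    # 2. Substitute logical source/target values with physical values,
    #    as defined in the Connectivity Parameters subtab.
    for t in tasks:
        t["source"] = logical_to_physical[t["source"]]
        t["target"] = logical_to_physical[t["target"]]

    # 3. Remove redundancies based on task name and primary
    #    source/target connection information.
    unique = {(t["name"], t["source"], t["target"]): t for t in tasks}
    return list(unique.values())
```

The remaining build steps, creating multiple task instances for parameter copies and ordering the result by dependency, would operate on the deduplicated list this sketch returns.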
The All Dependencies subtab opens in query mode. It enables you to query for tasks that have a dependent relationship. The columns in this tab are the same as those in the Immediate Dependencies subtab.
Note: The All Dependencies information is not exported and, therefore, cannot be imported elsewhere. However, if you want to compute all dependencies, you can do so by right-clicking and selecting Compute All Dependencies.
Depth Column
The level of the task's dependency. Tasks that have no dependencies are depth 0. Tasks that depend on other tasks of depth 0 are depth 1, and so on.
Source Column
Source table for the task.
Target Column
Target table for the task.
Successor Depth Column
Depth of the successor task.
Successor Name Column
Name of the successor task.
Successor Source Column
Source table of the successor task.
Successor Target Column
Target table of the successor task.
You can configure DAC to run multiple execution plans concurrently if the execution plans are independent of one another, that is, as long as the execution plans do not load data into the same table on the same physical data source. To avoid data inconsistency, execution plans that have dependencies with other execution plans cannot run concurrently (unless you override the default behavior).
The Concurrent Dependency subtab enables you to explicitly define execution plans as dependent or independent in relation to other execution plans.
For more information, see "Explicitly Defining Execution Plans as Independent or Dependent".
The Connectivity Parameters subtab lists the parameters of the selected execution plan for database connections and Informatica folders.
Copy Number Column
Number of the data source.
Type Column
Possible values are the following:
Folder
Indicates an Informatica folder.
Datasource
Indicates a database connection parameter.
Name Column
Logical name of the folder or database connection.
Value Column
Physical name of the folder or database connection.
Delay Column
Indicates how many minutes an extract of a data source will be delayed after the first extract of a multiple source extract process has started. For more information about multi-source execution plans, see "Setting Up Extract Delays".
Prune Time Column
The Prune Time setting subtracts the number of prune minutes from the LAST_REFRESH_DATE and supplies this value as the value for the $$LAST_EXTRACT_DATE parameter. This property must be set based on experience with processes, such as remote sync, that potentially can cause records to be missed.
Note: Setting the Prune Time period too high can impact performance.
Source System Column
Name of the source system associated with the parameter.
The Execution Instances subtab (RO) is a read-only list of the execution plan's child instances.
The Execution Parameters subtab enables you to configure parameters at the execution plan level.
For information about how to configure parameters, see Chapter 8, "Defining and Managing Parameters."
Data Type Column
Parameter data type. See "Parameter Data Types" for a description of the different parameter data types.
Load Type Column
Type of load (full, incremental, or both) during which the execution instance will run.
Value Column
User-defined parameter value.
Comments Column
Text field for entering comments.
Enables you to add tasks that will run after the ordered tasks of an execution plan run. Also lists all the user-specified following tasks for an execution plan.
Execution Priority Column
Indicates the order in which this task is executed among the following tasks.
Command for Incremental Load Column
Command for incremental load that is associated with the task.
Source System Column
Source system container from which the task extracts data.
The Immediate Dependencies subtab opens in query mode. It enables you to query for immediate dependency relationships between tasks, which are generated during the automatic task generation process.
Depth Column
The level of the task's dependency. Tasks that have no dependencies are depth 0. Tasks that depend on other tasks of depth 0 are depth 1, and so on.
Task Name Column
Name of immediate dependent task.
Source Column
Source table from which the task extracts data.
Target Column
Target table into which the task loads data.
Successor Depth Column
Depth of the successor task.
Successor Name Column
Name of the successor task.
Successor Source Column
Source table of the successor task.
Successor Target Column
Target table of the successor task.
The Ordered Tasks subtab lists tasks associated with the selected execution plan and the order in which they can be executed.
Depth Column
The level of the task's dependency. Tasks that have no dependencies are depth 0. Tasks that depend on other tasks of depth 0 are depth 1, and so on.
Group Column
Indicates the name of the task group, if the task belongs to one.
Primary Source Column
Primary source table from which the task extracts data.
Primary Target Column
Primary target table into which data is loaded.
Folder Name Column
Name of the Informatica folder in which the task resides.
Task Phase Column
Task phase of the ETL process. The DAC Server uses the task phase to prioritize tasks and to generate a summary of the time taken for each of the phases.
Execution Type Column
The execution type of the task. For more information about execution types, see "Execution Type Column".
Source System Column
Source system container from which the task extracts data.
Details Button on Subtab Toolbar
The Details command displays detailed information about the task, including predecessor and successor tasks, and source, target, conditional, and refresh date tables.
Unit Test Button on Subtab Toolbar
The Unit Test command provides a summary of the details of what gets executed when the particular task runs as part of the execution plan. It includes the following:
Pre-ETL actions
Upon Failure Restart actions
Truncate table information
List of indexes to be dropped and created
Full or incremental command based on the current situation
Tables to be analyzed
Upon Success action
Upon Failure action
Enables you to add tasks that will run before the ordered tasks of an execution plan run. Also lists all the user-specified preceding tasks for an execution plan. For instructions on adding preceding tasks to an execution plan, see "Building and Running Execution Plans".
Name Column
Name of task.
Execution Priority Column
Indicates the order in which the task is executed. If two or more tasks have the same priority, DAC will execute them in parallel.
Command for Incremental Load Column
Command for incremental load that is associated with the task.
Source System Column
Source system container from which the task extracts data.
Displays refresh dates for micro ETL execution plans (indicated by selecting the Micro ETL check box in the Execution Plans tab). For more information about micro ETL execution plans, see "About Micro ETL Execution Plans".
Connection Column
Logical name for the database connection.
Refresh Date Column
Last refresh time of the execution plan. This applies only when separate refresh dates are maintained. Used for micro ETL processing.
The Subject Areas subtab lists the subject areas associated with the selected execution plan. You can also add subject areas to or remove subject areas from the selected execution plan.
For instructions on adding and removing subject areas, see:
Subject Area Column
Name of the subject area associated with the execution plan.
Source System Column
The source system container associated with the subject area.
The Tables (RO) subtab is a read-only tab that lists all the tables that are touched by the selected execution plan.
Type Column
Table type.
Connection Column
Database connection.
Table Column
Table name.
The Run History tab displays information about completed ETL processes. The information displayed in the top and bottom windows is the same as that in the Current Runs tab.
For a description of the information in the Run History tab, see "Current Runs Tab".
See "Current Runs Tab: Audit Trail (RO) Subtab".
See "Current Runs Tab: Phase Summary (RO) Subtab".
See "Current Runs Tab: Run Type Summary (RO)".
See "Current Runs Tab: Tasks Subtab".
The Task Details subtab opens in query mode. It enables you to query for tasks associated with the selected ETL run in order to view execution details.
The Scheduler tab enables you to schedule ETL processes to be executed either once or periodically. When you schedule an ETL or make changes to a schedule, the DAC Server picks up the information from the DAC Client. The DAC Server polls the DAC repository for changes periodically at a frequency set in the DAC System Properties tab.
The top pane of the Scheduler tab lists ETL runs that have been scheduled. The bottom window enables you to schedule an ETL run.
For instructions on scheduling ETL processes, see "Scheduling an Execution Plan".
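The polling behavior can be illustrated as a single polling pass over the scheduled runs. This is a simplified sketch, not the DAC Server's actual implementation; the `Schedule` class is a hypothetical stand-in for a scheduled ETL entry:

```python
class Schedule:
    def __init__(self, name, next_trigger):
        self.name = name
        self.next_trigger = next_trigger  # epoch seconds of the next scheduled run

def poll_once(schedules, now):
    """One polling pass: return the schedules whose trigger time has passed.

    The DAC Server runs a pass like this against the DAC repository at the
    frequency configured in the DAC System Properties tab.
    """
    return [s for s in schedules if s.next_trigger <= now]
```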
Execution Plan Column
The name of the scheduled execution plan.
Last Schedule Status Column
The last run status of the scheduled ETL process. Possible values are Running, Completed, or Stopped.
Next Trigger Column
Time the scheduled ETL run will next be executed.
Status Description Column
Description of the last ETL run. Possible values are Running, Completed, or the reason the process stopped.
Recurrence Column
Indicates how often the schedule will be executed.