Task Component Parameters

Components are individual functional units that are put together to form a process. Task Component Parameters reflect the parameters that are being applied to the selected task. Each component triggers its own set of processes in the back-end to achieve the final output.

The parameters required for each of the component IDs are tabulated in the following sections.

 

NOTE

The FIRERUN Component in ICC is not supported.

 

Component: AGGREGATE DATA

Property

Description

Cube Parameter

Refers to the cube identifier as defined through the Business Metadata (Cube) menu option. Select the cube code from the drop-down list.

Operation

Select the operation to be performed from the drop-down list. The available options are ALL, GENDATAFILES, and GENPRNFILES.

Optional parameters

Refers to the additional parameter that has to be processed during runtime. You can specify the runsk value that should be processed as a runtime parameter during execution. By default, the value is set to “null”.

 

Component: CREATE CUBE

Field

Description

Cube Parameter

Refers to the cube identifier as defined through the Business Metadata (Cube) menu option. Select the cube code from the drop-down list.

Operation

Refers to the operation to be performed. Select the required Operation from the drop-down list. The options are:

·       ALL – This option will execute BUILDDB and DLRU.

·       BUILDDB – This option should be used to build the outline in the Essbase Cube. The outline is built based on the contents of the parentage file(s).

·       TUNEDB – This option should be used to analyze data and optimize cube settings, for example, to arrive at the best block size (64 KB is considered the ideal size).

·       PROCESSDB – This option will execute BUILDDB and DLRU, and is the same as the ALL option. Selecting this option internally assigns it as ALL.

·       DLRU – This option should be used to load data into the Essbase Cube and trigger a Rollup.

·       ROLLUP – ROLLUP refers to populating data in parent nodes based on calculations (for example, addition). This option should be used to trigger only the ROLLUP operation, wherein the CALC scripts are executed. The same is applicable to the DLRU option also.

·       VALIDATE – This option will validate the outline.

·       DELDB – This option will delete the Essbase cube.

·       OPTSTORE – This option will create the Optimized outline for the cube.

 

Component: EXTRACT DATA

Field

Description

Source Name

Select the source on which the extract you want to execute is defined, from the drop-down list.

Sources defined from the Source Designer window of Data Management Tools are displayed in the drop-down list.

Extract Name

Select the required extract name from the drop-down list. The list displays the Data Mapping definitions (T2F and H2F) defined on the selected source, from the Data Mapping window.

Default Value

 

 

Component: LOAD DATA

Field

Description

Load Mode

Select the load mode from the drop-down list. The options are Table to Table and File to Table.

Table to Table should be selected for Data Mapping definitions such as T2T, T2H, H2T, H2H, and L2H.

File to Table should be selected for Data Mapping definitions such as F2T and F2H.

Source Name

Select the required source on which the Data Mapping or Data File Mapping definition you want to execute is defined, from the drop-down list.

Based on the selection of Load Mode, the list displays the corresponding sources.

File Name

Select the Data Mapping or Data File Mapping definition you want to execute, from the drop-down list. Based on the selected Load Mode and Source Name, the list displays the corresponding definitions.

Data File Name

The data filename refers to the .dat file that exists in the database. Specifying the Data File Name is mandatory when Load Mode is selected as File to Table and optional when Load Mode is selected as Table to Table. If the file name or the .dat file name is incorrect, the task fails during execution.

In case of L2H, you can specify the WebLog name.

Default Value

Used to pass values to the parameters defined in Load Data Definition.

You can pass multiple runtime parameters while defining a batch by specifying the values separated by commas.

For example, $MIS_DATE=value,$RUNSKEY=value,[DLCY]=value and so on.

Note the following:

·       The parameters can either be specified with $ or within [ ]. For example, $RUNSKEY=value or [RUNSKEY]=value. When the definition is saved from the UI, no value is assigned to these parameters; they are passed only for syntax correctness. Actual values are passed to these parameters while defining an ICC batch or a RUN, as illustrated below.
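For illustration, the same parameter might appear as follows (the column name is hypothetical and the values are only indicative):

In the Data Mapping expression: DIM_CURRENCY.V_CCY_CODE = $DCCY

In the Default Value field of the ICC task or RUN: $DCCY='USD'

At runtime, every occurrence of $DCCY in the expression is replaced with 'USD'.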

The valid Default Parameters are:

·       RUNID- Data type is String and can be mapped to VARCHAR2

·       PHID- Data type is String and can be mapped to VARCHAR2

·       EXEID- Data type is String and can be mapped to VARCHAR2

·       RUNSK- Data type is Integer and can be mapped to VARCHAR2 or INTEGER.

·       SYSDATE- Data type is Date and can be mapped to DATE, VARCHAR2.

·       TASKID- Data type is String and can be mapped to VARCHAR2

·       MISDATE- Data type is Date and can be mapped to DATE, VARCHAR2.

·       BATCHRUNID- Data type is String and can be mapped to VARCHAR2

Note: RUNID, PHID, EXEID, RUNSK, MISDATE, and BATCHRUNID are implicitly passed through RRF. The rest must be passed explicitly.

·       EXEC_ENV_SOURCE- This parameter is used to replace an External Data Source or Infodom based Data Source of the T2T, T2H, H2T, or H2H definition during run time, provided the structure of the source in the mapping definition is the same as that of the replacing source. Hence you can convert a T2T definition into H2T, or T2H into H2H, and so on. If the resultant definition is T2T, then T2T execution using the CPP engine is not supported.

For external Data Source, prefix it with ‘EXT.’ and for Infodom based sources, prefix it with ‘INF.’. For example, [EXEC_ENV_SOURCE]=EXT.<newSourceName>

or

[EXEC_ENV_SOURCE]=INF.<newSourceName>

Additionally, Cluster properties of the current logged-in Infodom will be considered for the execution of the Data Mapping definition.

·       EXEC_ENV_SOURCE_OWNER_INFODOM – This parameter is used to specify the Infodom where the Data Source being replaced (<newSourceName>) was created, in case that Infodom is different from the current Infodom where the batch is executed. If this is not provided, the system looks for the Data Source in the current Infodom, which may result in a failed execution.

·       EXEC_ENV_TARGET- This parameter is used to replace the target Infodom of the T2T, T2H, H2T, or H2H definition during run time, provided the structure of the target in the mapping definition is the same as that of the replacing target. Hence you can convert a T2T definition into T2H, or H2T into H2H, and so on. But if the resultant definition is T2T, then T2T execution using the CPP engine is not supported.

For example, [EXEC_ENV_TARGET]=newTargetName

Also, DMT Configurations and Cluster properties of the new target Infodom will be considered for the execution of the Data Mapping definition.

Note: You can also use EXEC_ENV_SOURCE and EXEC_ENV_TARGET together. The only limitation is that if the resultant definition is T2T, execution using the CPP engine is not supported.
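For example, to repoint a definition to a different external source and a different target Infodom in a single execution, the Default Value could be specified as follows (using the same placeholders as above):

[EXEC_ENV_SOURCE]=EXT.<newSourceName>,[EXEC_ENV_TARGET]=<newTargetName>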

Note: If you are converting a mapping definition to T2H using EXEC_ENV_SOURCE/EXEC_ENV_TARGET, there is no provision in the UI to specify the Split By Column/Generic Options. In such scenarios, execution via Sqoop may fail when the Split By column defaults to a string/date column.

·       EXECUTION_ENGINE_MODE- This parameter is used to execute H2H on Spark. For example, [EXECUTION_ENGINE_MODE]=SPARK

·       CLOSE_SPARK_SESSION- This parameter is used to close the Spark session after executing the last H2H-Spark task in the batch.

In a batch execution, a new Spark session is created when the first H2H-Spark task is encountered, and the same Spark session is reused for the rest of the H2H-Spark tasks in the same run. For the Spark session to close at the end of the run, you need to set CLOSE_SPARK_SESSION to YES in the last H2H-Spark task in the batch.

For example, [CLOSE_SPARK_SESSION]=YES
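For illustration, both Spark-related parameters could be combined in the Default Value field of the last H2H-Spark task in the batch:

[EXECUTION_ENGINE_MODE]=SPARK,[CLOSE_SPARK_SESSION]=YES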

·       SRCHINT- This parameter is used to provide Source Hints. For example, [SRCHINT]= FIRST_ROWS(2)

Note that the value should not contain /*+ */; only the hint content should be given.

·       SRCPRESCRIPT- This parameter is used to provide Source Prescript.

Note: ALTER keyword is not supported.

·       TARGETHINT- This parameter is used to provide Target Hints. For example, [TARGETHINT]= FIRST_ROWS(2)

Note that the value should not contain /*+ */; only the hint content should be given.

·       TARGETPRESCRIPT- This parameter is used to provide Target Prescript.

Note: ALTER keyword is not supported.

Apart from these, L2H/H2H/T2H/H2T/F2H data mappings also support the following additional default parameters. Values for these are implicitly passed from ICC/RRF.

·       $MISDT_YYYY-MM-DD - Data type is String and can be mapped to VARCHAR2. Value will be the MISDATE in 'yyyy-MM-dd' format.

·       $MISYEAR_YYYY - Data type is String and can be mapped to VARCHAR2. Value will be the year value in 'yyyy' format from MISDATE.

·       $MISMONTH_MM - Data type is String and can be mapped to VARCHAR2. Value will be the month value in 'MM' format from MISDATE.

·       $MISDAY_DD - Data type is String and can be mapped to VARCHAR2. Value will be the day value in 'dd' format from MISDATE.

·       $SYSDT_YYYY-MM-DD - Data type is String and can be mapped to VARCHAR2. Value will be the System date in 'yyyy-MM-dd' format.

·       $SYSHOUR_HH24 - Data type is String and can be mapped to VARCHAR2. Value will be the hour value in 'HH24' format from the System date.

Note: The aforementioned parameters are not supported for T2T and F2T.
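For illustration, one of these parameters might appear in a hypothetical derived column expression of an H2T/T2H definition, for example, TO_DATE('$MISDT_YYYY-MM-DD','YYYY-MM-DD'). The MISDATE value in yyyy-MM-dd format is substituted implicitly at execution time; no entry is needed in the Default Value field.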

·       Only those variables that start with $ or [ will be replaced at run time, and the value of each variable will be everything starting after "=" and ending before the comma ",".

For example, if $DCCY/[DCCY] =’USD’, $RUNSKEY=1, then the replaced value in query for $DCCY will be ‘USD’ and for $RUNSKEY will be 1.

·       If you are using the "RUNSKEY" parameter in an ICC Batch, then ensure that you specify its value instead of specifying $RUNSKEY / [RUNSKEY] (for example, in an expression such as FCT_STANDARD_ACCT_HEAD.N_RUN_SKEY='$RUNSKEY'), since the value of RUNSKEY will not be replaced during runtime.

·       If there are quotes specified in the parameter value, then ensure not to use quotes while defining the expression, or vice versa, to avoid SQL errors. For example, if the parameter is $DCCY='USD' and the expression is defined using '$DCCY' instead of $DCCY, then the final value will be ''USD''.

·       When you execute a RUN, the run is always tagged with a RUNSK value (a unique value for each run fired directly from the RRF). You might have a DERIVED COLUMN in your T2T with an expression such as $RUNSK. If you execute this T2T through a RUN, a unique RUNSK value is passed implicitly to the T2T engine, which then assigns that value wherever $RUNSK is found. But if you try to execute the T2T through ICC, then you need to explicitly pass a $RUNSK as a parameter so that the T2T engine can use it, as shown below.
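For example, when the same T2T is scheduled through ICC, a value could be supplied in the Default Value field of the task (the value shown is only indicative):

$RUNSK=1014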

Two additional parameters are now supported for L2H mappings:

·        [INCREMENTALLOAD] – Specify the value as TRUE/FALSE. If set to TRUE, historically loaded data files will not be loaded again (load history is checked against the definition name, source name, target infodom, target table name and the file name combination). If set to FALSE, the execution is similar to a snapshot load, and everything from the source folder/file will be loaded irrespective of load history.

·       [FOLDERNAME] – Value provided will be used to pick up the data folder to be loaded.

§       For an HDFS based Weblog source: the value will be suffixed to the HDFS File Path specified during source creation.

§       For a Local File System based Weblog source: by default, the system looks for the execution date folder (MISDATE: yyyymmdd) under STAGE/<source name>. If you have specified FOLDERNAME for this source, the system ignores the MISDATE folder and looks for the directory provided as [FOLDERNAME].
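For illustration, both L2H parameters could be combined in the Default Value field of the task (the folder name is hypothetical):

[INCREMENTALLOAD]=TRUE,[FOLDERNAME]=WEBLOG_20180615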

Component: MODEL

 

Field

Description

Rule Name

Refers to the model that has to be processed. This is a system generated code that is assigned at the time of model definition.

Operation

The ALL option for the Operation field refers to the process of extracting the data from the flat files and applying the run regression on the extracted data.

For Batches that are built for the first time, the data is extracted from the flat files and the run regression is applied to it.

Optional Parameters

Refers to the set of parameters specific to the model that has to be processed. This set of parameters is automatically generated by the system at the time of definition.

You must NOT define a Model using the Define mode under Batch Scheduling. You must define all models using the Modeling framework menu.

 

Component: PROCESS_EXECUTION

This component combines all the rules to create single or multiple merge queries. Only rules defined on the same dataset can be merged. For the creation of queries, the current order of the rules inside the process or sub-process is taken into consideration. The following validations are performed on subsequent rules to determine whether a single DML or multiple DMLs are generated for merging Rules.

·       For a classification-classification or classification-computation rule combination, the target column of the prior classification rule must not be used as a source hierarchy in any of the subsequent rules in the executable process or sub-process. Also, the same target hierarchy must not be used as a target in the subsequent rule.

·       For a computation-computation rule combination, the target measures of the prior computation rule must not be used in any of the subsequent computation rules in the executable process or sub-process.

All the merge queries created after satisfying all the conditions will be executed in a single transaction.

 

NOTE

·       The RRF framework cannot validate the semantic correctness of the rules grouped for merge. It is left to the application developer/user to make a conscious choice.

·       If the merge results in an ill-formed or runaway SQL, the framework will not be able to detect it at design time. It is again left to the application developer/user to design a grouping that is syntactically valid.

 

Field

Description

Process Code

Displays the codes of the RRF Processes defined under the selected Infodom. Select the required Process from the drop-down list.

Sub Process Code

Displays the codes of the Sub Processes available under the selected Process. Select the required Sub Process from the drop-down list.

Build Flag

Select the required option from the drop-down list as “Yes” or “No”.

Build Flag refers to pre-compiled rules, which are executed with the query stored in the database. While defining a Rule, you can use the Build Flag to speed up Rule execution by making use of existing technical metadata details, wherein the rule query is not rebuilt during Rule execution.

A Build Flag status of "No" indicates that the query statement is formed dynamically by retrieving the technical metadata details. If the Build Flag status is set to "Yes", then the relevant metadata details required to form the rule query are stored in the database on "Save" of a Rule definition. When this rule is executed, the database is accessed to form the rule query based on the stored metadata details, thus enhancing performance during Rule execution. For more information, refer to Significance of Pre-Built Flag.

Optional Parameters

Refers to the set of parameters which would behave as filter criteria for the merge query.

 

Component: RULE_EXECUTION

Field

Description

Rule Code

Displays the codes of the RRF Rules defined under the selected Infodom.

Build Flag

Select the required option from the drop-down list as “Yes” or “No”.

Build Flag refers to pre-compiled rules, which are executed with the query stored in the database. While defining a Rule, you can use the Build Flag to speed up Rule execution by making use of existing technical metadata details, wherein the rule query is not rebuilt during Rule execution.

A Build Flag status of "No" indicates that the query statement is formed dynamically by retrieving the technical metadata details. If the Build Flag status is set to "Yes", then the relevant metadata details required to form the rule query are stored in the database on "Save" of a Rule definition. When this rule is executed, the database is accessed to form the rule query based on the stored metadata details, thus enhancing performance during Rule execution. For more information, refer to Significance of Pre-Built Flag.

Optional Parameters

Refers to the set of parameters which would behave as filter criteria for the merge query.

 

Component: RUN DQ RULE

Property

Description

DQ Group Name

Refers to the Data Quality Groups consisting of associated Data Quality Rule definition(s). Select the required DQ Group from the drop-down list.

Rejection Threshold

Specify the Rejection Threshold (%) limit as a numeric value. This refers to the maximum percentage of records that can be rejected in a job. If the percentage of failed records exceeds the Rejection Threshold, the job fails. If the field is left blank, the value defaults to 100%.

Additional Parameters

Specify the Additional Parameters as filtering criteria for execution in the pattern Key#Data type#Value;Key#Data type#Value; and so on.

Here the Data type of the value should be "V" for Varchar/Char, or "D" for Date with "MM/DD/YYYY" format, or "N" for numeric data. For example, if you want to filter some specific region codes, you can specify the Additional Parameters value as $REGION_CODE#V#US;$CREATION_DATE#D#07/06/1983;$ACCOUNT_BAL#N#10000.50;

Note: In case the Additional Parameters are not specified, the default value is fetched from the corresponding table in configuration schema for execution.

Parameters

Comma-separated parameters, where the first value is considered the threshold percentage, followed by additional parameters, each a combination of three tokens. For example, "90","PARAM1","D","VALUE1","PARAM2","V","VALUE2".

Note: The parameter "Fail if threshold is breached" is defaulted to "Yes" for RRF executions.

Optional Parameter

For DQ Rule execution on Spark, specify EXECUTION_VENUE=Spark in this field.

Note that you should have registered a cluster from the DMT Configurations > Register Cluster window with the following details:

·       Name- Enter the name of the Hive information domain.

·       Description- Enter a description for the cluster.

·       Livy Service URL- Enter the Livy Service URL used to connect to Spark from OFSAA.

 

Component: RUN EXECUTABLE

Field

Description

Executable

Refers to the executable path on the DB Server. The Executable parameter contains the executable name as well as the parameters to the executable. These executable parameters have to be specified as they are specified at a command line. In other words, the Executable parameter is the exact command line required to execute the executable file.

The path to the executable must be entered in quotes if the executable name or path includes a space. In other words, the details entered here should look exactly as you would enter them in the command window while calling your executable. The parameter value is case-sensitive, so ensure that you take care of spaces, quotes, and case. Also, commas are not allowed while defining the parameter value for the executable.

To pass parameters like $RUNID, $PHID, $EXEID, $RUNSK to the RUN EXECUTABLE component, specify RRFOPT=Y or rrfopt=y along with other executable details.
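For illustration, assuming a hypothetical script on the DB Server whose path contains a space, the Executable field could look like the exact command line below (script path and arguments are illustrative only):

"/ftpshare/exe folder/trigger_load.sh" 20180615 US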

Wait

When the file is being executed, you can either wait till the execution is completed or proceed with the next task.

Select Y (Yes) or N (No) from the drop-down list.

·       Y- Select this if you want to wait for the execution to be completed

·       N- Select this if you wish to proceed.

If the task uses the FICGEN/RUN EXECUTABLE component and no precedence is set for the task, then WAIT should always be set to 'N'.

Batch Parameter

Y- Select Yes if you want to pass the Batch parameters to the shell script file being executed.

·       If Wait is selected as Y and Batch Parameter is selected as Y, the following parameters are passed to the executable:

NIL <BatchExeRunID> <ComponentId> <Task> <Infodate> <Infodom> <DatstoreType> <IPAddress>

·       If Wait is selected as N and Batch Parameter is selected as Y, the following parameters are passed to the executable (see the sketch after this list):

<BatchExeRunID> <ComponentId> <Task> <Infodate> <Infodom> <DatstoreType> <IPAddress>

N- Select No if the Batch parameters should not be passed to the shell script.
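For illustration, a minimal shell sketch of a hypothetical script that reads these positional parameters when Wait is N and Batch Parameter is Y:

#!/bin/sh
# Positional parameters as passed by the batch (Wait = N, Batch Parameter = Y)
BATCH_RUN_ID=$1      # <BatchExeRunID>
COMPONENT_ID=$2      # <ComponentId>
TASK=$3              # <Task>
INFODATE=$4          # <Infodate>
INFODOM=$5           # <Infodom>
DATSTORE_TYPE=$6     # <DatstoreType>
IP_ADDRESS=$7        # <IPAddress>
echo "Executing task ${TASK} for ${INFODOM} on ${INFODATE}"

When Wait is Y, NIL is passed as the first parameter, so each value above shifts by one position ($2 through $8).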

Optional Parameters

This field will be considered only if you have specified RRFOPT=Y or rrfopt=y in the Executable field.

Specify the optional parameters that you want to pass to the executable. For example, $RUNID, $PHID, $EXEID, $RUNSK.

 

Component: SQLRULE

Field

Description

Folder

Refers to the location where the SQL Rule definition resides. Click the drop-down list box in the Value column to select the desired Folder.

SQL Rule Name

Refers to the defined SQL rule. Click the drop-down list in the Value column to select the SQL Rule.

 

Component: TRANSFORM DATA

Field

Description

Rule Name

Refers to the Data transformation name that was defined in the Post Load Changes window of Data Management Tools framework. Select the rule name from the drop-down list.

Parameter List

This is the list of parameters defined in the Data Transformation definition. The parameters must be in the same order as in the definition and must be separated by commas (","). Irrespective of the data type of the parameter defined in the procedure, the parameter specified through the front-end does not need to be specified within quotes (' ').

Note: Commas are used as delimiters for parameter values internally by the ICC Batch component. Ensure that commas are not used in any of the parameter values, that is, “a, b, c” should not be a parameter value in the list of parameter values being passed to the TRANSFORM DATA task. For example, if the parameter values to this task are required to be passed as (val1, val2, (a, b, c), val4), the correct way would be to pass these values as (val1, val2, (a*b*c), val4). You can use any other character as a separator.
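For illustration, assuming a hypothetical Data Transformation procedure that accepts a date, a country code, and a status flag (in that order), the Parameter List could be specified as:

20180615,US,Y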

 

Component: VARIABLE SHOCK

Field

Description

Variable Shock Code

Refers to the variable shock that has to be processed. This is a system generated code that is assigned at the time of variable shock definition.

Operation

Refers to the operation to be performed. Click the drop-down list in the Value field to select the Operation. The available options are ALL, GENDATAFILES, and GENPRNFILES.

Optional Parameters

Refers to Process ID and the User ID. Click in the text box adjacent to the Optional Parameters field and enter the Process ID and User ID.

 

Component: Workflow Execution

Field

Description

Object ID

Enter an object ID of your choice. This ID will appear as Entity ID in the Process Monitor window.

Workflow

Select the workflow you want to execute from the drop-down list. It displays all the workflows defined in the Process Modeller window.

Optional Parameters

Enter the value you want to pass to the Dynamic Parameters of the Run Task during the execution of the workflow.