5        Configuring DIH

This chapter helps you to configure DIH.

Topics:

·        Setting up ODI Connectivity

·        Refreshing Application Data Interface

·        Configuring System Parameters

·        Configuring External Data Sources

Setting up ODI Connectivity

One of the first tasks in configuring DIH is to register the details of how it connects to Oracle Data Integrator (ODI). The application allows mandatory and optional settings to be captured for this purpose.

NOTE:   

It is assumed that ODI is installed, configured, and verified as per its documentation before the steps in this section are carried out.

 

From the Data Integration Hub Designer window, select Configure and then select Settings. This window captures the ODI setup information.

Figure 6: Settings Mandatory Tab

 


Mandatory Settings

To capture or edit the mandatory settings, follow these steps:

1.     Edit the ODI User details.

2.     Enter the field details as explained in the Fields in Mandatory Settings section.

3.     To validate details specified, click Test Connection.

4.     The Validation Summary dialog box displays the status of the following details:

§       ODI Connection

§       ODI Project Name

§       ODI Folder Name under the project

§       Import Status of Knowledge Module from DIH

§       Import Status of ODI procedures from DIH

Figure 7: Validation Summary

 


5.     Enter the details and click Save.

Fields in Mandatory Settings

The following table describes the fields in the Mandatory Settings:

Fields in Mandatory Settings

1. ODI User: The ODI supervisor user name you defined when creating the master repository, or an ODI user name you defined in the Security Navigator after creating the master repository.

2. ODI Password: The ODI supervisor password you defined when creating the master repository, or an ODI user password you defined in the Security Navigator after creating the master repository.

3. Use JNDI: If Yes, enter the Master Repository JNDI. If No, enter the Master Repository Database User, Master Repository Database Password, Master Database Driver, and Master Database Connection.

4. Master Repository Database User: Database user ID/login of the schema (database, library) that contains the ODI master repository.

5. Master Repository Database Password: Master Repository Database user password.

6. Master Database Driver: Select from the drop-down list the driver required to connect to the RDBMS hosting the master repository. The default value is oracle.jdbc.OracleDriver and does not need to be changed when the master repository is on an Oracle database.

7. Master Database Connection: The URL used to establish the JDBC connection to the database hosting the repository. The format is jdbc:oracle:thin:@<Hostname/IP Address>:<Port Number>:<Service Name>. An illustrative connection sketch follows this list.

8. Master Repository JNDI: The JNDI name for the ODI master repository.

9. Work Repository: The name of the work repository that was created previously (for example, WorkRep1).

10. Project: Enter the Project Name created in ODI.

11. Folder: Enter the folder name under the project created in ODI. All the packages are created under this location.

12. Agent URL: Specify the URL where the ODI agent is running. This is used to execute a DIH connector from OFSAAI batch/RRF; it is not needed for data mapping. The format is http://<Hostname/IP Address where the ODI agent is running>:<Port Number>/<Agent Context Name>
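The mandatory settings above are standard JDBC connection details for the ODI master repository. The following Java sketch shows how such details are typically exercised with a plain JDBC connection; the host, port, service, and credential values are illustrative placeholders, the Oracle JDBC driver (ojdbc JAR) is assumed to be on the classpath, and this is not the DIH Test Connection implementation.

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class MasterRepoConnectionCheck {
        public static void main(String[] args) throws Exception {
            // Illustrative placeholder values; replace with your environment details.
            String driver   = "oracle.jdbc.OracleDriver";                       // Master Database Driver
            String url      = "jdbc:oracle:thin:@dbhost.example.com:1521:ORCL"; // Master Database Connection
            String user     = "ODI_MASTER";                                     // Master Repository Database User (assumed name)
            String password = "********";                                       // Master Repository Database Password

            Class.forName(driver);                                              // load the JDBC driver
            try (Connection conn = DriverManager.getConnection(url, user, password)) {
                System.out.println("Connected to: " + conn.getMetaData().getDatabaseProductVersion());
            }
        }
    }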

 

Optional Settings

NOTE:   

The following properties are optional and need not be specified if they are already available as environment variables in the server where the ODI agent is running.

 

To capture or edit the optional settings, follow these steps:

1.     Click the Optional tab to edit the optional ODI details. 

Figure 8: Settings Optional Tab


2.     Enter the field details explained in Fields in Optional Settings.

3.     Click the Add icon to add a row for each agent. The Save Agent As dialog box is displayed.

4.     Enter the Agent Name and URL and click OK.

NOTE:   

It is mandatory to enter the agent's name.

5.     Click the Edit icon to modify a saved agent.

6.     Click the Delete icon to delete an agent.

7.     Enter the details and click Save.

Fields in Optional Settings

The following table describes the fields in the Optional Settings.

Fields in Optional Settings

1. Character Set (applicable for File type Source): This field is applicable if the source system type is File. You must specify the character set when you are using the SQL loader for data loading.

2. ODI Oracle Home: This field is applicable if the source system type is File. You must specify the Oracle Home path where the ODI agent is located.

3. Scenario Regeneration: This field allows you to choose when Scenario Regeneration is performed in ODI. If you choose ‘Publish’, Scenario Regeneration is performed during Connector Publish in DIH. If you choose ‘Execution’, Scenario Regeneration is performed during each Connector execution.

NOTE: Scenario Regeneration is needed so that ODI tasks remain up to date with the source, destination, mapping, and related information, as well as parameters set through the ODI user interface, including those set for performance optimization/tuning purposes.

Performing Scenario Regeneration during Connector execution is, therefore, a useful option. However, in certain circumstances, especially when several instances of the same Connector are executed concurrently, Scenario Regeneration during Connector execution can be suboptimal and even trigger execution errors.

Refreshing Application Data Interface

The next logical step in setting up DIH for use is to refresh the Application Data Interface information. Application Data Interface (ADI) is a logical abstraction of the OFSAA Data Foundation (FSDF or OIDF) data model that you interact with as you use DIH for data movements. The “Refresh ADI” step reads the data model uploaded into your instance of OFSAA and creates ADIs. This step is relevant both when a data model is first uploaded into a fresh OFSAA instance and when the data model is subsequently uploaded with incremental changes. The “ADI Refresh” process compares the existing ADIs with those based on the updated model and reflects changes appropriately with additional ADIs, additional attributes for existing ADIs, or new data types for existing attributes, as applicable.
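Conceptually, this comparison is a diff of attribute metadata between the existing ADIs and the ADIs derived from the updated model. The following Java sketch is purely illustrative: the attribute names and data types are invented examples, and the logic only shows the kind of classification involved (additional attributes versus changed data types), not the actual ADI Refresh implementation.

    import java.util.HashMap;
    import java.util.Map;

    public class AdiRefreshDiffSketch {
        public static void main(String[] args) {
            // Existing ADI attributes and their data types (invented example values).
            Map<String, String> existing = new HashMap<>();
            existing.put("ACCOUNT_NUMBER", "VARCHAR2(50)");
            existing.put("BALANCE", "NUMBER(22,3)");

            // Attributes derived from the newly uploaded data model (invented example values).
            Map<String, String> updated = new HashMap<>();
            updated.put("ACCOUNT_NUMBER", "VARCHAR2(50)");
            updated.put("BALANCE", "NUMBER(22,6)");      // precision/scale changed
            updated.put("CURRENCY_CODE", "VARCHAR2(3)"); // new attribute

            for (Map.Entry<String, String> e : updated.entrySet()) {
                String oldType = existing.get(e.getKey());
                if (oldType == null) {
                    System.out.println("Additional attribute: " + e.getKey());
                } else if (!oldType.equals(e.getValue())) {
                    System.out.println("Data type change for " + e.getKey()
                            + ": " + oldType + " -> " + e.getValue());
                }
            }
        }
    }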

To refresh the ADI list, follow these steps:

1.     From the Data Integration Hub Designer window, select Configure and then select Refresh ADI. The Refresh Application Data Interface summary is displayed.

Figure 9: Refresh ADI Window

 


2.     Click Validate Datamodel to validate the data model. If there are data model issues, the Validate Datamodel window is displayed. This validates and identifies issues in the values specified for the user-defined properties of the physical/logical view in the OFSAA Data Model. Once executed, the utility logs the errors/issues identified.

Figure 10: Validate Datamodel Window

 


3.     You can search for an Object Name or Message. On validation, you receive a message. See the Data Model Validation Messages table for information on each message.

4.     Click Export to export the data model validation issues.

5.     Verify the information and click OK.

6.     Click Start to start the refresh of ADIs. The ongoing ADI refresh is displayed as follows:


7.     On successful invocation of ADI refresh, a message is displayed.

Figure 11: Refresh ADI Confirmation

 


8.     If you need a detailed running log, click the Download icon to download the log. A zip file containing the detailed log for the execution is downloaded. To view the log details, extract the log file from the zip folder.

9.     You can check the status:

§       Failed

§       Successful

§       Aborted

§       Alert

§       Warning

§       Not Applicable

10.  Click the Run ID link on the Refresh ADI window. This displays the Changes, Alerts, and Error Messages. Under the Changes tab, you can view all the ADI Refresh details.

Figure 12: Refresh ADI Changes Tab


Figure 13: Refresh ADI Alerts Tab


NOTE:   

Click Reload at any time to check the status of the ongoing ADI Refresh process.

 

Abstraction of Model Changes for Data Movement / ETL Processing

DIH abstracts and automates many types of data model changes and their impact, minimizing manual steps to be undertaken. Refer to the following table for types of changes, DIH automation available, and manual actions to be taken, if needed.

Abstraction of Model Changes for Data Movement / ETL Processing

·        When only the logical name of an attribute is changed: ADI Refresh updates the logical name in the DIH repository. No action is expected; changes are reflected automatically in the connector/ADI.

·        When only the description of an attribute is changed: ADI Refresh updates the description in the DIH repository. No action is expected; changes are reflected automatically in the connector/ADI.

·        When only the domain of an attribute is changed: ADI Refresh updates the domain in the DIH repository. No action is expected; changes are reflected automatically in the connector/ADI.

·        When both the logical name and domain of an attribute are changed: ADI Refresh updates the logical name and domain in the DIH repository. No action is expected; changes are reflected automatically in the connector/ADI.

·        When the physical name of an attribute is changed: ADI Refresh updates the physical name in the DIH repository. Perform “Refresh Target Data Store” and re-publish Connectors by first unpublishing and then publishing them.

·        When the data type of an attribute is changed: ADI Refresh updates the data type in the DIH repository. Perform “Refresh Target Data Store” and re-publish Connectors by first unpublishing and then publishing them.

·        When the precision/scale of an attribute is changed: ADI Refresh updates the precision/scale in the DIH repository. Perform “Refresh Target Data Store” and re-publish Connectors by first unpublishing and then publishing them.

·        When the physical name of an entity is changed: ADI Refresh updates the physical name in the DIH repository. Perform “Refresh Target Data Store” and re-publish Connectors by first unpublishing and then publishing them.

·        When the logical name (OFSAA Data Interface or SubType Name) of an entity is changed: ADI Refresh updates the logical name in the DIH repository. No action is expected; changes are reflected automatically in the connector/ADI.

 

Handling Model Changes with Impact on Data Movement / ETL Processing

The following types of model changes impact data movement performed through DIH.

·        One or more entities already configured for data movement in DIH are dropped

·        One or more columns already configured for data movement in DIH are dropped

If these types of changes are encountered, the ADI Refresh process displays the execution status as “Impact Identified”. In such cases, the affected data movement definitions, along with the entities, attributes, or both whose absence impacts data movement / ETL processing, are listed under the “Alerts” tab, as displayed in the following window.

Figure 14: Refresh ADI Alerts Tab


The following model changes impact the connectors during ADI refresh.

Handling Model Changes with Impact on Data Movement / ETL Processing

·        Message: The table which is dropped is already used in the connector.
Action: Unpublish and remove the ADI from the connector.

·        Message: The column which is dropped is already used in the connector.
Action: Unpublish and remove the attribute references from the connector. For an Insert type connector, remove the attribute reference from the mapping and the truncate filter expression. For an Extract type connector, remove the attribute reference from the filter, join, lookup, derived column, mapping, and aggregation components.

 

Data Model Validation Messages

DIH has a built-in mechanism to validate the data model uploaded to OFSAA before ADI Refresh. This mechanism verifies whether metadata relevant to DIH (entity classification, sub-type name, and so on) is available and valid for use. The following table details the messages that DIH posts when verification steps fail, the underlying reasons, and the mechanisms for resolution:

Data Model Validation Messages

·        Message: Table Classification Missing
Reason: The user-defined property “OFSAA Data Interface Class” is not specified in the logical view of the table in the OFSAA Data Model.
Resolution: Specify the value for the user-defined property in the OFSAA Data Model in ERWIN.

·        Message: Sub Type Name Missing
Reason: The user-defined property “OFSAA Data Interface Sub-Type” is not specified in the logical view of the table in the OFSAA Data Model.
Resolution: Specify the value for the user-defined property in the OFSAA Data Model in ERWIN.

·        Message: Duplicate ADI Name
Reason: The user-defined property “OFSAA Data Interface Name” must be different for the specified tables.
Resolution: Specify a unique value for the OFSAA Data Interface Name UDP in the OFSAA Data Model in ERWIN.

·        Message: No enabled application mapped
Reason: The application user-defined property for all columns of the table does not have the value “DL-MAN” or “DL-OPT”.
Resolution: Specify the “DL-MAN” or “DL-OPT” value for the application user-defined property.

·        Message: ADI Name Missing
Reason: The user-defined property “OFSAA Data Interface Name” is mandatory and is not specified in the logical view of the table in the OFSAA Data Model.
Resolution: Specify the value for the user-defined property in the OFSAA Data Model in ERWIN.

·        Message: Invalid Table Classification
Reason: The user-defined property “OFSAA Data Interface Class” can have the value R, S, or D only.
Resolution: Specify a correct value for the mentioned user-defined property.

·        Message: Invalid Subtype Name
Reason: The user-defined property “OFSAA Data Interface Sub-Type” is specified when there is no subtype for the ADI.
Resolution: The OFSAA Data Interface Sub-Type UDP is applicable when there are multiple subtypes for a given ADI. Otherwise, leave it blank or the same as the ADI name.

Configuring System Parameters

System Parameters are constant-value, run-time, or current-date variables intended for use with DIH. Apart from a seeded set of System Parameters, you can add, modify, or remove them as needed.

Understanding the Parameters Window

To access the Parameters window, follow these steps:

1.     From the Data Integration Hub Designer window, select Configure and then select Parameters. The Parameters Summary is displayed.

Figure 15: Parameters Window


2.     You can use the Search option to search for a specific Parameter.

3.     Click Export. The list of Parameters is exported to an Excel sheet with the following information:

a.     Parameter IDs

b.     Parameter Name

c.     Description

d.     Type

e.     Value

f.       Default Value

g.     Date Format

h.     Status

i.       Last Modified By

j.       Last Modified Date

4.     Click the Add icon to create a Parameter. For more information, see the Defining a Parameter section.

 

Fields in Parameters Window

Fields displayed in the Parameters window are explained in the following table.

Fields in Parameters Window

Fields marked with a red asterisk (*) are mandatory.

·        Parameter Name: The name for the placeholder that you want to define. For example, MISDATE, which can be used as a placeholder for a date.

·        Parameter Description: The description for the parameter you want to define. In this example, the description can be, “MISDATE can be used to substitute the date values for each day, dynamically, in mmddyyyy format.”

·        Parameter Type: There are three parameter data types (an illustrative substitution sketch follows this list):

·        Constant is selected for substituting a constant value.

·        RunTime is selected for substituting a value dynamically at run time. In the example used here, MISDATE can be selected as RunTime because it is used to make a substitution dynamically.

·        CurrDate is selected for substituting the current system date as the value.

·        Value: Only for Constant types. Holds the actual value of the parameter.
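As an illustration of how these parameter types behave, the following Java sketch shows a CurrDate-style value (the current system date formatted as mmddyyyy) and a Constant-style value being substituted into a filter-like expression. The #MISDATE placeholder syntax, the RUN_AREA parameter, and the expression are hypothetical and serve only to illustrate the idea of substitution; they are not DIH syntax.

    import java.time.LocalDate;
    import java.time.format.DateTimeFormatter;

    public class ParameterSubstitutionSketch {
        public static void main(String[] args) {
            // CurrDate-style parameter: current system date, formatted as mmddyyyy.
            String misdate = LocalDate.now().format(DateTimeFormatter.ofPattern("MMddyyyy"));

            // Constant-style parameter: a fixed value substituted as-is (hypothetical example).
            String runArea = "US_EAST";

            // Hypothetical expression using placeholder tokens for the two parameters.
            String template = "MIS_DATE = '#MISDATE' AND RUN_AREA = '#RUN_AREA'";
            String resolved = template.replace("#MISDATE", misdate)
                                      .replace("#RUN_AREA", runArea);

            System.out.println(resolved); // e.g. MIS_DATE = '01312024' AND RUN_AREA = 'US_EAST'
        }
    }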

Defining a Parameter

To define a new Parameter, follow these steps:

1.     On the Parameters Summary, click the Add icon to define a parameter. The Parameters window is displayed.

Figure 16: New Parameter Definition

 


2.     Enter the field details as explained in the Fields in Parameters Window section.

3.     Click Save.

4.     The Audit Trail section at the bottom of the window displays information about the parameter created.

Modifying and Viewing a Parameter

You can edit or view an existing Parameter, except Parameters that are in Published status.

NOTE:   

You cannot edit parameters in Published status.

 

To edit or view a parameter, follow these steps:

1.     Select the required Parameter from the Parameters Summary.

2.     The details of the selected Parameter are displayed. You can modify or view the details.

3.     Only the Parameter description, Parameter Type, and Value / Default Value / Date Format can be edited on this window. Update the required details.

4.     Click Save to save the changes made.

Deleting a Parameter

To delete an existing parameter, follow these steps:

1.     On the Parameters Summary, click the Delete icon. A confirmation dialog box is displayed.

2.     Click Yes. The Parameter details are deleted.

NOTE:   

Delete is enabled only in the following cases:

·        If the parameter is not in Published status.

·        If the parameter is not used by any higher object, for example, Connector/EDD.

·        If the parameter is not pre-seeded.

 

Unpublishing a Parameter

You can unpublish a parameter only when all the following conditions are met:

·        The parameter is in Published status.

·        All the higher objects using the parameter are unpublished, for example, Connector/EDD.

To unpublish a parameter, follow these steps:

1.     Select the required parameter from the Parameter Summary. The details of the selected parameters are displayed.

2.     Click Unpublish.

NOTE:   

Parameters are published automatically by the system whenever the higher objects (EDD/Connector) using them are published.

 

Dependency

Clicking the Dependency icon lists where the parent Parameter has a dependency.

Search and Filter

The Search and Filter option in the UI helps you to find the required information. Enter the nearest matching keywords in the search box to search and filter the results. You can search for a Parameter using its name, description, status, or type.

For example, enter the keyword ‘ODI’ in the search box. All Parameter names containing ODI are listed.

Parameters in EDD and Connectors

For information on parameters in EDD and Connectors, see the Parameters in EDD Definition and Parameters in Connector sections, respectively.

Configuring External Data Sources

DIH supports data ingestion from relational databases (Oracle Database, IBM DB2, Microsoft SQL Server, Sybase, and Teradata), files (XML, EBCDIC, and ASCII), and Big Data (Hadoop HDFS and Hive). Data stores of each of these types are registered with DIH by configuring them as External Data Sources (EDS). DIH also supports the extraction of data as ASCII files, which can likewise be defined as an EDS.

To understand the External Data Store window, follow these steps:

1.     From the Data Integration Hub Designer window, select Configure and then select External Data Store. The External Data Store Summary is displayed.

Figure 17: External Data Store Summary


2.     In the Source Systems section of the External Data Store Summary, you can define, edit, and delete a source.

3.     You can use the Search option to search for a specific Source.

4.     Click Export. The list of EDSs is exported to an Excel sheet with the following information:

a.     EDS IDs

b.     EDS Name

c.     Description

d.     EDS Type

e.     JDBC URL

f.       File Location

g.     Status

h.     Last Modified By

i.       Last Modified Date

5.     Click the Add icon to create an EDS. For more information, see the Defining an External Data Store section.

External Data Store Fields

The following table describes the fields in the External Data Store window.

Fields in External Data Store

Fields marked with a red asterisk (*) are mandatory.

·        Name: Enter the name of the Source.

·        Description: Enter a description for the Source.

·        Type: Select the source type. The following source types are available; see the corresponding sections for the additional fields specific to each type:

·    DB2 Type

·    EBCDIC Type

·    File Type

·    HDFS Type

·    HIVE Type

·    Oracle Type

·    SQL Server Type

·    Sybase Type

·    Teradata Type

·    XML Type

 

DB2 Type

IBM DB2 is a family of database server products. These products support the relational model.

When you select EDS Type as DB2, the following fields are displayed.

Fields in DB2

·        JDBC URL: Provide the URL of the Database. Example: jdbc:db2://<hostname>[:<port>]/<database>

·        User ID: Enter the User ID.

·        Password: Enter the password.

·        Schema: Enter the schema name.

 

 

EBCDIC Type

Extended Binary Coded Decimal Interchange Code (EBCDIC) File is a binary code for alphabetic and numeric characters.

When you select EDS Type as EBCDIC, the following fields are displayed.

Fields in EBCDIC

·        File Location: Enter the absolute path of the data file landing area. The ODI agent must be available and running on the server where the data file is located.

File Type

American Standard Code for Information Interchange (ASCII) File is a character-encoding scheme.

When you select EDS Type as File, the following fields are displayed.

Fields in ASCII

·        File Location: Enter the absolute path of the data file landing area. The ODI agent must be available and running on the server where the data file is located.

·        Encryption at Rest: If a source file is encrypted, or a destination file must be encrypted upon data extraction, choose the “Encryption at Rest” option and enter the Encryption Key File Path. DIH must have access to the source file landing area, and the UNIX user used for starting the agent must have execution permission for the DMT utility. Example: /landingzone/inputfiles

HDFS Type

Hadoop Distributed File System (HDFS) is an open-source and fundamentally new way of storing and processing data. It enables distributed processing of huge amounts of data across industry-standard servers that both store and process the data, and it can scale without limits. HDFS stores Big Data in a raw format.

When you select EDS Type as HDFS, the following fields are displayed.

Fields in HDFS

·        JDBC URL: Provide the URL of the Database. Example: hdfs://<Host>:<Port>

·        File Location: Enter the file location.

 

HIVE Type

Hive provides a mechanism to project structure onto the data in Hadoop and present Big Data in a tabulated format.

When you select EDS Type as Hive, the following fields are displayed.

Fields in HIVE

·        JDBC URL: Provide the URL of the Database. Example: jdbc:hive://<Host>:<Port>/<Schema>

·        Driver: Enter the driver for the HIVE datastore. For example, to connect to the Cloudera Hive server with JDBC 4.0 data standards, specify “com.cloudera.hive.jdbc4.HS2Driver” as the driver. See the Cloudera documentation for more information about Cloudera JDBC drivers. An illustrative connection sketch follows this list.

If Kerberos is enabled:

·        Principal User Name: Enter the Principal User Name.

·        Conf File Path: Enter the Kerberos Conf File path.

·        Key Tab Path: Enter the path of the Key Tab file generated for the principal user.

If Kerberos is not enabled:

·        User ID: Enter the User ID.

·        Password: Enter the Password.
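As with the other JDBC-based sources, the Hive URL and driver noted above can be exercised with a plain JDBC connection. The following Java sketch assumes the Cloudera JDBC driver named in the table is on the classpath; the host, port, schema, and credentials are illustrative placeholders, and this is not the DIH Test Connection implementation.

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class HiveConnectionSketch {
        public static void main(String[] args) throws Exception {
            // Driver class and URL format as shown in the Fields in HIVE table;
            // host, port, schema, and credentials below are placeholders.
            Class.forName("com.cloudera.hive.jdbc4.HS2Driver");
            String url = "jdbc:hive://hivehost.example.com:10000/default";

            try (Connection conn = DriverManager.getConnection(url, "hiveuser", "********")) {
                System.out.println("Connected to: " + conn.getMetaData().getDatabaseProductName());
            }
        }
    }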

 

Oracle Type

An Oracle database is a collection of data treated as a unit. The purpose of a database is to store and retrieve related information.

When you select EDS Type as Oracle, the following fields are displayed.

Fields in Oracle

·        JDBC URL: Provide the URL of the database. Example: jdbc:oracle:thin:@//<hostname>:<port>/<servicename>

·        User ID: Enter the User ID.

·        Password: Enter the password.

·        Schema: Enter the Schema name in uppercase.

·        Encryption in Transit: Choose this option if you want the data to be encrypted while reading from the source.

 

 

SQL Server Type

Microsoft SQL Server is a relational database management system. It is a software product with the primary function of storing and retrieving data as requested by other software applications, which may run either on the same computer or on another computer across a network.

When you select EDS Type as SQL Server, the following fields are displayed.

Fields in SQL Server

·        JDBC URL: Provide the URL of the database. Example: jdbc:sqlserver://<hostname>\SQLExpress

·        User ID: Enter the User ID.

·        Password: Enter the password.

·        Schema: Enter the Schema name.

 

Sybase Type

Sybase produces software to manage and analyze information in relational databases.

When you select EDS Type as Sybase, the following fields are displayed.

Fields in Sybase Type

·        JDBC URL: Provide the URL of the database. Example: jdbc:sybase:Tds:<hostname>:<port>

·        User ID: Enter the User ID.

·        Password: Enter the password.

·        Schema: Enter the Schema name.

 

Teradata Type

Teradata Corporation provides analytic data platforms, applications, and related services. Its products enable users to consolidate data from different sources and make the data available for analysis.

When you select EDS Type as Teradata, the following fields are displayed.

Fields in Teradata

·        JDBC URL: Enter the URL of the database. Example: jdbc:teradata://<hostname>

·        User ID: Enter the User ID.

·        Password: Enter the password.

·        Schema: Enter the Schema name.

 

XML Type

Extensible Markup Language (XML) is a markup language that defines a set of rules for encoding documents.

When you select EDS Type as XML, the following fields are displayed.

Fields in XML

·        File Location: Enter the absolute path of the data file landing area. The ODI agent must be available and running in the server where the data file is located.

Defining an External Data Store

To define a new source from the External Data Store Summary, follow these steps:

1.     On the External Data Store Summary, click the Add icon to define a new External Data Store. The External Data Store window is displayed.

Figure 18: New External Data Store

 


2.     Enter the values in the fields as described in the External Data Store Fields section.

3.     The fields change depending on the Type option selected. For example, if the Type is selected as File, the File Location field must be entered.

4.     Click Test Connection to test the connection details (User ID/ Password) for the database types DB2, HIVE, Oracle DB, SQL Server, Sybase, and Teradata.

5.     Enter these details and click Save.

Modifying and Viewing an External Data Store

To edit and view an external data store, follow these steps:

1.     Select the required EDS from the External Data Store Summary.

NOTE:   

You cannot edit Published objects.

 

2.     The Audit Info section at the bottom of the window displays information about the source created.

3.     EDS Name and Type cannot be edited.

4.     Click Save to save the changes made.

Deleting an External Data Store

To delete an existing EDS, follow these steps:

1.     On the EDS Summary, click the Delete icon. A confirmation dialog box is displayed.

2.     Click Yes. The EDS details are deleted.

NOTE:   

Delete is enabled only in the following cases:

·        If the EDS is not in Published status.

·        If the EDS is not used by any object.

 

Unpublishing an External Data Store

You can unpublish an EDS only when all the following conditions are met:

·        The EDS is in Published status.

·        All the higher objects using the EDS are unpublished, for example, Connector/EDD.

To unpublish an EDS, follow these steps:

1.     Select the required EDS from the EDS summary. The details of the selected EDS are displayed.

2.     Click Unpublish.

NOTE:   

An EDS is published automatically by the system whenever the higher objects (EDD/Connector) using it are published.

 

Dependency

Clicking the Dependency icon lists where the parent EDS has a dependency; that is, you cannot delete a child file without deleting the parent file.

Search and Filter

The Search and Filter option in the UI helps you to find the required information. Enter the nearest matching keywords in the search box to search and filter the results. You can search for an EDS using its name, description, status, or type.

For example, enter the keyword ‘DRM’ in the search box. All EDS names containing DRM are listed.