Oracle® Fusion Middleware

Release Notes for Oracle Data Integrator

12c (12.2.1.2.6)

E81000-02

February 2018

1.1 Oracle Data Integrator Release Notes

These release notes contain information about the known issues associated with Oracle Data Integrator and the post-installation patches.


1.1.1 What's New in Oracle Data Integrator?

To view the new features and significant product changes for Oracle Data Integrator in the Oracle Fusion Middleware 12c release, see the New and Changed Features for Release 12c (12.2.1.2.6) section in Administering Oracle Data Integrator.

1.1.2 Oracle Data Integrator 12.2.1.2.6 ReadMe File

A ReadMe file is included in your distribution, located in the top-level directory of the zip file. The ReadMe file includes information about this release (features, prerequisites, and install/uninstall instructions). You must use the ReadMe file to install ODI 12.2.1.2.6. Read the entire ReadMe file before proceeding.

1.1.3 Oracle Data Integrator Console Issues and Workarounds


1.1.3.1 Accessibility Settings are Not Working

Accessibility settings can be applied only to components whose accessibility is not managed at the ADF level. [20584947]

1.1.3.2 Starting Oracle Data Integrator Studio on Windows Operating Systems

On Windows operating systems, only the user who installed ODI can start ODI Studio; no other user has the privileges to start ODI Studio. [23070381]

1.1.3.3 No Option for Closing Opened Tabs of ODI Console in Microsoft Edge

When you log in to ODI Console and access your work repository through the Microsoft Edge 20.10240 or Microsoft Edge 25.10586.0.0 browser on the Windows 10 Enterprise operating system and try to open or view any tabs, you do not have an option to close the opened tabs. For example, when you open a knowledge module from the path Design Time -> Global Objects -> Global Knowledge Modules, you do not have an option to close the opened knowledge module tab in Microsoft Edge. [23334641]

1.1.4.1 Importing 11g Master Repository XML into 12c Removes Instance-Level Permissions

If you import 11g master repository XML into 12c using XML file import, the object instance-level permissions are lost. If needed, you must grant the object instance-level permissions to the users again after completing the upgrade. [23608304]

Note:

Upgrading repositories using the file import/export option is not supported. You must use the Upgrade Assistant to upgrade the repositories.
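For reference, the Upgrade Assistant can typically be launched from the Oracle home as follows. This is a sketch assuming a standard Oracle Fusion Middleware 12c directory layout; adjust ORACLE_HOME to your installation.

  • On UNIX platforms: $ORACLE_HOME/oracle_common/upgrade/bin/ua

  • On Windows platforms: %ORACLE_HOME%\oracle_common\upgrade\bin\ua.bat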

1.1.4.2 SAP Extraction Programs

As part of the 12c upgrade, SAP KMs will use GUID-based program names and function group names. Because of this, SAP extraction programs must either be redeployed or all interfaces must have the old program name set. [14538105]

1.1.4.3 11g RKM SAP ERP and 11g RKM SAP BW not supported in Legacy mode

The UI mode of the 11g RKM SAP ERP and 11g RKM SAP BW does not work in ODI 12c legacy mode. Either use the non-UI mode or upgrade to the latest SAP connector version. [14523712]

1.1.4.4 Continue Repository Creation with MySQL 5.7

During the Repository Creation phase of Oracle Fusion Middleware installation, when you continue the installation process with the database version MySQL 5.7, the following certification warning message is displayed:
The selected database is more recent than the supported list of certified databases for this version of Oracle Fusion Middleware. For the most recent list of certified databases, refer to the Supported System Configurations information on the Oracle Technology Network.

Click Ignore and continue with the installation process.

This message is intended to inform you about the selected database version, not to stop or quit the installation process. [23242675]

1.1.4.5 999 is a Prohibited Master Repository ID

999 is a prohibited master repository ID and should not be used. [21083009]

1.1.4.6 Domain Assisted Schema Upgrade (DASU) Does Not Pre-populate ODI Supervisor Credentials

In the Oracle Fusion Middleware Upgrade Assistant, when the All Schemas Used by Domain option is selected, the Supervisor credentials for ODI are not pre-populated in the first instance as the domain does not contain them. If there are multiple ODI schemas, the Upgrade Assistant populates the user entry using the first set of credentials. [20323393]

1.1.4.7 VCS Profiles Cannot be Renamed or Duplicated

Users that are to be assigned the VCS Admin or Release Manager role must be assigned the VCS_VERSION_ADMIN or RELEASE_MANAGER profile, respectively.

If these profiles are renamed or duplicated, the VCS Admin and Release Manager roles will not function. [24678079]

1.1.4.8 Unable to Schedule Job to OracleDIAgent when Client and Server are in Different Time Zones

A job cannot be scheduled to OracleDIAgent when the client (studio and agent) and server (repository) are set to different time zones. [23216335]

To sync the time zone for Studio, perform the following steps:

  1. Go to the Studio home and edit the Studio configuration file $ODI_HOME/studio/bin/odi.conf.

  2. Set AddVMOption -Duser.timezone to the repository time zone.
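For example, to align Studio with a repository running in the GMT time zone, the odi.conf entry would be the following (GMT is an illustrative value; use your repository's time zone):

AddVMOption -Duser.timezone=GMT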

To sync the time zone for the agent (Managed Server), perform the following steps:

  1. Modify the WebLogic domain environment settings for your domain in the following files:

    • On UNIX platforms: setDomainEnv.sh

    • On Windows platforms: setDomainEnv.cmd

  2. Within the file, edit the EXTRA_JAVA_PARAMETERS environment variable definition by adding the following Java argument: -Duser.timezone=<required time zone>

    For example, on Windows, set:

    set EXTRA_JAVA_PARAMETERS=%EXTRA_JAVA_PARAMETERS% -Duser.timezone=GMT
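    On UNIX platforms, the equivalent addition to setDomainEnv.sh would look like the following sketch (GMT is illustrative; use the repository time zone):

    EXTRA_JAVA_PARAMETERS="${EXTRA_JAVA_PARAMETERS} -Duser.timezone=GMT"
    export EXTRA_JAVA_PARAMETERS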
    

1.1.4.9 Missing Records Found if Source Database is Oracle 12.1

If the source database of ODI is Oracle 12.1, some queries involving hash joins may return wrong results in cases where the hash join receives rowsets as input and produces one row at a time as output. An affected query contains a HASH JOIN, and its query plan (displayed using the ADVANCED format flag, as shown below) shows a hash join that does not have the "(rowset=...)" indication in the projection information section, whereas the right child of the hash join does have "(rowset=...)" in its projection information.

select plan_table_output from table (dbms_xplan.display_cursor('&sql_id', null,'ADVANCED')); 


Plan hash value: 3740981006
----------------------------------------------------------------------------------------------------------------
| Id  | Operation                              | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                       |                       |  1597 |  1606 |     8   (1)| 00:00:01 |
...
|*  4 |     HASH JOIN                          |                       |  1597 | 67074 |     8   (0)| 00:00:01 |
|   5 |      TABLE ACCESS FULL                 | K_POSITION            |   786 | 16506 |     4   (0)| 00:00:01 |
|   6 |      TABLE ACCESS FULL                 | KW_POSITION           |  1597 | 33537 |     4   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("KWP"."KONTOPOSITION_FK"="KP"."KONTOPOSITION_ID")



Column Projection Information (identified by operation id):
-----------------------------------------------------------

   4 - (#keys=1) "KP"."POSIT_FK"[NUMBER,22],
       "KP"."SOND_FK"[NUMBER,22], "KWP"."WAE"[VARCHAR2,20]             ------>> No rowset in HASH JOIN  Column Projection information                  
   5 - (rowset=256) "KP"."KONTOPOSITION_ID"[NUMBER,22],
       "KP"."SOND_FK"[NUMBER,22], "KP"."KONTO_FK"[NUMBER,22]
   6 - (rowset=256) "KWP"."POSIT_FK"[NUMBER,22],                       ------>> (rowset=256) information 
       "KWP"."WA"[VARCHAR2,20]

To overcome this issue, upgrade your database to 12.2, or patch 12.1 with DB PSU 12.1.0.2.160419 or the patch for DB Bug 22173980. [22173980]

1.1.4.10 Unable to Include Dependencies while Creating Version

When you perform the following steps:

  • Enable GIT/Subversion

  • Enable wallet

  • Create connection to GIT/Subversion

  • Add mapping to VCS

  • Modify mapping

and then close ODI Studio and start it again to create a version of the mapping including dependencies, a null pointer error occurs.

As a workaround:

  • Navigate to Team -> Settings -> Edit Connection and click OK.

    The wallet password dialog appears.

  • Enter the wallet password and then create the version with dependencies.

You can then successfully create a version of the mapping, including dependencies. [25168395]

1.1.4.11 Data Server Password Must Not Exceed 35 Characters

The Password field that is specified while creating a new data server in Topology in ODI Studio must not exceed 35 characters; only the first 35 characters are used. For example, if the password for a technology is ‘Xcter23lnbvWE3478klnksddchv89$%jewwoSD983e’, only the first 35 characters (‘Xcter23lnbvWE3478klnksddchv89$%jeww’) are considered. This is relevant when creating a data server for technologies such as Salesforce, where passwords are generated from the backend and are generally long. An error message is also displayed if you specify more than 35 characters.

1.1.5.1 Preferences that are Not Used in Oracle Data Integrator Appear in ODI Studio

Preferences that are not used in ODI are picked up from the JDeveloper IDE by default, and these features appear in ODI Studio > Tools > Preferences. [21656747]

1.1.5.2 Attributes are Not Copied when Duplicating a New Datastore

If you attempt to duplicate a newly created datastore with attributes without first closing the tab of the newly created datastore, the attributes are not copied.

As a workaround, save and close the newly created datastore with attributes before selecting Duplicate Selection. [21572433]

1.1.5.3 Unable to Generate a Scenario Using the Client Library Package

If you attempt to generate a scenario for a map using the client library package, the following error is displayed:

java.lang.NoClassDefFoundError: groovy/lang/Binding

As a workaround, copy MW_HOME/oracle_common/modules/groovy-all-2.3.7.jar locally and add the jar file to the classpath. [21510593]
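A minimal sketch of this workaround on UNIX follows; the local lib directory and the client class MyOdiSdkApp are hypothetical placeholders:

# Copy the Groovy jar from the middleware home to a local directory
cp $MW_HOME/oracle_common/modules/groovy-all-2.3.7.jar /home/user/lib/

# Include it on the classpath of the SDK application that generates the
# scenario (MyOdiSdkApp is a hypothetical client class)
java -cp "/home/user/lib/groovy-all-2.3.7.jar:$CLASSPATH" MyOdiSdkApp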

1.1.5.4 Non-ASCII Characters in a Hive Table are Not Displayed Properly

Non-ASCII characters in a Hive table that is based on a UTF-8 encoded file are not displayed properly. As a workaround, specify -J-Dfile.encoding=utf8 when starting ODI Studio to view non-ASCII characters in a Hive table. [19632983]
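For example, either of the following sets the property (a sketch; the launcher name and location may vary by platform, and the odi.conf path is the one referenced in section 1.1.4.8):

# Pass the option when launching Studio (odi.sh shown as an example launcher)
$ODI_HOME/studio/odi.sh -J-Dfile.encoding=utf8

# Or persist it in $ODI_HOME/studio/bin/odi.conf
AddVMOption -Dfile.encoding=utf8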

1.1.5.5 Editing Expanded Submap of Dimension or Cube component

You cannot edit the expanded map of a dimension or cube component. Changes made in the expanded map are not persisted and are not saved. [23110100]

1.1.5.6 SCD2_CURRENT_FLAG is not used by ODI in current built-in pattern

In the Dimension Editor, when you navigate from Levels Table -> Level Attributes Table, the SCD2 (Slowly Changing Dimensions) Setting drop-down list displays Current Record as one of its values. The Current Record value is not used in the current built-in pattern of ODI. [23239046]

1.1.6.1 LKM Hive to File Direct Fails when Exporting to HDFS

When executing a mapping using LKM Hive to File Direct, it fails and the following error is displayed:

ODI-1227: Task Unload Hive data-LKM Hive to File Direct- fails on the source connection HIVE_DATA_SERVER

This is caused by Hive bugs HIVE-5672 and HIVE-6410, which cause the INSERT OVERWRITE statement to fail when writing to HDFS. Note that these Hive bugs are already fixed, and the issue is resolved by upgrading to a recent version of CDH or Hortonworks. [21529011]

1.1.6.2 Log Files are Deleted Even in Case of Failure when Using the OdiOSCommand on Oozie

Many KMs that use OdiOSCommand use the OUT_FILE/ERR_FILE parameters to redirect output into log files. The directory for such files is based on the KM option TEMP_DIR, which uses a default value of System.getProperty("java.io.tmpdir"). This causes ODI on Oozie to use an Oozie job temporary directory, which is cleaned up on job completion, irrespective of whether the job was successful. As a result, the log files are not available after execution.

As a workaround, when executing on Oozie, override the KM option TEMP_DIR with a specific temporary directory. [21232650]

1.1.6.3 Oozie Initialization Fails

Oozie initialization fails and the following error is displayed:

java.io.IOException: E0504 : App directory <dir_name> doesn't exist OR ODI-1028: There are issues with the Log Retriever components. No Log Retriever flow with name <name> is running.

The issue occurs on pure CDH 5.4.0+ pseudo/multi-node clusters.

As a workaround:

  1. Make sure that the Oozie sharelib has already been created using the following command:

    oozie-setup sharelib create -fs hdfs:///user/oozie -locallib <path to local folder [oozie-sharelib-yarn]>
    

    Note:

    Folder oozie-sharelib-yarn is local to the Oozie setup. After creating the sharelib, you can verify it on HDFS at the location hdfs:///user/oozie/share/lib/lib_<timestamp>; see the example after these steps.

  2. Add the following properties to oozie-site.xml. These properties are needed for Oozie to obtain the Hadoop configuration files to access HDFS. In the first property value, add the path after "*=":

    <property> 
    <name>oozie.service.HadoopAccessorService.hadoop.configurations</name> 
    <value>*=<path_to_hadoop_configuration_folder, for example /etc/hadoop/conf></value> 
    </property> 
    <property> 
    <name>oozie.service.WorkflowAppService.system.libpath</name> 
    <value>hdfs:///user/oozie/share/lib</value> 
    </property>
    
  3. Restart your Oozie and Hadoop services. [21410186]
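For example, to confirm that the sharelib exists on HDFS (a sketch; the lib_<timestamp> directory name varies per installation):

# List sharelib directories on HDFS; expect one or more lib_<timestamp> entries
hdfs dfs -ls /user/oozie/share/lib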

1.1.6.4 Error Displayed During Oozie Initialization

The oozie.coord.application.path value does not get appended to the OdiLogRetriever.properties file, and the following error is displayed: [21410186]

E0504: App directory doesn't exist 

1.1.6.5 SQOOP KMs Fail on Oozie

KMs using SQOOP fail when executed on Oozie on a CDH version prior to 5.4.1.

As a workaround, set the KM option EXTRA_HADOOP_CONF_PROPERTIES to --skip-dist-cache. Another workaround is to upload all SQOOP jars into the HDFS directory reported in the FileNotFoundException. [21232570]

For example:

hdfs dfs -mkdir -p /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars 
hdfs dfs -copyFromLocal /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars/* /opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/jars 

1.1.6.6 Teradata and SQL Server Do Not Allow ORDER BY in Subqueries

Teradata and SQL Server do not allow ORDER BY in subqueries. [20873100, 20816875]

1.1.6.7 Disregard "Failed to set setXIncludeAware(true) for parser" Warnings

When you execute Pig or Oozie workflows through ODI, you may encounter warning messages such as Failed to set setXIncludeAware(true) for parser, regarding XML parsing failures, in the ODI logs or Studio console. This error occurs when the Java Xerces parser is used for Pig execution, because the default implementation does not support the XIncludeAware feature in XML parsing.

As a workaround, add xmlparserv2.jar to the classpath of the Pig data server. [21238180]

1.1.6.8 Pig Does Not Provide Implicit Type Conversion

When specifying constant expressions, the datatype for the constant must exactly match the attribute datatype because Pig does not provide implicit type conversion. For example, if the attribute is defined as DOUBLE, the constant expression for this attribute should be set to 999.0 instead of 999. [20808984]

1.1.6.9 Mapping Execution Fails in Pig

When a mapping is processed using Pig and there is an Aggregate component in the Pig staging area, the Having clause must be set differently from similar mappings for SQL-based technologies. [20723728]

1.1.6.10 Complex Aggregation Not Supported by Pig Latin

When using the Aggregate component in the Pig staging area, you cannot specify a complex expression in an aggregate function, for example, SUM(source.col1 + source.col2). This kind of aggregation is called "complex aggregation" and Pig Latin does not support it. If a complex expression is needed, an Expression component must be added to the mapping ahead of the Aggregate component. [20302859]

1.1.6.11 Mapping Editor May Not Display All Template IKMs

The Mapping Editor may not correctly list the imported Template IKMs for selection. To list the imported IKMs, you must change the Target Integration type from its default (Control Append) to either Incremental Update or None. [20583432]

1.1.6.12 Date Comparison May Not Work as Expected if the Date is a String Datatype

In the Spark project, if the source file uses File technology, ODI converts the Date into a string datatype. This may cause the Date comparison to fail. [20029929]

1.1.6.13 LKM File to Oracle (External Table) Limitation

When executing a mapping, it may fail at the Create Work View task and the following error may be displayed:

ODI-1228: Task Create work view-LKM File to Oracle (EXTERNAL TABLE)- fails on the target connection SVR2_ORACLE.
Caused By: java.sql.SQLSyntaxErrorException: ORA-00955: name is already used by an existing object

This happens when the work table name is truncated to meet the maximum length specified in Oracle DB. To resolve this issue, check the Use unique Temporary Object Names option in the Physical Mapping tab. [20142371]

1.1.6.14 XKM SQL Distinct Limitation

When a mapping is created with Oracle as source and Oracle as target using a Distinct component and the XKM SQL Distinct is selected in the DISTINCT node, the mapping fails and the following error is displayed:

The physical node DISTINCT_ cannot be supported by technology Oracle on execution unit src_UNIT of mapping Mapping New_Mapping[11] owning folder=ODIOGG.First Folder

To resolve this issue, update the topology information so that Support Distinct Operator is set to True. [20234590]

1.1.6.15 The UNION_DISTINCT Pig Operator Does Not Remove Duplicate Outputs

The UNION Pig operator uses the following modifiers to specify the uniqueness characteristic: [20368827]

  • unspecified – Perform a DISTINCT operation on output

  • DISTINCT – Perform a DISTINCT operation on input, but not output

  • ALL – Do not perform a DISTINCT operation on input or output

1.1.6.16 Log Level and Log File Not Displayed in the Complex File Dataserver Properties

When creating a Complex File dataserver, the log level (ll) and log file (lf) properties are not displayed in the Properties tab. [20377218]

1.1.6.17 BinaryType Data Type Not Supported in Spark 1.1

The Hive datatype BinaryType is not supported in Spark 1.1. When using LKM Hive to Spark with Spark 1.1, the following error is displayed in the Spark execution log: [20260906, 20391714]

HIVE ValueError: not supported type: <type 'bytearray'>

1.1.6.18 Hive Complex Datatypes Not Supported by LKM Spark to Hive

The following Hive complex datatypes are not supported:

  • MapType

  • UnionType

  • ArrayType

Using these complex datatypes causes an unknown issue. [20141453, 20391743]

1.1.6.19 Spark Execution Supports only YARN Deployment

It is recommended to run Spark applications on YARN, because ODI supports only yarn-client and yarn-cluster mode executions and enforces this with a runtime check. Switch to YARN execution if you have been using other Spark execution modes. [24846472]

If switching to YARN execution mode is not possible or you wish to continue with unsupported Spark execution modes, the following DataServer property must be added to the Spark DataServer:

odi.spark.enableUnsupportedSparkModes = true 

Note that no Support Requests can be raised regarding unsupported Spark execution modes.

1.1.6.20 Spark-Cassandra: Permission Errors in YARN-client mode

When connecting to Cassandra sources or targets using "LKM SQL to Spark" or "LKM Spark to SQL", the JDBC driver parameter or property SchemaMap must not be used in YARN-client mode. Due to conflicting owners in the YARN-client execution model, the WebLogic JDBC Driver for Cassandra encounters file permission problems and displays error messages such as:
py4j.protocol.Py4JJavaError: An error occurred while calling o140.jdbc. 
: java.sql.SQLException: [FMWGEN][Cassandra JDBC Driver][Cassandra]Unable to create local database file: $$ The cause: $$ 

This error is often caused by the driver not having write access to the target directory. [24928801]

1.1.6.21 Known Datatype Issues using Spark 1.6

Due to limitations in Spark 1.6, the following Oracle datatypes cannot be handled using LKM SQL to Spark or LKM Spark to SQL: [25047069]

  • Use of FLOAT and REAL will cause the following ValueError:
    ValueError(u'Could not parse datatype: decimal(38,-127)',) 
    
  • Use of extended TIMESTAMP and INTERVAL datatypes such as: TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE, INTERVAL DAY TO SECOND, INTERVAL YEAR TO MONTH will cause the following errors:

      py4j.protocol.Py4JJavaError: 
      An error occurred while calling o43.jdbc.: 
      java.sql.SQLException: Unsupported type -101  
    

1.1.6.22 Unable to Store Alias Error in Pig

If the mapping execution in Pig fails and the Unable to store alias error is displayed, set the pig.optimizer.rules.disabled property for the Pig server to FilterLogicExpressionSimplifier. [20520865]

1.1.6.23 KMs Replaced During Repository Upgrade

By default, all loaded KMs in the repository are replaced during repository upgrade, irrespective of whether they have been modified. Do not upgrade the KMs during repository upgrade.

The following are the workarounds to upgrade the KMs:

  • If you have SAP KMs, you must not upgrade the KMs during repository upgrade. The new SAP KMs require new ODI SAP components. Using new SAP KMs with old ODI SAP components causes any SAP mappings to fail.

    As a workaround, uncheck Replace KMs with Mandatory Updates when upgrading the ODI repositories. To upgrade SAP KMs, follow the upgrade instructions given in the Application Adapters Guide for Oracle Data Integrator for the respective ODI SAP adapter.

  • If you have any custom KMs, the customizations are lost if you upgrade the KMs during repository upgrade.

    As a workaround, uncheck Replace KMs with Mandatory Updates when upgrading the ODI repositories. After you upgrade the repositories, manually replace only those KMs that you want to upgrade.

1.1.6.24 Erroneously Published SDK API Classes Removed from the 12c Javadocs

Due to a bug in Javadoc generation, 41 internal classes were erroneously published in the 12.1.2, 12.1.3, and 12.1.3.0.1 public SDK API Javadocs. These classes were intended for internal use and have been removed from the 12c public SDK APIs. The classes removed from the 12c Javadocs are listed below. If you are using any of these classes in your program, correct your program and remove their usage: [21700125]

  • AdapterException

  • ComponentDefinitionParser

  • ComponentRegistryHelper

  • ExecutionUnit.GenerationType

  • FCONamedObject

  • FCOPropertyOwner

  • FCORoot

  • IMapReferenceOwner

  • IMappingObject.SyncState

  • IModelObjectChange

  • IModelObjectChange.ChangeType

  • IObjectAdapterFactory

  • LocationAdapterBase

  • MapAttribute.ConnectionTypeInfo

  • MapAttribute.ConnectionTypeSelector

  • MapAttribute.DefaultConnectionTypeSelector

  • MapComponent

  • MapComponentOwner

  • MapComponentType.uidef

  • MapPhysicalDesign.ContextualComponentTreeNode

  • MapPhysicalDesign.ExecutionUnitConfiguration

  • MapPhysicalDesign.ExecutionUnitGraph

  • MapPhysicalDesign.ExecutionUnitGraphNode

  • MapPhysicalDesign.MapPhysicalDesignConfig

  • MapPhysicalDesign.NodeConfiguration

  • MapPhysicalDesign.PushDirection

  • MapPhysicalNode.RMCStackPropertyManager

  • MapRootContainer

  • MappingGenericTechnology.MappingLanguage

  • MappingGenericTechnology.MappingLanguageElement

  • MappingGenericTechnology.MappingSubLanguage

  • NamedObject

  • OdiComponent

  • OdiInterface.IPersistenceComparable

  • PropertyOwner

  • ResourceLoader

  • ResourceLoader.ResourceCandidate

  • ReusableMappingComponent.RMCConnectorPointDelegate

  • Root

  • RootIssue.TextPos

  • TargetLoadOrderException

1.1.6.25 Erroneous Records Handling Removed from ODI 11g and 12c Documentation

ODI File Driver does not have Erroneous Records Handling capabilities in releases 11g and 12c. The section on Erroneous Records Handling has been removed from the ODI documentation for releases 11g and 12c. This capability will, however, be restored in a future release. [23182473]

1.1.7 Post-install Patch Information for Oracle Data Integrator 12c

This section provides information on the post-installation patches for Oracle Data Integrator 12c.

After installing Oracle Data Integrator 12c (12.2.1.1), perform the following steps:

  1. Make a backup of your ODI repository schema.
  2. Upgrade all ODI repositories associated with the installation using the Upgrade Assistant. See your Upgrade documentation for detailed upgrade instructions.

    Note:

    Once the ODI repository is upgraded, it cannot be reverted even if you remove the patch. Make sure you take a proper backup of your existing ODI repository so that it can be restored if you remove this patch in the future for any reason.

  3. For setting up new domains with this patch, follow the instructions in Installing and Configuring Oracle Data Integrator.
  4. Clear the JDeveloper cache for all installations where ODI Studio is to be launched:
    • For UNIX platforms:

      Locate system12.2.1.0.0 in your Home directory and remove it.

      For example: rm -rf $HOME/.odi/system12.2.1.0.0

    • For Windows platforms:

      Locate system12.2.1.0.0 in your Home directory and remove it.

      For example: delete C:\Users\<username>\AppData\Roaming\odi\system12.2.1.0.0

  5. Start ODI Studio.
  6. Depending on the installation type, start the Standalone Agent or all servers (AdminServer and all Managed Servers).

1.1.8 Documentation Accessibility

For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.

Access to Oracle Support

Oracle customers that have purchased support have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.


Oracle Fusion Middleware Release Notes for Oracle Data Integrator, 12c (12.2.1.2.6)

E81000-02

Copyright © 2010, 2018, Oracle and/or its affiliates. All rights reserved.

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, then the following notice is applicable:

U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs. No other rights are granted to the U.S. Government.

This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

This software or hardware and documentation may provide access to or information about content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services, except as set forth in an applicable agreement between you and Oracle.