Oracle® Cloud
Known Issues for Data Integration Platform Cloud Service
E87303-14
April 2019
Known Issues for Oracle Data Integration Platform Cloud
Learn about the issues you may encounter when using Oracle Data Integration Platform Cloud and how you can work around them.
General
Here are the known issues related to more general topics.
Topics
Not all browsers are supported
Oracle Data Integration Platform Cloud supports the following web browsers:
| Web Browser | Version |
|---|---|
| Microsoft Internet Explorer | 11 and later |
| Google Chrome | 42 and later |
| Mozilla Firefox | 38 and later |
| Apple Safari | 7.x and 8.x |
| Microsoft Edge | For Windows 10 (without Java); can run Java Web Start |
Backup/Restore option performs operations with the Oracle user
When you perform a backup or restore, the operation runs automatically as the default oracle user, not as the opc user that you normally use to log in to your VMs. The backup/restore option also doesn't use the username you use to log in to My Services to reach your service instance's detail page, where you perform the backup/restore.
There's no workaround.
Patch does not support rollback
If you attempt to roll back a patch applied to your Oracle Data Integration Platform Cloud instance, the rollback fails with a message stating that rollback is not supported for this version.
Patching for Data Integration Platform Cloud instances fails if associated Oracle Database Cloud isn’t running
If you apply a patch to a DIPC instance when its associated Oracle Database Cloud Service is not running, the pre-check for the patch fails and you can’t apply the patch. After you start the Database Cloud Service instance, you must also restart the DIPC instance’s admin server for the patching to work.
Error during 11g replication on Cloud to Cloud and On-Premises to Cloud
Customers using Oracle 11g as the on-premises source database will encounter the following error when testing replication, both Cloud to Cloud and On-Premises to Cloud:
ERROR OGG-02912 PATCH 17030189 is required on your Oracle Mining database for trail format RELEASE 12.2 or higher
ERROR OGG-01668 PROCESS ABENDING
As a workaround, apply database patch 17030189 on the source Oracle 11g instances.
Error during Importing Deployment Archive (DA) into DIPC
You can ignore the Gateway Timeout error that appears while importing a Deployment Archive (DA) into DIPC. This error is displayed when the DA takes more than one minute to import; the DA is still imported successfully.
Re-importing Deployment Archive
Re-importing a Deployment Archive with the same scenario in ODI is ignored as "Nothing to Apply".
Supporting Autonomous Transaction Processing Cloud Service (ATP) in DIPC
To support ATP in DIPC, follow these steps:
- Create a deployment archive from Oracle (source) to ATP (target) by using the Oracle technology in ODI Studio.
- Use the same KMs as for ADWC when creating the mapping. For staging, use File and Object Storage.
- Create the ATP Connection in DIPC using the Oracle Autonomous Data Warehouse Cloud option.
Upgrade
Here are the known issues related to upgrade.
Topics
- Impact of 18.4.3 upgrade on Synchronize Data and Data Prep Tasks
- Upgrading Data Integration Platform On Premise Agent to 18.4.3
- EDQ console URL doesn’t appear in Data Integration Platform Cloud instance upgrades
- Data Integration Platform Cloud agent version is not updated after binary upgrade
- Instances upgraded to version 18.1.3 display erroneous link to non-existent sample applications
- Data entities not created for Connections upgraded from version 17.4.5
Impact of 18.4.3 upgrade on Synchronize Data and Data Prep Tasks
The 18.4.3 upgrade changes how DIPC accesses data sources on-premises in the Initial Load phase of a Synchronize Data Task as well as in Data Preparation Tasks. If you are using one of these Tasks with data sources on-premises, then you will need to recreate the Tasks after the 18.4.3 upgrade.
Upgrading Data Integration Platform On Premise Agent to 18.4.3
Some manual steps are required to upgrade the DIPC Agent to 18.4.3 if the 18.3.3 DIPC Agent was configured with ODI as the root user.
As of 18.4.3, the Agent no longer requires root permissions; however, it can only be started by the user that installed it.
Clean up existing agent
- Log in as root.
- Check that all ODI Execution tasks are complete. You can check this in the Monitor menu in DIPC.
- Stop the DIPC Agent and ensure all agent processes are stopped. Use the ${AGENT_UNZIP_LOC}/dicloud/agent/dipcagent001/bin/stopAgentInstance.sh command to stop the DIPC Agent.
- Use the ps -ef | grep agent.properties command to verify that the DIPC Agent is stopped. If any agent processes are still running, use kill -9 <pid> to kill them.
- Stop the MySQL instance on port 3307. Use ps -ef | grep mysql to find the process.
- Clean up the files created as the root user in the /tmp directory, such as dipc_agent_start.log, odi-seed.log, odi-seed.log.lck, rcu_input_file, and rcu_response.txt.
- Delete /tmp/createOdiStandaloneDomain.py.
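The cleanup steps above can be sketched as a single script. This is a hedged sketch, not the official procedure: AGENT_UNZIP_LOC and its default path are assumptions, and the kill commands are only echoed so you can review them before running them for real.

```shell
#!/bin/sh
# Sketch of the agent cleanup steps. AGENT_UNZIP_LOC is an assumed variable;
# point it at the directory where the agent installer was unzipped.
AGENT_UNZIP_LOC=${AGENT_UNZIP_LOC:-/u01/dipc}
STOP_SCRIPT="$AGENT_UNZIP_LOC/dicloud/agent/dipcagent001/bin/stopAgentInstance.sh"

# Stop the agent if the stop script is present on this host.
if [ -x "$STOP_SCRIPT" ]; then
    "$STOP_SCRIPT"
fi

# List any agent processes that survived; remove the echo to actually kill them.
for pid in $(pgrep -f agent.properties); do
    echo "kill -9 $pid"
done

# Remove the root-owned files the 18.3.3 agent left behind in /tmp.
for f in dipc_agent_start.log odi-seed.log odi-seed.log.lck \
         rcu_input_file rcu_response.txt createOdiStandaloneDomain.py; do
    rm -f "/tmp/$f"
done
echo "cleanup complete"
```

Stopping the MySQL instance on port 3307 is intentionally left out of the sketch, since the safe way to stop it depends on how it was started on your host.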
Install new Agent
- Exit the root user.
- Delete the dicloud directory from the location where you unzipped the DIPC Agent installer.
- Download the DIPC Agent installer again from the DIPC user interface and unzip it.
- Configure and start the Agent.
EDQ console URL doesn’t appear in Data Integration Platform Cloud instance upgrades
If you upgrade a Data Integration Platform Cloud instance that has Governance Edition, then you won’t have the EDQ Console URL commonly available through the user menu of Data Integration Platform Cloud.
To fix this issue:
- Connect to the admin VM of the Data Integration Platform Cloud instance.
- Open the suite.properties file located in /u01/app/oracle/suite and append the following entry:
suiteEdition=GE >> /u01/app/oracle/suite/suite.properties
- Perform a secure copy of the properties file to the other managed VMs in the Data Integration Platform Cloud instance:
scp /u01/app/oracle/suite/suite.properties <host2>:/u01/app/oracle/suite/suite.properties
- Restart the WLS servers from the console.
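The property edit and copy above can be sketched as follows. This is a sketch under stated assumptions: SUITE_DIR defaults to the documented path but falls back to a scratch directory when you try it off the VM, and host2 is a placeholder host name, so the scp command is only echoed.

```shell
#!/bin/sh
# Sketch of the EDQ console URL fix. On the instance, the file lives in
# /u01/app/oracle/suite; SUITE_DIR and host2 are assumptions.
SUITE_DIR=${SUITE_DIR:-/u01/app/oracle/suite}
# Fall back to a scratch directory when trying this sketch off the VM.
if [ ! -w "$SUITE_DIR" ]; then SUITE_DIR=$(mktemp -d); fi
PROPS="$SUITE_DIR/suite.properties"

# Record the Governance Edition flag, unless it is already set.
if ! grep -q '^suiteEdition=GE' "$PROPS" 2>/dev/null; then
    echo 'suiteEdition=GE' >> "$PROPS"
fi

# Copy the file to each managed VM; host2 is a placeholder host name.
for host in host2; do
    echo "scp $PROPS $host:/u01/app/oracle/suite/suite.properties"
done
```

After the copy, the WLS servers still need to be restarted from the console for the change to take effect.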
Data Integration Platform Cloud agent version is not updated after binary upgrade
The Data Integration Platform Cloud agents have their version displayed as 17.4.5 even after the binaries are upgraded to version 18.1.3.
Instances upgraded to version 18.1.3 display erroneous link to non-existent sample applications
Instances that were of version 17.4.5 and subsequently upgraded with a patch display a Sample Application field in the Instance Overview page. The Sample Application field contains a link to a page that displays an error.
This is because there are no sample applications for Data Integration Platform Cloud. The link also does not appear for new instances of version 18.1.3 or higher.
Data entities not created for Connections upgraded from version 17.4.5
Connections created in version 17.4.5 and upgraded to 18.1.3 do not have their data entities created and listed in the Catalog page.
As a workaround, perform the Refresh Data Entities action in the Hamburger menu for each connection.
Databases
Here are the known issues related to databases.
Topics
Oracle Container Databases (CDB) connection type missing in 18.1.3 environment
The Oracle CDB connection type option does not appear in the Create Connection page. This happens because the seeding is missing after patching in the 18.1.3 environment.
As a workaround, perform the following steps to complete the seeding:
- Create an SSH connection to the Cloud instance using the private key file. For example:
ssh -i <Key_file_path>/<private_key_file>.txt opc@10.89.108.18
- Change to the oracle super user. For example:
[opc@dipc1745patchqa1-wls-1 ~]$ sudo su - oracle
- Run the following service command:
curl -X POST http://hostname:port/dicloud/metadata/v1/seedData/refresh
Container Databases (CDB) are not supported with Classic DB
When the source database is CDB and the target database is a Classic DB, the Data Pump process fails.
For the sync task to be successful, ensure that both the source and target databases are Classic DBs.
The Object Storage Connection Requires a Prefix
Depending on the type of connection, the Object Storage Connection may require the prefix ‘Storage-’ to be added as part of the domain name when used for Data Integration Platform Cloud.
DIPC on Oracle Cloud Infrastructure (OCI)
Here are the known issues related to Data Integration Platform Cloud (DIPC) on Oracle Cloud Infrastructure (OCI).
Topics
Cannot access Enterprise Data Quality Case Management
Due to a bug in Enterprise Data Quality (EDQ), some users may be unable to access the EDQ Case Management application if their DIPC host URL is longer than 80 characters. There is no workaround at this time; please reach out to our Support team if required, and refer to Bug 29149580.
Autonomous Data Warehouse Cloud (ADWC) Task Executions Failing in DIPC on Oracle Cloud Infrastructure (OCI)
To run an ADWC Execution Task successfully in DIPC on Oracle Cloud Infrastructure (OCI), the Java version must be JDK 1.8.0_161 or higher on the machine where the Agent is installed.
If the Java version is below JDK 1.8.0_161, follow these steps:
- Download and install the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files.
- The download contains three files; copy them to the JDK_HOME\jre\lib\security folder.
- Open JDK_HOME\jre\lib\security\java.security and add security.provider.11=oracle.security.pki.OraclePKIProvider below the other entries. Also add security.provider.10=sun.security.mscapi.SunMSCAPI, if not already present.
- Restart the Agent.
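The java.security edit above can be sketched like this. It is a sketch only: JDK_HOME and its default value are assumptions, and the script works on a scratch copy when the real file is not present, so you can try it safely before running it on the agent host.

```shell
#!/bin/sh
# Sketch of the java.security edit. JDK_HOME is an assumption; set it to the
# JDK that the DIPC Agent uses.
JDK_HOME=${JDK_HOME:-/usr/java/jdk1.8.0_161}
SEC="$JDK_HOME/jre/lib/security/java.security"
# Work on a scratch copy when trying this sketch away from the agent host.
if [ ! -w "$SEC" ]; then
    JDK_HOME=$(mktemp -d)
    SEC="$JDK_HOME/jre/lib/security/java.security"
    mkdir -p "$JDK_HOME/jre/lib/security"
    touch "$SEC"
fi

# Append each provider entry only if it is missing.
for entry in 'security.provider.10=sun.security.mscapi.SunMSCAPI' \
             'security.provider.11=oracle.security.pki.OraclePKIProvider'; do
    if ! grep -q "$entry" "$SEC"; then
        echo "$entry" >> "$SEC"
    fi
done
```

Remember to restart the Agent after the file is updated; the provider list is only read at JVM startup.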
Agents
Here are the known issues related to agents.
Topics
- Data Entity Attribute and Data tab are displaying empty lists in Data Integration Platform Cloud
- Delivery to Big Data is not supported with an agent without a Big Data component
- Updating ODI Tasks Java Security Options for Oracle Autonomous Data Warehouse Cloud
- Agent is Unreachable But Error Message Indicates the Schema Cannot be Found
- Random Selection of Agent for Running ODI Execution Task
- Change ODI Agent Memory Value For ODI Remote Execution
Data Entity Attribute and Data tab are displaying empty lists in Data Integration Platform Cloud
Data profiling is not supported through the DIPC Agent. If the Connection is not accessible directly from DIPC, then DIPC cannot profile its data, and the Data Entity Attributes and Data tabs display empty lists.
Without proper networking or a VPN, remote agents can’t send messages to the Data Integration Platform Cloud server
When moving data between physical networks using a Data Integration Platform Cloud elevated task, the source and target network must be accessible to the Data Integration Platform Cloud remote agent. This can be accomplished through the proper networking rules or a VPN between the two networks.
See Set up an Agent.
Only the host agent works with the Data Preparation task
Data Preparation task does not work with a connection defined with a remote agent; you must use the host agent.
Delivery to Big Data is not supported with an agent without a Big Data component
You can't use the Big Data support that comes out of the box with the standalone agent in Data Integration Platform Cloud. Instead, when you download your agent from the Agents page of Data Integration Platform Cloud, ensure that you select the checkbox for the Big Data (OGG) component.
Oracle Data Integrator (ODI) Remote Agent Limitations
- Only Linux-based operating systems are supported.
- The ODI Remote Agent is not available for Windows-based platforms; therefore, Initial Load for Synchronize Data is not supported on Windows.
- Linux root privileges are required if the DIPC Agent is downloaded with the ODI plug-in, for running the configuration and start-up scripts.
- Python is required to configure the ODI remote agent.
- Only one ODI remote agent can be configured per host.
- Currently you cannot run dicloudConfigureAgent.sh using the recreate option for a DIPC agent with the ODI plug-in.
- DIPC remote agent configuration with ODI takes approximately 7–8 minutes. The RCU steps run for an additional 5–6 minutes.
- The ODI remote agent installs and configures a MySQL instance, which uses port 3307. If another instance of MySQL or any other process is running on port 3307, it blocks the MySQL configuration, and the ODI remote agent configuration will not succeed.
- Error messages during the domain creation step of ODI agent configuration, similar to the following, can be ignored:
"/scratch/ODIAGENT/dicloud/odihome/oracle_common/modules/internal/clients/oracle.fmwshare.pyjar.jar!/wlstScriptDir/lib/umsWlstUserPrefs.py" caused an error "Traceback (innermost last): File "<string>", line 1, in ? File "/scratch/ODIAGENT/dicloud/odihome/oracle_common/modules/internal/clients/oracle.fmwshare.pyjar.jar/wlstScriptDir/lib/umsWlstUserPrefs.py", line 20, in ? ImportError: no module named ucs " Error execing the Python script "/scratch/ODIAGENT/dicloud/odihome/oracle_common/modules/internal/clients/oracle.fmwshare.pyjar.jar!/wlstScriptDir/lib/MDS_handler.py" caused an error "Traceback (innermost last): File "<string>", line 1, in ? File "/scratch/ODIAGENT/dicloud/odihome/oracle_common/modules/internal/clients/oracle.fmwshare.pyjar.jar/wlstScriptDir/lib/MDS_handler.py", line 57, in ? ImportError: no module named deploy " This Exception occurred at Thu May 31 22:48:03 PDT 2018. java.util.MissingResourceException: Can't find bundle for base name jrf-config, locale en_US Error execing the Python script "/scratch/ODIAGENT/dicloud/odihome/oracle_common/modules/internal/clients/oracle.fmwshare.pyjar.jar!/wlstScriptDir/OracleJRF.py" caused an error "Traceback (innermost last): File "/scratch/ODIAGENT/dicloud/odihome/oracle_common/modules/internal/clients/oracle.fmwshare.pyjar.jar!/wlstScriptDir/OracleJRF.py", line 358, in ? NameError: jrf_getI18nMessage
Oracle Data Integrator (ODI) Execution Tasks Limitations
- The complete query is not shown when a job fails. The steps shown are also incorrect and not related to the job.
- ODI Remote Execution fails when using the File connection type with a remote agent if the file path contains a dot.
- Tasks using a remote agent might take some time to execute when multiple tasks are running simultaneously.
- The dicloudConfigureAgent.sh script must be run as root to install MySQL for ODI.
- You might get a Read Timed Out error during a Remote Agent Mediator service operation. The jobs will still be successful.
- You might get a Too Many Connections error while running an ODI scenario on a remote agent.
- For an ODI Task with an Oracle Connection to Oracle Autonomous Data Warehouse Cloud Connection scenario, select the File Schema wiring at the time of creating the Task, and do not modify or re-wire it later.
Updating ODI Tasks Java Security Options for Oracle Autonomous Data Warehouse Cloud
You need to update the DIPC Agent Java settings before creating Oracle Autonomous Data Warehouse Cloud Task Executions:
- Update JDK_HOME\jre\lib\security\java.security. For example, C:\Program Files\Java\jdk1.8.0_144\jre\lib\security\java.security
- Add the following entries, if not present:
  - security.provider.10=sun.security.mscapi.SunMSCAPI
  - security.provider.11=oracle.security.pki.OraclePKIProvider
Recreate Agent to Include New and Updated Components
Due to updates made to the Data Integration Platform Cloud Remote Agent in the November 2018 release, you must recreate the agent to include the new and updated components, which include Data Preparation and ODI (Initial Load). Data Preparation and Synchronize Data with Initial Load will not work otherwise.
Agent is Unreachable But Error Message Indicates the Schema Cannot be Found
If you receive an error that indicates the schema cannot be found when you attempt to run a task, check that your agent is up and running. In this case, the error received does not match the error that occurred.
Random Selection of Agent for Running ODI Execution Task
An Agent is selected randomly if multiple Agents are available for running an ODI Execution task. All the Connections must be accessible from the selected Agent for the execution to succeed.
Change ODI Agent Memory Value For ODI Remote Execution
The default memory value used when starting or running an ODI Agent for Remote ODI Execution is -Xms1024m -Xmx4096m. To change it:
- After configuring the DIPC Agent with the ODI plug-in, navigate to <dipcagent_root>/odihome/user_projects/domains/WLS_ODI/bin/
- Open setODIDomainEnv.sh
- Set USER_MEM_ARGS to the required memory values. For example, USER_MEM_ARGS="-Xms32m -Xmx1024m -XX:MaxPermSize=256m"
- Restart the DIPC Agent
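The memory change above can be sketched as a small script. This is a sketch under stated assumptions: DOMAIN_BIN stands in for <dipcagent_root>/odihome/user_projects/domains/WLS_ODI/bin, and the script works on a scratch copy of setODIDomainEnv.sh when run away from the agent host.

```shell
#!/bin/sh
# Sketch of overriding the ODI agent memory settings. DOMAIN_BIN is an
# assumption for <dipcagent_root>/odihome/user_projects/domains/WLS_ODI/bin.
DOMAIN_BIN=${DOMAIN_BIN:-/u01/dipcagent/odihome/user_projects/domains/WLS_ODI/bin}
ENV_FILE="$DOMAIN_BIN/setODIDomainEnv.sh"
# Work on a scratch copy when trying this sketch away from the agent host.
if [ ! -w "$ENV_FILE" ]; then
    DOMAIN_BIN=$(mktemp -d)
    ENV_FILE="$DOMAIN_BIN/setODIDomainEnv.sh"
    touch "$ENV_FILE"
fi

# Override the default heap settings (-Xms1024m -Xmx4096m) with smaller values.
echo 'USER_MEM_ARGS="-Xms32m -Xmx1024m -XX:MaxPermSize=256m"' >> "$ENV_FILE"
tail -1 "$ENV_FILE"
```

Because the file is sourced at agent startup, the DIPC Agent must be restarted before the new values take effect.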
Tasks
Here are the known issues related to tasks.
Topics
- Initial Load for Synchronize Data task requires user-managed instances
- Incorrect resultset for table names containing special characters
- Synchronize Data task displays an incorrect number of inserts with Oracle database as a source
- New Synchronize Data Tasks with Initial Load Require Download of New Agent
- Curly Braces in Table Names Throws an Error When Queried for Available Data Entities
ODI Execution can only import Patch Deployment Archives
You can only import Patch Deployment Archives with ODI Execution. Other types of deployment archives, such as Execution Deployment Archives are not supported.
Start Capture remains in Waiting state after Job restart
After a Synchronize Data or Replicate Data task is restarted, you may find that the Start Capture Job action remains in the Waiting state even though the GoldenGate components are up and running. You can verify on the VM that the GoldenGate components are running.
Initial Load for Synchronize Data task requires user-managed instances
Creating a Synchronize Data task that requires initial load is currently only supported in user-managed Data Integration Platform Cloud with VPN. See Synchronize Data.
Tables not replicated in Synchronize Data Task
Due to a bug in GoldenGate, tables with names that include asterisks (*) or question marks (?) are not replicated.
There is no workaround.
Incorrect resultset for table names containing special characters
The Task filter API doesn’t return the expected resultset when table names contain percent signs (%) or underscores (_), even when the characters are escaped.
There is no workaround.
Synchronize Data task displays an incorrect number of inserts with Oracle database as a source
When you perform a Synchronize Data task that includes an initial load, the job detail shows a number of inserts that you haven't performed. For example, you insert one record in the source, the target then gets synchronized, and that same record is inserted in the target. However, when you see the job detail, the total number of inserts displays a bigger number, such as four instead of one. The reason for this behavior is that the Synchronize Data task creates some tables and performs some inserts for the task itself. These inserts are also counted in the Initial Load job detail.
To see the true number of inserts done by you, you must create two users. For details see Create Users for Oracle Database Connections.
All Data Pump limitations apply to Synchronize Data Task
As the Data Integration Platform Cloud Synchronize Data Task utilizes Data Pump for initial load, any limitation of Data Pump applies to the Synchronize Data Task as well.
There is no workaround.
Insufficient tablespace quota leads to error in Data Pump
Without the appropriate tablespace quota granted to the target schema for a Synchronize Data Task, Data pump produces the error, ORA-31626: job does not exist.
.
To avoid this, we recommend that you grant the user a quota of at least 2GB.
For example, alter user <target_schema> quota 2G on USERS
.
In some cases, you'll need to grant unlimited tablespace.
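The quota grant above can be written out as a small SQL script, shown here as a sketch: target_schema is a placeholder for your actual target schema, and the script only prepares and prints the statements; applying them requires a DBA connection via sqlplus.

```shell
#!/bin/sh
# Sketch only: the quota statements described above, written to a script file.
# target_schema is a placeholder; apply the file with sqlplus as a DBA.
SQL_FILE=$(mktemp)
cat > "$SQL_FILE" <<'SQL'
ALTER USER target_schema QUOTA 2G ON USERS;
-- If the initial load still fails with ORA-31626, fall back to:
-- GRANT UNLIMITED TABLESPACE TO target_schema;
SQL
cat "$SQL_FILE"
# To apply: sqlplus / as sysdba @"$SQL_FILE"
```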
Initial Load Action State Transition
When you stop a job that includes Initial Load while it is in the Prepared state, the Initial Load step gets stuck in the Being Stopped state, while the overall job status remains Running.
Possible causes include:
- Canceling the job before its execution, which puts the job in the Being Stopped state.
- Sending a job to a Remote Agent that doesn't have the required capability (required components are missing, or it is the wrong type of agent).
- Downloading a Remote Agent with only the OGG components, but running a Synchronize Data Task with Initial Load (which also requires the ODI components).
As a workaround, stop the job only when its steps are visible (that is, when the job is actually running on the Remote Agent).
Reboot of DIPC Agent doesn't Automatically Start MySQL Server after Remote Execution of Synchronize Data Task
When you execute a replication-only Synchronize Data Task on a remote agent located behind a firewall and then reboot the agent host, the DIPC Agent doesn't restart the MySQL server automatically. To restart MySQL, manually run mysqld using these commands:
- setenv DIPC_AGENT_DIR=/scratch/dipcAgent
- $DIPC_AGENT_DIR/mysql_home/bin/mysqld --defaults-file=/etc/my.cnf --user=mysql --datadir=$DIPC_AGENT_DIR/mysql_home/data --basedir=$DIPC_AGENT_DIR/mysql_home --log-error=$DIPC_AGENT_DIR/mysql_home/log/mysql.err --pid-file=$DIPC_AGENT_DIR/mysql_home/mysql.pid --socket=$DIPC_AGENT_DIR/mysql_home/socket --port=3307
New Synchronize Data Tasks with Initial Load Require Download of New Agent
In order for new Synchronize Data tasks to run with Initial Load successfully, users must download and configure a new agent with Oracle 12c and ODI components.
Curly Braces in Table Names Throws an Error When Queried for Available Data Entities
When a table name containing a curly brace is entered into the search field for Available Data Entities, DIPC throws an error. The REST framework sees the curly brace as belonging to a REST URL template. Refrain from including curly braces in your table names to avoid this issue.
Monitor
Here are the known issues related to Monitor.
Topics
The Agent Health tile in the Monitor page of Data Integration Platform Cloud console displays correct information only for Synchronize Data and Replicate Data jobs
The Agent Health tile displays information sent from a Data Integration Platform Cloud plugin, which is only used for the Synchronize Data and Replicate Data jobs. All other jobs don't use this plugin; therefore, the Agent Health tile doesn't display a correct status for them. For now, use this tile only to review Synchronize Data and Replicate Data jobs.
Currently this tile doesn't distinguish the Replicate Data and Synchronize Data jobs and shows the sum of all information sent for both of these tasks. For details on what information this tile displays, see Monitor Jobs.
When jobs are restarted, the error and warning information associated with the restarted Actions is lost
There is no workaround for this issue.
The duration displayed for failed jobs is incorrect
Currently the duration of stopped and failed jobs displays the current time for that job's end time instead of when the job stopped due to failure. See Top Duration Report.
There is no workaround.
On-Premises
Here are the known issues related to on-premises.
Topics
- Data Integration Platform Cloud’s GoldenGate data folder isn't included in the restore operation
- Suspension of existing jobs on rollback of GoldenGate version 12.3 to 12.2
- E2E use cases fail for Oracle 11g Database in GoldenGate version 12.3
- Unable to create versions for Oracle Data Integrator (ODI) objects - Version feature is disabled
- Oracle Enterprise Data Quality (EDQ) landing area files are not shared in a cluster
Data Integration Platform Cloud’s GoldenGate data folder isn't included in the restore operation
When you perform a restore operation, GoldenGate data that’s stored in the jlsData folder is not backed up.
To work around this issue:
- Before you perform the restore operation, stop GoldenGate and move the GoldenGate data, located in the /u01/data/domains/jlsData folder, to a location under /u01/app/oracle:
#mv /u01/data/domains/jlsData /u01/app/oracle/suite/
- After the restore operation has completed, move the GoldenGate data back to its original location:
#mv /u01/app/oracle/suite/jlsData /u01/data/domains/
Suspension of existing jobs on rollback of GoldenGate version 12.3 to 12.2
Upon upgrading a Data Integration Platform Cloud instance, the existing GoldenGate version 12.2 is upgraded to 12.3, and a checkpoint file specific to 12.3 is created. When a rollback is applied, GoldenGate is rolled back from version 12.3 to 12.2, but the existing jobs from version 12.3 do not resume. This is because the system cannot access version 12.3 of the checkpoint file.
As a workaround, configure new jobs after the rollback.
E2E use cases fail for Oracle 11g Database in GoldenGate version 12.3
When you create a synchronization task using source and target connections created with an 11g database, then save and run the task, the resulting E2E use case fails in GoldenGate version 12.3.
As a workaround, install Patch 17030189 on the Oracle mining database for trail format release 12.2 or later.
Install the patch by running GGHOME/prvtlmpg.plb in SQL*Plus against any source user. Ensure that you run it as sysdba, and when prompted, enter the source user as the log mining user.
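The invocation above can be sketched as follows. This is a sketch only: GGHOME is an assumed variable for your GoldenGate installation directory, and the script merely prints the SQL*Plus command, since the patch script itself prompts interactively for the log mining user.

```shell
#!/bin/sh
# Sketch only: GGHOME is an assumption for the GoldenGate installation
# directory; prvtlmpg.plb must be run interactively in SQL*Plus as sysdba.
GGHOME=${GGHOME:-/u01/app/oracle/gghome}
CMD="sqlplus / as sysdba @$GGHOME/prvtlmpg.plb"
echo "$CMD"
# When prompted, enter the source user as the log mining user.
```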
Unable to create versions for Oracle Data Integrator (ODI) objects - Version feature is disabled
In ODI, a version is a backup copy of an object. It is checked in at a given time and may be restored later. Versions are displayed in the Version tab of the object window. To create a version for an ODI object, perform the following steps.
- Select the object for which you want to check in a version.
- Go to the property inspector and select the Version tab. In the Versions table, click the Create a new version button.
- Go to Previous Versions in the Versioning dialog to see the list of versions already checked in.
- A version number is automatically generated in the Version field. If required, you can change this version number.
- Go to the Description field and enter the details for this version.
- Click OK.
In this release, you’ll find the Version menu item is disabled.
There is no workaround.
Oracle Enterprise Data Quality (EDQ) landing area files are not shared in a cluster
EDQ is supplied with internal File Transfer Protocol (FTP) and Secure File Transfer Protocol (SFTP) servers. These servers enable remote access to the configuration file area and landing area files. However, in Data Integration Platform Cloud’s clustered environment, the landing area is not shared between the servers; as a result, any files copied to the EDQ landing area are not visible in other VMs in the cluster.
To work around this issue:
- Move the landing area to shared storage.
- Ensure that the operating system user that runs the EDQ application has read and write access to it.
- Update the director.properties file in the EDQ local home directory, and set landingarea=[path], where [path] is the location of the landing area on shared storage.
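The property update above can be sketched as a script. It is a hedged sketch: EDQ_LOCAL_HOME and SHARED_LANDING are assumed paths to substitute for your cluster, and the script falls back to scratch copies when the real files are not present.

```shell
#!/bin/sh
# Sketch of pointing EDQ at a shared landing area. EDQ_LOCAL_HOME and
# SHARED_LANDING are assumptions; substitute your own paths.
EDQ_LOCAL_HOME=${EDQ_LOCAL_HOME:-/u01/app/oracle/edq/localhome}
SHARED_LANDING=${SHARED_LANDING:-/shared/edq/landingarea}
PROPS="$EDQ_LOCAL_HOME/director.properties"
# Work on scratch copies when trying this sketch off the cluster.
if [ ! -w "$PROPS" ]; then
    EDQ_LOCAL_HOME=$(mktemp -d)
    PROPS="$EDQ_LOCAL_HOME/director.properties"
    touch "$PROPS"
    SHARED_LANDING=$(mktemp -d)
fi

# Rewrite the landingarea setting, replacing any existing value.
grep -v '^landingarea=' "$PROPS" > "$PROPS.tmp" || :
echo "landingarea=$SHARED_LANDING" >> "$PROPS.tmp"
mv "$PROPS.tmp" "$PROPS"
cat "$PROPS"
```

The OS user that runs EDQ still needs read and write access to the shared location, and the landing area files themselves must be moved there before the change takes effect.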
EDQ application not accessible due to deployment failure
If you’re unable to access EDQ:
- Log in to the WebLogic Admin console, and click Lock and Edit.
- Go to Environment, select Clusters, and then select your cluster.
- In the Coherence tab, select Local Storage Enabled, then save your changes and release the lock.
- Under Environment, Clusters, your cluster name, go to the Control tab. Click Servers, and then select Shutdown and restart.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
Access to Oracle Support
Oracle customers that have purchased support have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.
Copyright © 2017, 2019, Oracle and/or its affiliates. All rights reserved.
Documentation for Oracle Data Integration Platform Cloud Service that describes possible known issues for each new version release.
This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs. No other rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services, except as set forth in an applicable agreement between you and Oracle.