This section describes the Oracle Data Integrator tools: their usage, parameters, and examples.
OdiAnt

Use this command to execute an Ant buildfile. For more details and examples of Ant buildfiles, refer to the online documentation: http://jakarta.apache.org/ant/manual/index.html
Usage
OdiAnt -BUILDFILE=<file> -LOGFILE=<file> [-TARGET=<target>] [-D<property name>=<property value>]* [-PROJECTHELP] [-HELP] [-VERSION] [-QUIET] [-VERBOSE] [-DEBUG] [-EMACS] [-LOGGER=<classname>] [-LISTENER=<classname>] [-FIND=<file>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
-BUILDFILE=<file> | Yes | Ant buildfile. XML file containing the Ant commands. |
-LOGFILE=<file> | Yes | Use given file for logging. |
-TARGET=<target> | No | Target of the build process. |
-D<property name>=<property value> | No | List of properties with their values. |
-PROJECTHELP | No | Displays the help on the project. |
-HELP | No | Displays Ant help. |
-VERSION | No | Displays Ant version. |
-QUIET | No | Run in nonverbose mode. |
-VERBOSE | No | Run in verbose mode. |
-DEBUG | No | Prints debug information. |
-EMACS | No | Displays the logging information without adornments. |
-LOGGER=<classname> | No | Java class performing the logging. |
-LISTENER=<classname> | No | Adds a class instance as a listener. |
-FIND=<file> | No | Looks for the Ant buildfile from the root of the file system and uses it. |
Examples
Download the *.html files from the directory /download/public using FTP from ftp.mycompany.com to the directory C:\temp.
Step 1: Generate the Ant buildfile.
OdiOutFile -FILE=c:\temp\ant_cmd.xml
<?xml version="1.0"?>
<project name="myproject" default="ftp" basedir="/">
  <target name="ftp">
    <ftp action="get" remotedir="/download/public" server="ftp.mycompany.com" userid="anonymous" password="me@mycompany.com">
      <fileset dir="c:\temp">
        <include name="**/*.html"/>
      </fileset>
    </ftp>
  </target>
</project>
Step 2: Run the Ant buildfile.
OdiAnt -BUILDFILE=c:\temp\ant_cmd.xml -LOGFILE=c:\temp\ant_cmd.log
OdiApplyDeploymentArchive

Use this command to apply an Initial/Patch Deployment Archive (DA) onto an ODI repository.
Usage
OdiApplyDeploymentArchive -ARCHIVE_FILE_NAME=<archive_file_name> [-APPLY_WITHOUT_CIPHER_DATA=<yes|no>] [-EXPORT_KEY=<Export_Key>] [-CREATE_ROLLBACK_ARCHIVE=<yes|no>] [-ROLLBACK_FILE_NAME=<rollback_file_name>] [-INCLUDE_PHYSICAL_TOPOLOGY=<yes|no>]
Parameters
Parameter | Mandatory | Description |
---|---|---|
-ARCHIVE_FILE_NAME | Yes | Full path/complete name of the deployment archive zip file. |
-APPLY_WITHOUT_CIPHER_DATA | No (Footnote 1) | If set to Yes, any cipher data present in the deployment archive will be made null. If set to No, the export key will be used to migrate the cipher data. The default value is No. |
-EXPORT_KEY | No | Specifies a cryptographic private key used to migrate cipher data in the deployment archive objects. Note: The EXPORT_KEY parameter should be an encrypted string. For information on the encoding process, see Encoding a Password in Administering Oracle Data Integrator. |
-CREATE_ROLLBACK_ARCHIVE | No (Footnote 2) | Specifies if a rollback deployment archive must be created. If set to Yes, a rollback deployment archive will be created before applying the patch. If set to No, the rollback deployment archive will not be created. Note: This option is applicable only to the patch deployment archive. |
-ROLLBACK_FILE_NAME | No | Complete file name of the rollback deployment archive. |
-INCLUDE_PHYSICAL_TOPOLOGY | No | Specifies if the Physical Topology Objects in the deployment archive should be applied onto the target repository. The default value is Yes. |
Footnote 1
If the APPLY_WITHOUT_CIPHER_DATA parameter is set to No, the EXPORT_KEY parameter must be specified.
Footnote 2
If the CREATE_ROLLBACK_ARCHIVE parameter is set to Yes, the ROLLBACK_FILE_NAME parameter must be specified.
Examples
Patch a repository with a patch deployment archive, using an export key, and create a rollback deployment archive.
OdiApplyDeploymentArchive -ARCHIVE_FILE_NAME=archive_file_name -APPLY_WITHOUT_CIPHER_DATA=no -EXPORT_KEY=Export_Key -CREATE_ROLLBACK_ARCHIVE=yes -ROLLBACK_FILE_NAME=rollback_file_name -INCLUDE_PHYSICAL_TOPOLOGY=yes
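An initial deployment archive applied without migrating cipher data needs no export key (see Footnote 1); the following is a minimal sketch, with a placeholder archive path:
OdiApplyDeploymentArchive -ARCHIVE_FILE_NAME=/temp/initial_da.zip -APPLY_WITHOUT_CIPHER_DATA=yes -INCLUDE_PHYSICAL_TOPOLOGY=yes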
OdiBeep

Use this command to play a default beep or sound file on the machine hosting the agent.
The following file formats are supported by default:
WAV
AIF
AU
Note:
To play other file formats, you must add the appropriate JavaSound Service Provider Interface (JavaSound SPI) to the application classpath.
Usage
OdiBeep [-FILE=<sound_file>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
-FILE | No | Path and file name of the sound file to be played. If not specified, the default beep sound for the machine is used. |
Examples
OdiBeep -FILE=c:\wav\alert.wav
OdiCreateDeploymentArchive

Use this command to create a Deployment Archive (DA) from the ODI repository or from a VCS label/tag.
Usage
In SVN
OdiCreateDeploymentArchive -ARCHIVE_NAME=<archive_name> -ARCHIVE_FILE_NAME=<archive_file_name> [-SOURCE_TYPE=VCS|ODI] [-ARCHIVE_TYPE=INITIAL|PATCH] [-CREATE_WITHOUT_CIPHER_DATA=<yes|no>] [-EXPORT_KEY=<Export_Key>] [-VCS_LABEL=<vcs_label>] [-VCS_TYPE=<vcs_type>] [-VCS_AUTH_TYPE=<vcs_auth_type>] [-VCS_URL=<vcs_url>] [-VCS_USER=<vcs_user>] [-VCS_PASS=<vcs_pass>] [-VCS_PROXY_HOST=<vcs_proxy_host>] [-VCS_PROXY_PORT=<vcs_proxy_port>] [-VCS_PROXY_USER=<vcs_proxy_user>] [-VCS_PROXY_PASS=<vcs_proxy_pass>] [-INCLUDE_PHYSICAL_TOPOLOGY=<yes|no>]
In Git
OdiCreateDeploymentArchive -ARCHIVE_NAME=<archive_name> -ARCHIVE_FILE_NAME=<archive_file_name> [-SOURCE_TYPE=VCS|ODI] [-ARCHIVE_TYPE=INITIAL|PATCH] [-CREATE_WITHOUT_CIPHER_DATA=<yes|no>] [-EXPORT_KEY=<Export_Key>] [-DESCRIPTION=<Description>] [-VCS_TAG=<vcs_tag>] [-VCS_TYPE=<vcs_type>] [-VCS_AUTH_TYPE=<vcs_auth_type>] [-VCS_URL=<vcs_url>] [-VCS_USER=<vcs_user>] [-VCS_PASS=<vcs_pass>] [-VCS_PROXY_HOST=<vcs_proxy_host>] [-VCS_PROXY_PORT=<vcs_proxy_port>] [-VCS_PROXY_USER=<vcs_proxy_user>] [-VCS_PROXY_PASS=<vcs_proxy_pass>] [-VCS_SSH_PRIVATE_KEY_PATH=<vcs_ssh_private_key_path>] [-VCS_SSH_PASS_PHRASE=<vcs_ssh_pass_phrase>] [-VCS_SSH_PORT=<vcs_ssh_port>] [-VCS_SSL_CERT_PATH=<vcs_ssl_cert_path>] [-VCS_SSL_PASS_PHRASE=<vcs_ssl_pass_phrase>] [-INCLUDE_PHYSICAL_TOPOLOGY=<yes|no>]
Parameters
Parameter | Mandatory | Description |
---|---|---|
-ARCHIVE_NAME | Yes | Name of the deployment archive. |
-ARCHIVE_FILE_NAME | Yes | Full path/complete name of the deployment archive zip file. |
-SOURCE_TYPE | No (Footnote 3) | Source from which the deployment archive needs to be created. The source can be VCS or ODI. |
-VCS_TAG | No | VCS tag name. |
-ARCHIVE_TYPE | No | Type of deployment archive. Can be INITIAL or PATCH. |
-CREATE_WITHOUT_CIPHER_DATA | No (Footnote 4) | If set to Yes, any cipher data present in the deployment archive will be made null. If set to No, the export key will be used to migrate the cipher data. The default value is No. |
-EXPORT_KEY | No | Specifies a cryptographic private key used to encrypt cipher data in the deployment archive objects. |
-VCS_LABEL | No | VCS label name. |
-DESCRIPTION | No | Description for this deployment archive. |
-VCS_TYPE | No | Type of VCS. Can be SVN or Git. |
-VCS_AUTH_TYPE | No (Footnote 5) | Authentication type of the VCS used, for example GITBASIC, SVNBASIC, HTTPPROXY, GIT_SSH, or GIT_SSL (see Footnote 5). |
-VCS_URL | No | VCS URL. |
-VCS_USER | No | VCS user name. |
-VCS_PASS | No | VCS password. |
-VCS_PROXY_HOST | No | VCS proxy host. |
-VCS_PROXY_PORT | No | VCS proxy port. |
-VCS_PROXY_USER | No | VCS proxy user. |
-VCS_PROXY_PASS | No | VCS proxy password. |
-VCS_SSH_PRIVATE_KEY_PATH | No | VCS SSH private key file path, in case of private key authentication. |
-VCS_SSH_PASS_PHRASE | No | VCS SSH pass phrase, if provided during private key generation. |
-VCS_SSH_PORT | No | VCS SSH port. |
-VCS_SSL_CERT_PATH | No | VCS HTTP SSL certificate path. |
-VCS_SSL_PASS_PHRASE | No | VCS SSL pass phrase. |
-INCLUDE_PHYSICAL_TOPOLOGY | No | Specifies if the Physical Topology Objects in the repository should be included in the deployment archive. The default value is Yes. |
Footnote 3
If the SOURCE_TYPE parameter is specified as VCS, the VCS_TAG/VCS_LABEL parameter must be specified.
Footnote 4
If the CREATE_WITHOUT_CIPHER_DATA parameter is set to No, the EXPORT_KEY parameter must be specified.
Footnote 5
If the VCS_AUTH_TYPE parameter is specified as GITBASIC or SVNBASIC, the VCS_URL, VCS_USER, and VCS_PASS parameters must be specified.
If the VCS_AUTH_TYPE parameter is specified as SVNBASIC, the VCS_SSH_PORT parameter must be specified.
If the VCS_AUTH_TYPE parameter is specified as HTTPPROXY, the VCS_PROXY_HOST, VCS_PROXY_PORT, VCS_PROXY_USER, and VCS_PROXY_PASS parameters must be specified.
If the VCS_AUTH_TYPE parameter is specified as GIT_SSH, the VCS_SSH_PRIVATE_KEY_PATH and VCS_SSH_PASS_PHRASE parameters must be specified.
If the VCS_AUTH_TYPE parameter is specified as GIT_SSL, the VCS_SSL_CERT_PATH and VCS_SSL_PASS_PHRASE parameters must be specified.
Examples
Create a patch deployment archive from SVN label with cipher.
OdiCreateDeploymentArchive -ARCHIVE_NAME=archive_name -ARCHIVE_FILE_NAME=archive_file_name -SOURCE_TYPE=VCS -ARCHIVE_TYPE=PATCH -CREATE_WITHOUT_CIPHER_DATA=no -EXPORT_KEY=Export_Key -VCS_LABEL=vcs_label -VCS_TYPE=SVN -VCS_AUTH_TYPE=BASIC -VCS_URL=vcs_url -VCS_USER=vcs_user -VCS_PASS=vcs_pass -INCLUDE_PHYSICAL_TOPOLOGY=yes
Create an initial deployment archive from a Git tag.
OdiCreateDeploymentArchive -ARCHIVE_NAME=<archive_name> -ARCHIVE_TYPE=INITIAL -SOURCE_TYPE=VCS -VCS_TAG=<vcs_tag> -ARCHIVE_FILE_NAME=<archive_file_name> -CREATE_WITHOUT_CIPHER_DATA=<yes|no> -EXPORT_KEY=<Export_Key> -INCLUDE_PHYSICAL_TOPOLOGY=<yes|no> -VCS_URL=<vcs_url> -VCS_USER=<vcs_user> -VCS_PASS=<vcs_pass>
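A deployment archive can also be cut directly from the contents of the connected repository rather than from a VCS label or tag; the following is a minimal sketch, with placeholder names, that nulls cipher data so no export key is required (see Footnote 4):
OdiCreateDeploymentArchive -ARCHIVE_NAME=dev_initial -ARCHIVE_FILE_NAME=/temp/dev_initial.zip -SOURCE_TYPE=ODI -ARCHIVE_TYPE=INITIAL -CREATE_WITHOUT_CIPHER_DATA=yes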
OdiDeleteScen

Use this command to delete a given scenario version.
Usage
OdiDeleteScen -SCEN_NAME=<name> -SCEN_VERSION=<version>
Parameters
Parameters | Mandatory | Description |
---|---|---|
-SCEN_NAME | Yes | Name of the scenario to delete. |
-SCEN_VERSION | Yes | Version of the scenario to delete. |
Examples
Delete the DWH scenario in version 001.
OdiDeleteScen -SCEN_NAME=DWH -SCEN_VERSION=001
OdiEnterpriseDataQuality

Use this command to invoke an Oracle Enterprise Data Quality (Datanomic) job.
Note:
The OdiEnterpriseDataQuality tool supports Oracle Enterprise Data Quality version 8.1.6 and later.
Usage
OdiEnterpriseDataQuality "-JOB_NAME=<EDQ job name>" "-PROJECT_NAME=<EDQ project name>" "-CONTEXT=<context>" "-LSCHEMA=<logical_schema>" "-SYNCHRONOUS=<yes|no>"
Parameters
Parameters | Mandatory | Description |
---|---|---|
-JOB_NAME | Yes | Name of the Enterprise Data Quality job. |
-PROJECT_NAME | Yes | Name of the Enterprise Data Quality project. |
-CONTEXT | Yes | ODI context in which the job is executed. |
-LSCHEMA | Yes | Logical schema of the Enterprise Data Quality server. |
-SYNCHRONOUS | No | If set to Yes (default), the tool waits for the quality process to complete before returning, with a possible error code. If set to No, the tool ends immediately with success and does not wait for the quality process to complete. |
Examples
Execute the Enterprise Data Quality job CLEANSE_CUSTOMERS located in the project CUSTOMERS.
OdiEnterpriseDataQuality "-JOB_NAME=CLEANSE_CUSTOMERS" "-PROJECT_NAME=CUSTOMERS" "-CONTEXT=Development" "-LSCHEMA=EDQ Logical Schema" "-SYNCHRONOUS=yes"
OdiExportAllScen

Use this command to export a group of scenarios from the connected repository.
The export files are named SCEN_<scenario name><scenario version>.xml. This command reproduces the behavior of the export feature available in Designer Navigator and Operator Navigator.
Usage
OdiExportAllScen -TODIR=<directory> [-FORCE_OVERWRITE=<yes|no>] [-FROM_PROJECT=<project_id>] [-FROM_FOLDER=<folder_id>] [-FROM_PACKAGE=<package_id>] [-RECURSIVE_EXPORT=<yes|no>] [-XML_VERSION=<1.0>] [-XML_CHARSET=<charset>] [-JAVA_CHARSET=<charset>] [-EXPORT_KEY=<key>] [-EXPORT_MAPPING=<yes|no>] [-EXPORT_PACK=<yes|no>] [-EXPORT_POP=<yes|no>] [-EXPORT_TRT=<yes|no>] [-EXPORT_VAR=<yes|no>] [-EXPORT_WITHOUT_CIPHER_DATA=<yes|no>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
-TODIR | Yes | Directory into which the export files are created. |
-FORCE_OVERWRITE | No | If set to Yes, existing export files are overwritten without warning. The default value is No. |
-FROM_PROJECT | No | ID of the project containing the scenarios to export. This value is the Global ID that displays in the Version tab of the project window in Studio. If this parameter is not set, scenarios from all projects are taken into account for the export. |
-FROM_FOLDER | No | ID of the folder containing the scenarios to export. This value is the Global ID that displays in the Version tab of the folder window in Studio. If this parameter is not set, scenarios from all folders are taken into account for the export. |
-FROM_PACKAGE | No | ID of the source package of the scenarios to export. This value is the Global ID that displays in the Version tab of the package window in Studio. If this parameter is not set, scenarios from all components are taken into account for the export. |
-RECURSIVE_EXPORT | No | If set to Yes (default), all child objects (schedules) are exported with the scenarios. |
-XML_VERSION | No | Sets the XML version shown in the XML header. The default value is 1.0. |
-XML_CHARSET | No | Encoding specified in the XML export file, in the tag <?xml version="1.0" encoding="ISO-8859-1"?>. The default value is ISO-8859-1. |
-JAVA_CHARSET | No | Target file encoding. The default value is ISO8859_1. |
-EXPORT_KEY | No (Footnote 6) | Specifies a cryptographic private key used to encrypt sensitive cipher data. You must specify this key again when importing the exported object in order to import the cipher data. |
-EXPORT_MAPPING | No | Indicates if the mapping scenarios should be exported. The default value is No. |
-EXPORT_PACK | No | Indicates if the scenarios attached to packages should be exported. The default value is Yes. |
-EXPORT_POP | No | Indicates if the scenarios attached to mappings should be exported. The default value is No. |
-EXPORT_TRT | No | Indicates if the scenarios attached to procedures should be exported. The default value is No. |
-EXPORT_VAR | No | Indicates if the scenarios attached to variables should be exported. The default value is No. |
-EXPORT_WITHOUT_CIPHER_DATA | No (Footnote 7) | When set to Yes, specifies that sensitive (cipher) values should be set to null in the object when it is exported. When set to No or when this parameter is omitted, you must include the -EXPORT_KEY parameter with a valid key value. |
Footnote 6
If the -EXPORT_KEY parameter is not specified, the -EXPORT_WITHOUT_CIPHER_DATA parameter must be specified, and must be set to Yes.
Footnote 7
If -EXPORT_WITHOUT_CIPHER_DATA is not specified, or if it is specified and set to No, you must specify the -EXPORT_KEY parameter with a valid key value.
Examples
Export all scenarios from the DW01 project of Global ID 2edb524d-eb17-42ea-8aff-399ea9b13bf3 into the /temp/ directory, with all dependent objects, using the key examplekey1 to encrypt sensitive data.
OdiExportAllScen -FROM_PROJECT=2edb524d-eb17-42ea-8aff-399ea9b13bf3 -TODIR=/temp/ -RECURSIVE_EXPORT=yes -EXPORT_KEY=examplekey1
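Per Footnote 6, the export key can be omitted only when cipher data is stripped from the export; the following is a minimal sketch of such an export that also includes procedure scenarios:
OdiExportAllScen -TODIR=/temp/ -EXPORT_WITHOUT_CIPHER_DATA=yes -EXPORT_TRT=yes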
OdiExportEnvironmentInformation

Use this command to export the details of the technical environment into a comma-separated (.csv) file in the directory of your choice. This information is required for maintenance or support purposes.
Usage
OdiExportEnvironmentInformation -TODIR=<toDir> -FILE_NAME=<FileName> [-CHARSET=<charset>] [-SNP_INFO_REC_CODE=<row_code>] [-MASTER_REC_CODE=<row_code>] [-WORK_REC_CODE=<row_code>] [-AGENT_REC_CODE=<row_code>] [-TECHNO_REC_CODE=<row_code>] [-RECORD_SEPARATOR_HEXA=<rec_sep>] [-FIELD_SEPARATOR_HEXA=<field_sep>] [-TEXT_SEPARATOR=<text_sep>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
-TODIR | Yes | Target directory for the export. |
-FILE_NAME | Yes | Name of the CSV export file. The default value is snps_tech_inf.csv. |
-CHARSET | No | Character set of the export file. |
-SNP_INFO_REC_CODE | No | Code used to identify rows that describe the current version of Oracle Data Integrator and the current user. This code is used in the first field of the record. The default value is SUNOPSIS. |
-MASTER_REC_CODE | No | Code for rows containing information about the master repository. The default value is MASTER. |
-WORK_REC_CODE | No | Code for rows containing information about the work repository. The default value is WORK. |
-AGENT_REC_CODE | No | Code for rows containing information about the various agents that are running. The default value is AGENT. |
-TECHNO_REC_CODE | No | Code for rows containing information about the data servers, their versions, and so on. The default value is TECHNO. |
-RECORD_SEPARATOR_HEXA | No | One or several characters in hexadecimal code separating lines (or records) in the file. The default value is 0D0A. |
-FIELD_SEPARATOR_HEXA | No | One or several characters in hexadecimal code separating the fields in a record. The default value is 2C. |
-TEXT_SEPARATOR | No | Character in hexadecimal code delimiting a text field. The default value is 22. |
Examples
Export the details of the technical environment into the /temp/snps_tech_inf.csv export file.
OdiExportEnvironmentInformation "-TODIR=/temp/" "-FILE_NAME=snps_tech_inf.csv" "-CHARSET=ISO8859_1" "-SNP_INFO_REC_CODE=SUNOPSIS" "-MASTER_REC_CODE=MASTER" "-WORK_REC_CODE=WORK" "-AGENT_REC_CODE=AGENT" "-TECHNO_REC_CODE=TECHNO" "-RECORD_SEPARATOR_HEXA=0D0A" "-FIELD_SEPARATOR_HEXA=2C" "-TEXT_SEPARATOR_HEXA=22"
OdiExportLog

Use this command to export the execution log into a ZIP export file.
Usage
OdiExportLog -TODIR=<toDir> [-EXPORT_TYPE=<logsToExport>] [-EXPORT_KEY=<key>] [-ZIPFILE_NAME=<zipFileName>] [-XML_CHARSET=<charset>] [-JAVA_CHARSET=<charset>] [-FROMDATE=<from_date>] [-TODATE=<to_date>] [-AGENT=<agent>] [-CONTEXT=<context>] [-STATUS=<status>] [-USER_FILTER=<user>] [-NAME=<sessionOrLoadPlanName>] [-EXPORT_WITHOUT_CIPHER_DATA=<yes|no>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
-EXPORT_TYPE | No | Type of log to export: LOAD_PLAN_RUN (exports the log of Load Plan runs), SESSION (exports the log of sessions), or ALL (exports both). |
-EXPORT_KEY | No (Footnote 8) | Specifies a cryptographic private key used to encrypt sensitive cipher data. You must specify this key again when importing the exported object in order to import the cipher data. |
-TODIR | Yes | Target directory for the export. |
-ZIPFILE_NAME | No | Name of the compressed file. |
-XML_CHARSET | No | Encoding specified in the export file, in the tag <?xml version="1.0" encoding="ISO-8859-1"?>. The default value is ISO-8859-1. |
-JAVA_CHARSET | No | Result file Java character encoding. The default value is ISO8859_1. |
-FROMDATE | No | Beginning date for the export, using the format yyyy/MM/dd hh:mm:ss. All sessions from this date are exported. |
-TODATE | No | End date for the export, using the format yyyy/MM/dd hh:mm:ss. All sessions to this date are exported. |
-AGENT | No | Exports only sessions executed by the agent <agent>. |
-CONTEXT | No | Exports only sessions executed in the context code <context>. |
-STATUS | No | Exports only sessions in the specified state. Possible states are Done, Error, Queued, Running, Waiting, and Warning. |
-USER_FILTER | No | Exports only sessions launched by the user <user>. |
-NAME | No | Name of the session or Load Plan to be exported. |
-EXPORT_WITHOUT_CIPHER_DATA | No (Footnote 9) | When set to Yes, specifies that sensitive (cipher) values should be set to null in the object when it is exported. When set to No or when this parameter is omitted, you must include the -EXPORT_KEY parameter with a valid key value. |
Footnote 8
If the -EXPORT_KEY parameter is not specified, the -EXPORT_WITHOUT_CIPHER_DATA parameter must be specified, and must be set to Yes.
Footnote 9
If -EXPORT_WITHOUT_CIPHER_DATA is not specified, or if it is specified and set to No, you must specify the -EXPORT_KEY parameter with a valid key value.
Examples
Export and compress the log into the /temp/log2.zip export file.
OdiExportLog "-EXPORT_TYPE=ALL" "-EXPORT_KEY=examplekey1" "-TODIR=/temp/" "-ZIPFILE_NAME=log2.zip" "-XML_CHARSET=ISO-8859-1" "-JAVA_CHARSET=ISO8859_1"
OdiExportMaster

Use this command to export the master repository to a directory or ZIP file. The versions and/or solutions stored in the master repository are optionally exported.
Usage
OdiExportMaster -TODIR=<toDir> [-ZIPFILE_NAME=<zipFileName>] [-EXPORT_KEY=<key>] [-EXPORT_SOLUTIONS=<yes|no>] [-EXPORT_VERSIONS=<yes|no>] [-XML_CHARSET=<charset>] [-JAVA_CHARSET=<charset>] [-EXPORT_WITHOUT_CIPHER_DATA=<yes|no>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
-TODIR | Yes | Target directory for the export. |
-ZIPFILE_NAME | No | Name of the compressed file. |
-EXPORT_KEY | No (Footnote 10) | Specifies a cryptographic private key used to encrypt sensitive cipher data. You must specify this key again when importing the exported object in order to import the cipher data. |
-EXPORT_SOLUTIONS | No | Exports all solutions that are stored in the repository. The default value is No. |
-EXPORT_VERSIONS | No | Exports all versions of objects that are stored in the repository. The default value is No. |
-XML_CHARSET | No | Encoding specified in the export file, in the tag <?xml version="1.0" encoding="ISO-8859-1"?>. The default value is ISO-8859-1. |
-JAVA_CHARSET | No | Result file Java character encoding. The default value is ISO8859_1. |
-EXPORT_WITHOUT_CIPHER_DATA | No (Footnote 11) | When set to Yes, specifies that sensitive (cipher) values should be set to null in the object when it is exported. When set to No or when this parameter is omitted, you must include the -EXPORT_KEY parameter with a valid key value. |
Footnote 10
If the -EXPORT_KEY parameter is not specified, the -EXPORT_WITHOUT_CIPHER_DATA parameter must be specified, and must be set to Yes.
Footnote 11
If -EXPORT_WITHOUT_CIPHER_DATA is not specified, or if it is specified and set to No, you must specify the -EXPORT_KEY parameter with a valid key value.
Examples
Export and compress the master repository into the export.zip file located in the /temp/ directory.
OdiExportMaster "-TODIR=/temp/" "-ZIPFILE_NAME=export.zip" "-EXPORT_KEY=examplekey1" "-XML_CHARSET=ISO-8859-1" "-JAVA_CHARSET=ISO8859_1" "-EXPORT_VERSIONS=YES"
OdiExportObject

Use this command to export an object from the current repository. This command reproduces the behavior of the export feature available in the user interface.
Usage
OdiExportObject -CLASS_NAME=<class_name> -I_OBJECT=<object_id> [-EXPORT_KEY=<key>] [-EXPORT_DIR=<directory>] [-EXPORT_NAME=<export_name>|-FILE_NAME=<file_name>] [-FORCE_OVERWRITE=<yes|no>] [-RECURSIVE_EXPORT=<yes|no>] [-XML_VERSION=<1.0>] [-XML_CHARSET=<charset>] [-JAVA_CHARSET=<charset>] [-EXPORT_WITHOUT_CIPHER_DATA=<yes|no>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
-CLASS_NAME | Yes | Class of the object to export (see the following list of classes). |
-I_OBJECT | Yes | Object identifier. This value is the Global ID that displays in the Version tab of the object edit window. |
-EXPORT_KEY | No (Footnote 12) | Specifies a cryptographic private key used to encrypt sensitive cipher data. You must specify this key again when importing the exported object in order to import the cipher data. |
-FILE_NAME | No | Export file name, given as an absolute path or a relative path. This file name may or may not comply with the Oracle Data Integrator standard export file prefix and suffix. To comply with these standards, use the -EXPORT_DIR and -EXPORT_NAME parameters instead. |
-EXPORT_DIR | No | Directory where the object will be exported. The export file created in this directory is named based on the -EXPORT_NAME parameter. |
-EXPORT_NAME | No | Export name. Use this parameter to generate an export file named after the export name and complying with the Oracle Data Integrator standard export file prefix and suffix. |
-FORCE_OVERWRITE | No | If set to Yes, an existing export file with the same name is forcibly overwritten. The default value is No. |
-RECURSIVE_EXPORT | No | If set to Yes (default), all child objects are exported with the current object. For example, if exporting a project, all folders, KMs, and so on in this project are exported into the project export file. |
-XML_VERSION | No | Sets the XML version that appears in the XML header. The default value is 1.0. |
-XML_CHARSET | No | Encoding specified in the XML file, in the tag <?xml version="1.0" encoding="ISO-8859-1"?>. The default value is ISO-8859-1. |
-JAVA_CHARSET | No | Target file encoding. The default value is ISO8859_1. |
-EXPORT_WITHOUT_CIPHER_DATA | No (Footnote 13) | When set to Yes, specifies that sensitive (cipher) values should be set to null in the object when it is exported. When set to No or when this parameter is omitted, you must include the -EXPORT_KEY parameter with a valid key value. |
Footnote 12
If the -EXPORT_KEY parameter is not specified, the -EXPORT_WITHOUT_CIPHER_DATA parameter must be specified, and must be set to Yes.
Footnote 13
If -EXPORT_WITHOUT_CIPHER_DATA is not specified, or if it is specified and set to No, you must specify the -EXPORT_KEY parameter with a valid key value.
List of Classes
Object | Class Name |
---|---|
Column | SnpCol |
Condition/Filter | SnpCond |
Context | SnpContext |
Data Server | SnpConnect |
Datastore | SnpTable |
Folder | SnpFolder |
Interface | SnpPop |
Language | SnpLang |
Loadplan | SnpLoadPlan |
Mapping | SnpMapping |
Model | SnpModel |
Package | SnpPackage |
Physical Schema | SnpPschema |
Procedure or KM | SnpTrt |
Procedure or KM Option | SnpUserExit |
Project | SnpProject |
Reference | SnpJoin |
Reusable Mapping | SnpMapping |
Scenario | SnpScen |
Sequence | SnpSequence |
Step | SnpStep |
Sub-Model | SnpSubModel |
Technology | SnpTechno |
User Functions | SnpUfunc |
Variable | SnpVar |
Version of an Object | SnpVer |
Examples
Export the DW01 project of Global ID 2edb524d-eb17-42ea-8aff-399ea9b13bf3 into the /temp/dw1.xml export file, with all dependent objects.
OdiExportObject -CLASS_NAME=SnpProject -I_OBJECT=2edb524d-eb17-42ea-8aff-399ea9b13bf3 -EXPORT_KEY=examplekey1 -FILE_NAME=/temp/dw1.xml -FORCE_OVERWRITE=yes -RECURSIVE_EXPORT=yes
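To let the tool build the standard export file name instead of passing -FILE_NAME, combine -EXPORT_DIR and -EXPORT_NAME; the following is a minimal sketch, with a placeholder Global ID and export name, that strips cipher data so no export key is needed:
OdiExportObject -CLASS_NAME=SnpModel -I_OBJECT=<model_global_id> -EXPORT_DIR=/temp -EXPORT_NAME=CUSTOMER_MODEL -EXPORT_WITHOUT_CIPHER_DATA=yes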
OdiExportScen

Use this command to export a scenario from the current work repository.
Usage
OdiExportScen -SCEN_NAME=<scenario_name> -SCEN_VERSION=<scenario_version> [-EXPORT_KEY=<key>] [-EXPORT_DIR=<directory>] [-FILE_NAME=<file_name>|-EXPORT_NAME=<export_name>] [-FORCE_OVERWRITE=<yes|no>] [-RECURSIVE_EXPORT=<yes|no>] [-XML_VERSION=<1.0>] [-XML_CHARSET=<encoding>] [-JAVA_CHARSET=<encoding>] [-EXPORT_WITHOUT_CIPHER_DATA=<yes|no>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
-SCEN_NAME | Yes | Name of the scenario to be exported. |
-SCEN_VERSION | Yes | Version of the scenario to be exported. |
-EXPORT_KEY | No (Footnote 14) | Specifies a cryptographic private key used to encrypt sensitive cipher data. You must specify this key again when importing the exported object in order to import the cipher data. |
-FILE_NAME | Yes | Export file name, given as an absolute path or a relative path. This file name may or may not comply with the Oracle Data Integrator standard export file prefix and suffix for scenarios. To comply with these standards, use the -EXPORT_DIR and -EXPORT_NAME parameters instead. |
-EXPORT_DIR | No | Directory where the scenario will be exported. The export file created in this directory is named based on the -EXPORT_NAME parameter. |
-EXPORT_NAME | No | Export name. Use this parameter to generate an export file named after the export name and complying with the Oracle Data Integrator standard export file prefix and suffix. |
-FORCE_OVERWRITE | No | If set to Yes, overwrites the export file if it already exists. The default value is No. |
-RECURSIVE_EXPORT | No | Forces the export of the objects under the scenario. The default value is Yes. |
-XML_VERSION | No | Version specified in the generated XML file, in the XML header. The default value is 1.0. |
-XML_CHARSET | No | Encoding specified in the XML file, in the tag <?xml version="1.0" encoding="ISO-8859-1"?>. The default value is ISO-8859-1. |
-JAVA_CHARSET | No | Target file encoding. The default value is ISO8859_1. |
-EXPORT_WITHOUT_CIPHER_DATA | No (Footnote 15) | When set to Yes, specifies that sensitive (cipher) values should be set to null in the object when it is exported. When set to No or when this parameter is omitted, you must include the -EXPORT_KEY parameter with a valid key value. |
Footnote 14
If the -EXPORT_KEY parameter is not specified, the -EXPORT_WITHOUT_CIPHER_DATA parameter must be specified, and must be set to Yes.
Footnote 15
If -EXPORT_WITHOUT_CIPHER_DATA is not specified, or if it is specified and set to No, you must specify the -EXPORT_KEY parameter with a valid key value.
Examples
Export the LOAD_DWH scenario in version 1 into the /temp/load_dwh.xml export file, with all dependent objects.
OdiExportScen -SCEN_NAME=LOAD_DWH -SCEN_VERSION=1 -EXPORT_KEY=examplekey1 -FILE_NAME=/temp/load_dwh.xml -RECURSIVE_EXPORT=yes
OdiExportWork

Use this command to export the work repository to a directory or ZIP export file.
Usage
OdiExportWork -TODIR=<directory> [-ZIPFILE_NAME=<zipFileName>] [-EXPORT_KEY=<key>] [-XML_CHARSET=<charset>] [-JAVA_CHARSET=<charset>] [-EXPORT_WITHOUT_CIPHER_DATA=<yes|no>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
-TODIR | Yes | Target directory for the export. |
-ZIPFILE_NAME | No | Name of the compressed file. |
-EXPORT_KEY | No (Footnote 16) | Specifies a cryptographic private key used to encrypt sensitive cipher data. You must specify this key again when importing the exported object in order to import the cipher data. |
-XML_CHARSET | No | Encoding specified in the export file, in the tag <?xml version="1.0" encoding="ISO-8859-1"?>. The default value is ISO-8859-1. |
-JAVA_CHARSET | No | Result file Java character encoding. The default value is ISO8859_1. |
-EXPORT_WITHOUT_CIPHER_DATA | No (Footnote 17) | When set to Yes, specifies that sensitive (cipher) values should be set to null in the object when it is exported. When set to No or when this parameter is omitted, you must include the -EXPORT_KEY parameter with a valid key value. |
Footnote 16
If the -EXPORT_KEY parameter is not specified, the -EXPORT_WITHOUT_CIPHER_DATA parameter must be specified, and must be set to Yes.
Footnote 17
If -EXPORT_WITHOUT_CIPHER_DATA is not specified, or if it is specified and set to No, you must specify the -EXPORT_KEY parameter with a valid key value.
Examples
Export and compress the work repository into the /temp/workexport.zip export file.
OdiExportWork "-TODIR=/temp/" "-ZIPFILE_NAME=workexport.zip" "-EXPORT_KEY=examplekey1"
OdiFileAppend

Use this command to concatenate a set of files into a single file.
Usage
OdiFileAppend -FILE=<file> -TOFILE=<target_file> [-OVERWRITE=<yes|no>] [-CASESENS=<yes|no>] [-HEADER=<n>] [-KEEP_FIRST_HEADER=<yes|no>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
-FILE | Yes | Full path of the files to concatenate. Use * to specify generic characters (example: /var/tmp/*.log concatenates all .log files in /var/tmp). The file location is always relative to the data schema directory of its logical schema. |
-TOFILE | Yes | Target file. |
-OVERWRITE | No | Indicates if the target file must be overwritten if it already exists. The default value is No. |
-CASESENS | No | Indicates if file search is case-sensitive. By default, Oracle Data Integrator searches files in uppercase (set to No). |
-HEADER | No | Number of header lines to be removed from the source files before concatenation. By default, no lines are removed. |
-KEEP_FIRST_HEADER | No | Keeps the header lines of the first file during the concatenation. The default value is Yes. |
Examples
Concatenate the *.log files of the folder /var/tmp into the file /home/all_files.log.
OdiFileAppend -FILE=/var/tmp/*.log -TOFILE=/home/all_files.log
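Header lines can be stripped during concatenation; the following is a minimal sketch, with placeholder file names, that removes a one-line header from each source file while keeping the first file's header:
OdiFileAppend -FILE=/var/tmp/sales_*.csv -TOFILE=/home/all_sales.csv -OVERWRITE=yes -HEADER=1 -KEEP_FIRST_HEADER=yes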
OdiFileDelete

Use this command to delete files or directories.
The most common uses of this tool are described in the following table, where x means the parameter is supplied and o means the parameter is omitted:
-DIR | -FILE | -RECURSE | Behavior |
---|---|---|---|
x | x | x | Every file with the name or with a name matching the mask specified in -FILE is deleted from -DIR and its subdirectories. |
x | o | x | The subdirectories of -DIR are deleted. |
x | x | o | Every file with the name or with a name matching the mask specified in -FILE is deleted from -DIR only. |
x | o | o | The -DIR directory itself is deleted. |
Usage
OdiFileDelete -DIR=<directory> -FILE=<file> [-RECURSE=<yes|no>] [-CASESENS=<yes|no>] [-NOFILE_ERROR=<yes|no>] [-FROMDATE=<from_date>] [-TODATE=<to_date>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
-DIR | Yes if -FILE is omitted | If -FILE is omitted, name of the directory (folder) to delete. If -FILE is supplied, name of the directory containing the files to delete. The directory location is always relative to the data schema directory of its logical schema. |
-FILE | Yes if -DIR is omitted | Name or mask of the file(s) to delete. Use * to specify generic characters (examples: *.txt, my_data.dat). The file location is always relative to the data schema directory of its logical schema. |
-RECURSE | No | If set to Yes, the deletion is performed in -DIR and its subdirectories (see the behavior table above). If set to No, only -DIR itself is processed. The default value is Yes. |
-CASESENS | No | Specifies that Oracle Data Integrator should distinguish between uppercase and lowercase when matching file names. The default value is No. |
-NOFILE_ERROR | No | Indicates that an error should be generated if the specified directory or files are not found. The default value is Yes. |
-FROMDATE | No | All files with a modification date later than this date are deleted. Use the format yyyy/MM/dd hh:mm:ss. The -FROMDATE value is not inclusive. If -FROMDATE is omitted, all files with a modification date earlier than the -TODATE date are deleted. If both -FROMDATE and -TODATE are omitted, all files matching the -FILE parameter value are deleted. |
-TODATE | No | All files with a modification date earlier than this date are deleted. Use the format yyyy/MM/dd hh:mm:ss. The -TODATE value is not inclusive. If -TODATE is omitted, all files with a modification date later than the -FROMDATE date are deleted. If both -FROMDATE and -TODATE are omitted, all files matching the -FILE parameter value are deleted. |
Note:
You cannot delete a file and a directory at the same time by combining the -DIR and -FILE parameters. To achieve that, you must make two calls to OdiFileDelete.
Examples
Delete the file my_data.dat from the directory c:\data\input, generating an error if the file or directory is missing.
OdiFileDelete -FILE=c:\data\input\my_data.dat -NOFILE_ERROR=yes
Delete all .txt files from the bin directory, but not .TXT files.
OdiFileDelete "-FILE=c:\Program Files\odi\bin\*.txt" -CASESENS=yes
This statement has the same effect:
OdiFileDelete "-DIR=c:\Program Files\odi\bin" "-FILE=*.txt" -CASESENS=yes
Delete the directory /bin/usr/nothingToDoHere.
OdiFileDelete "-DIR=/bin/usr/nothingToDoHere"
Delete all files under the C:\temp directory whose modification time is between 10/01/2008 00:00:00 and 10/31/2008 22:59:00, where 10/01/2008 and 10/31/2008 are not inclusive.
OdiFileDelete -DIR=C:\temp -FILE=* -NOFILE_ERROR=NO -FROMDATE=10/01/2008 00:00:00 -TODATE=10/31/2008 22:59:00
Delete all files under the C:\temp directory whose modification time is earlier than 10/31/2008 17:00:00.
OdiFileDelete -DIR=C:\temp -FILE=* -NOFILE_ERROR=YES -TODATE=10/31/2008 17:00:00
Delete all files under the C:\temp directory whose modification time is later than 10/01/2008 08:00:00.
OdiFileDelete -DIR=C:\temp -FILE=* -NOFILE_ERROR=NO -FROMDATE=10/01/2008 08:00:00
OdiFileCopy

Use this command to copy files or folders.
Usage
OdiFileCopy -DIR=<directory> -TODIR=<target_directory> [-OVERWRITE=<yes|no>] [-RECURSE=<yes|no>] [-CASESENS=<yes|no>]
OdiFileCopy -FILE=<file> -TOFILE=<target_file>|-TODIR=<target_directory> [-OVERWRITE=<yes|no>] [-RECURSE=<yes|no>] [-CASESENS=<yes|no>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
-DIR | Yes if -FILE is omitted | Directory (or folder) to copy. The directory location is always relative to the data schema directory of its logical schema. |
-FILE | Yes if -DIR is omitted | Full path of the files to copy. Use * to specify generic characters (examples: /etc/hosts, /etc/*.csv). The file location is always relative to the data schema directory of its logical schema. |
-TODIR | Yes if -TOFILE is omitted | Target directory for the copy. If a directory is copied (-DIR), this parameter is the name of the copied directory. If one or several files are copied (-FILE), this parameter is the directory into which the files are copied. |
-TOFILE | Yes if -TODIR is omitted | Destination file(s). This parameter cannot be used with the -TODIR parameter. It contains the name of the destination file if a single file is copied, or a mask for the destination file names if several files are copied. |
 | No | The file located on a data server, based on the Logical Schema value. For example, the LSCHEMA may point to a Hadoop Data Server and the tool will access the file from that data server if the file needs to be accessed from HDFS. |
-OVERWRITE | No | Indicates if the files of the folder are overwritten if they already exist. The default value is No. |
-RECURSE | No | Indicates if files are copied recursively when the directory contains other directories. The value No indicates that only the files within the directory are copied, not the subdirectories. The default value is Yes. |
-CASESENS | No | Indicates if file search is case-sensitive. By default, Oracle Data Integrator searches for files in uppercase (set to No). |
Examples
Copy the file hosts from the directory /etc to the directory /home.
OdiFileCopy -FILE=/etc/hosts -TOFILE=/home/hosts
Copy all *.csv files from the directory /etc to the directory /home and overwrite.
OdiFileCopy -FILE=/etc/*.csv -TODIR=/home -OVERWRITE=yes
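A directory can be copied together with its subdirectories by combining -DIR and -RECURSE; the following is a minimal sketch, with placeholder paths:
OdiFileCopy -DIR=/etc -TODIR=/home/etc_backup -RECURSE=yes -OVERWRITE=yes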
OdiFileMove

Use this command to move or rename a file, a set of files, or a directory.
Usage
OdiFileMove -FILE=<file> -TODIR=<target_directory>|-TOFILE=<target_file> [-OVERWRITE=<yes|no>] [-RECURSE=<yes|no>] [-CASESENS=<yes|no>]
OdiFileMove -DIR=<directory> -TODIR=<target_directory> [-OVERWRITE=<yes|no>] [-RECURSE=<yes|no>] [-CASESENS=<yes|no>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
-DIR | Yes if -FILE is omitted | Directory (or folder) to move or rename. The directory location is always relative to the data schema directory of its logical schema. |
-FILE | Yes if -DIR is omitted | Full path of the file(s) to move or rename. Use * to specify generic characters (examples: /etc/hosts, /etc/*.csv). The file location is always relative to the data schema directory of its logical schema. |
-TODIR | Yes if -TOFILE is omitted | Target directory of the move. If a directory is moved (-DIR), this parameter is the new name of the moved directory. If a file or several files are moved (-FILE), this parameter is the directory into which the files are moved. |
-TOFILE | Yes if -TODIR is omitted | Target file(s). This parameter cannot be used with the -TODIR parameter. It is the new name of the moved file if a single file is moved, or a mask for the target file names if several files are moved (a * in -TOFILE is replaced by the source file name, as in the examples below). |
-OVERWRITE | No | Indicates if the files or directory are overwritten if they exist. The default value is No. |
-RECURSE | No | Indicates if files are moved recursively when the directory contains other directories. The value No indicates that only files contained in the directory to move (not the subdirectories) are moved. The default value is Yes. |
-CASESENS | No | Indicates if file search is case-sensitive. By default, Oracle Data Integrator searches for files in uppercase (set to No). |
Examples
Rename the hosts file to hosts.old.
OdiFileMove -FILE=/etc/hosts -TOFILE=/etc/hosts.old
Move the file hosts from the directory /etc to the directory /home/odi.
OdiFileMove -FILE=/etc/hosts -TOFILE=/home/odi/hosts
Move all *.csv files from the directory /etc to the directory /home/odi and overwrite.
OdiFileMove -FILE=/etc/*.csv -TODIR=/home/odi -OVERWRITE=yes
Move all *.csv files from the directory /etc to the directory /home/odi and change their extension to .txt.
OdiFileMove -FILE=/etc/*.csv -TOFILE=/home/odi/*.txt -OVERWRITE=yes
Rename the directory C:\odi to C:\odi_is_wonderful.
OdiFileMove -DIR=C:\odi -TODIR=C:\odi_is_wonderful
Move the directory C:\odi and its subfolders into the directory C:\Program Files\odi.
OdiFileMove -DIR=C:\odi "-TODIR=C:\Program Files\odi" -RECURSE=yes
OdiFileWait

Use this command to manage file events. This command regularly scans a directory and waits for a number of files matching a mask to appear, until a given timeout is reached. When the specified files are found, an action on these files is triggered.
Usage
OdiFileWait -DIR=<directory> -PATTERN=<pattern> [-ACTION=<DELETE|COPY|MOVE|APPEND|ZIP|NONE>] [-TODIR=<target_directory>] [-TOFILE=<target_file>] [-OVERWRITE=<yes|no>] [-CASESENS=<yes|no>] [-FILECOUNT=<n>] [-TIMEOUT=<n>] [-POLLINT=<n>] [-HEADER=<n>] [-KEEP_FIRST_HEADER=<yes|no>] [-NOFILE_ERROR=<yes|no>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
-ACTION | No | Action taken on the files found: DELETE, COPY, MOVE, APPEND, ZIP, or NONE (default). |
-DIR | Yes | Directory (or folder) to scan. The directory location is always relative to the data schema directory of its logical schema. |
-PATTERN | Yes | Mask of file names to scan. Use * to specify generic characters (examples: *.dat, flag.txt). |
-TODIR | No | Target directory of the action. When the action is COPY or MOVE, the files found are copied or moved into this directory. |
-TOFILE | No | Destination file(s). When the action is MOVE, the files found are renamed using this name or mask; when the action is APPEND or ZIP, the files found are concatenated or zipped into this file. Renaming rule: a * in -TOFILE is replaced by the source file name. |
-OVERWRITE | No | Indicates if the destination file(s) will be overwritten if they exist. The default value is No. Note that if this option is used with the APPEND action, the target file is reset before the files found are appended. |
-CASESENS | No | Indicates if file search is case-sensitive. By default, Oracle Data Integrator searches files in uppercase (set to No). |
-FILECOUNT | No | Maximum number of files to wait for (the default value is 0). If this number is reached, the command ends. The value 0 indicates that Oracle Data Integrator waits for all files until the timeout is reached. If this parameter is 0 and the timeout is also 0, this parameter is then forced implicitly to 1. |
-TIMEOUT | No | Maximum waiting time in milliseconds (the default value is 0). If this delay is reached, the command ends. The value 0 specifies an infinite waiting time (wait until the maximum number of files to wait for, as specified in the -FILECOUNT parameter, is reached). |
-POLLINT | No | Interval in milliseconds to search for new files. The default value is 1000 (1 second), which means that Oracle Data Integrator looks for new files every second. Files written during the OdiFileWait are taken into account only after being closed (file size unchanged) during this interval. |
-HEADER | No | This parameter is valid only for the APPEND action. Number of header lines to suppress from the files before concatenation. The default value is 0 (no processing). |
-KEEP_FIRST_HEADER | No | This parameter is valid only for the APPEND action. Keeps the header lines of the first file during the concatenation. The default value is Yes. |
-NOFILE_ERROR | No | Indicates the behavior if no file is found. The default value is No, which means that no error is generated if no file is found. |
Examples
Wait indefinitely for the file flag.txt in directory c:\events and proceed when this file is detected.
OdiFileWait -ACTION=NONE -DIR=c:\events -PATTERN=flag.txt -FILECOUNT=1 -TIMEOUT=0 -POLLINT=1000
Wait indefinitely for the file flag.txt in directory c:\events and delete this file when it is detected.
OdiFileWait -ACTION=DELETE -DIR=c:\events -PATTERN=flag.txt -FILECOUNT=1 -TIMEOUT=0 -POLLINT=1000
Wait for the sales files *.dat in directory c:\sales_in for 5 minutes, scanning every second, then concatenate them into the file sales.dat in directory C:\sales_ok. Keep the header of the first file.
OdiFileWait -ACTION=APPEND -DIR=c:\sales_in -PATTERN=*.dat -TOFILE=c:\sales_ok\sales.dat -FILECOUNT=0 -TIMEOUT=350000 -POLLINT=1000 -HEADER=1 -KEEP_FIRST_HEADER=yes -OVERWRITE=yes
Wait for the sales files *.dat in directory c:\sales_in for 5 minutes, scanning every second, then copy these files into directory C:\sales_ok. Do not overwrite.
OdiFileWait -ACTION=COPY -DIR=c:\sales_in -PATTERN=*.dat -TODIR=c:\sales_ok -FILECOUNT=0 -TIMEOUT=350000 -POLLINT=1000 -OVERWRITE=no
Wait for the sales files *.dat in directory c:\sales_in for 5 minutes, scanning every second, then archive these files into a ZIP file.
OdiFileWait -ACTION=ZIP -DIR=c:\sales_in -PATTERN=*.dat -TOFILE=c:\sales_ok\sales.zip -FILECOUNT=0 -TIMEOUT=350000 -POLLINT=1000 -OVERWRITE=yes
Wait for the sales files *.dat in directory c:\sales_in for 5 minutes, scanning every second, then move these files into directory C:\sales_ok. Do not overwrite. Append .bak to the file names.
OdiFileWait -ACTION=MOVE -DIR=c:\sales_in -PATTERN=*.dat -TODIR=c:\sales_ok -TOFILE=*.bak -FILECOUNT=0 -TIMEOUT=350000 -POLLINT=1000 -OVERWRITE=no
OdiFtp

Use this command to connect to a remote system using the FTP protocol and to perform standard FTP commands on the remote system. Trace from the script is recorded against the Execution Details of the task representing the OdiFtp step in Operator Navigator.
Usage
OdiFtp -HOST=<ftp server host name> -USER=<ftp user> [-PASSWORD=<ftp user password>] -REMOTE_DIR=<remote dir on ftp host> -LOCAL_DIR=<local dir> [-PASSIVE_MODE=<yes|no>] [-TIMEOUT=<time in seconds>] [-STOP_ON_FTP_ERROR=<yes|no>] -COMMAND=<command>
Parameters
Parameters | Mandatory | Description |
---|---|---|
-HOST | Yes | Host name of the FTP server. |
-USER | Yes | User on the FTP server. |
-PASSWORD | No | Password of the FTP user. |
-REMOTE_DIR | Yes | Directory path on the remote FTP host. |
-LOCAL_DIR | Yes | Directory path on the local machine. |
-PASSIVE_MODE | No | If set to No, the FTP session uses Active Mode. The default value is Yes, which means the session runs in passive mode. |
-TIMEOUT | No | Time in seconds after which the socket connection times out. |
-STOP_ON_FTP_ERROR | No | If set to Yes (default), the step stops when an FTP error occurs instead of running to completion. |
-COMMAND | Yes | Raw FTP command to execute. For a multiline command, pass the whole command as raw text after the OdiFtp line, without the -COMMAND parameter. Supported commands include the standard raw FTP commands; the example below uses MKD, CWD, STOR, SIZE, APPE, RNFR, and RNTO. |
Examples
Execute a script on a remote host that makes a directory, changes into it, puts a file into the directory, and checks its size. The script then appends another file, checks the new size, and renames the file to dailyData.csv. The -STOP_ON_FTP_ERROR parameter is set to No so that the script continues even if the directory already exists.
OdiFtp -HOST=machine.oracle.com -USER=odiftpuser -PASSWORD=<password> -LOCAL_DIR=/tmp -REMOTE_DIR=c:\temp -PASSIVE_MODE=YES -STOP_ON_FTP_ERROR=No
MKD dataDir
CWD dataDir
STOR customers.csv
SIZE customers.csv
APPE new_customers.csv customers.csv
SIZE customers.csv
RNFR customers.csv
RNTO dailyData.csv
OdiFtpGet

Use this command to download a file from an FTP server.
Usage
OdiFtpGet -HOST=<ftp server host name> -USER=<ftp user> [-PASSWORD=<ftp user password>] -REMOTE_DIR=<remote dir on ftp host> [-REMOTE_FILE=<file name under the -REMOTE_DIR>] -LOCAL_DIR=<local dir> [-LOCAL_FILE=<file name under the -LOCAL_DIR>] [-PASSIVE_MODE=<yes|no>] [-TIMEOUT=<time in seconds>]
Note:
If a local or remote file name must contain % as part of its name, pass %25 instead of %; %25 resolves automatically to %. For example, if the file name needs to be temp%result, pass -REMOTE_FILE=temp%25result or -LOCAL_FILE=temp%25result.
Parameters
Parameters | Mandatory | Description |
---|---|---|
-HOST | Yes | Host name of the FTP server. |
-USER | Yes | User on the FTP server. |
-PASSWORD | No | Password of the FTP user. |
-REMOTE_DIR | Yes | Directory path on the remote FTP host. |
-REMOTE_FILE | No | File name under the directory specified in the -REMOTE_DIR parameter. If this parameter is not specified, the entire remote directory is copied. |
-LOCAL_DIR | Yes | Directory path on the local machine. |
-LOCAL_FILE | No | File name under the directory specified in the -LOCAL_DIR parameter. If this parameter is not specified, the remote file name is used. To filter the files to be copied, use * to specify generic characters (example: Sales*.txt). |
-PASSIVE_MODE | No | If set to No, the FTP session uses Active Mode. The default value is Yes, which means the session runs in passive mode. |
-TIMEOUT | No | The time in seconds after which the socket connection times out. |
 | No | The file located on a data server resolved based on the Logical Schema value. For example, the LSCHEMA may point to a Hadoop Data Server and the tool will access the file from that data server. |
Examples
Copy the remote directory /test_copy555 on the FTP server recursively to the local directory C:\temp\test_copy.
OdiFtpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp\test_copy -REMOTE_DIR=/test_copy555
Copy all files matching the Sales*.txt pattern under the remote directory / on the FTP server to the local directory C:\temp\, using Active Mode for the FTP connection.
OdiFtpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp -LOCAL_FILE=Sales*.txt -REMOTE_DIR=/ -PASSIVE_MODE=NO
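A single remote file can be downloaded under a new local name by combining -REMOTE_FILE and -LOCAL_FILE; the following is a minimal sketch, with placeholder file names, that also sets a 60-second socket timeout:
OdiFtpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -REMOTE_DIR=/ -REMOTE_FILE=report.csv -LOCAL_DIR=C:\temp -LOCAL_FILE=report_today.csv -TIMEOUT=60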
OdiFtpPut

Use this command to upload a local file to an FTP server.
Usage
OdiFtpPut -HOST=<ftp server host name> -USER=<ftp user> [-PASSWORD=<ftp user password>] -REMOTE_DIR=<remote dir on ftp host> [-REMOTE_FILE=<file name under the -REMOTE_DIR>] -LOCAL_DIR=<local dir> [-LOCAL_FILE=<file name under the -LOCAL_DIR>] [-PASSIVE_MODE=<yes|no>] [-TIMEOUT=<time in seconds>]
Note:
If a local or remote file name must contain % as part of its name, pass %25 instead of %; %25 resolves automatically to %. For example, if the file name needs to be temp%result, pass -REMOTE_FILE=temp%25result or -LOCAL_FILE=temp%25result.
Parameters
Parameters | Mandatory | Description |
---|---|---|
-HOST | Yes | Host name of the FTP server. |
-USER | Yes | User on the FTP server. |
-PASSWORD | No | Password of the FTP user. |
-REMOTE_DIR | Yes | Directory path on the remote FTP host. |
-REMOTE_FILE | No | File name under the directory specified in the -REMOTE_DIR parameter. If this parameter is not specified, the local file name is used. |
-LOCAL_DIR | Yes | Directory path on the local machine. |
-LOCAL_FILE | No | File name under the directory specified in the -LOCAL_DIR parameter. If this parameter is not specified, the entire local directory is copied. To filter the files to be copied, use * to specify generic characters (example: Sales*.txt). |
-PASSIVE_MODE | No | If set to No, the FTP session uses Active Mode. The default value is Yes, which means the session runs in passive mode. |
-TIMEOUT | No | The time in seconds after which the socket connection times out. |
Note:
For OdiFtpPut execution to be successful, you must have LIST privilege in the user's home directory.
Examples
Copy the local directory C:\temp\test_copy recursively to the remote directory /test_copy555 on the FTP server.
OdiFtpPut -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp\test_copy -REMOTE_DIR=/test_copy555
Copy all files matching the Sales*.txt pattern under the local directory C:\temp\ to the remote directory / on the FTP server.
OdiFtpPut -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp -LOCAL_FILE=Sales*.txt -REMOTE_DIR=/
Copy the Sales1.txt file under the local directory C:\temp\ to the remote directory / on the FTP server as a Sample1.txt file.
OdiFtpPut -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp -LOCAL_FILE=Sales1.txt -REMOTE_DIR=/ -REMOTE_FILE=Sample1.txt
OdiGenerateAllScen

Use this command to generate a set of scenarios from design-time components (Packages, Mappings, Procedures, or Variables) contained in a folder or project, filtered by markers.
Usage
OdiGenerateAllScen -PROJECT=<project_id> [-FOLDER=<folder_id>] [-MODE=<REPLACE|CREATE>] [-GRPMARKER=<marker_group_code>] [-MARKER=<marker_code>] [-MATERIALIZED=<yes|no>] [-GENERATE_MAP=<yes|no>] [-GENERATE_PACK=<yes|no>] [-GENERATE_POP=<yes|no>] [-GENERATE_TRT=<yes|no>] [-GENERATE_VAR=<yes|no>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
-PROJECT | Yes | ID of the project containing the components to generate scenarios for. |
-FOLDER | No | ID of the folder containing the components to generate scenarios for. |
-MODE | No | Scenario generation mode: REPLACE (regenerate and replace the existing scenarios) or CREATE (create new scenarios). |
-GRPMARKER | No | Group containing the marker used to filter the components for which scenarios must be generated. When -GRPMARKER and -MARKER are both specified, scenarios are generated only for components flagged with the specified marker from this marker group. |
-MARKER | No | Marker used to filter the components for which scenarios must be generated. When -GRPMARKER and -MARKER are both specified, scenarios are generated only for components flagged with the specified marker from this marker group. |
-MATERIALIZED | No | Specifies whether scenarios should be generated as if all underlying objects are materialized. The default value is No. |
-GENERATE_MAP | No | Specifies whether scenarios should be generated from mappings. The default value is No. |
-GENERATE_PACK | No | Specifies whether scenarios attached to packages should be (re-)generated. The default value is Yes. |
-GENERATE_POP | No | Specifies whether scenarios attached to mappings should be (re-)generated. The default value is No. |
-GENERATE_TRT | No | Specifies whether scenarios attached to procedures should be (re-)generated. The default value is No. |
-GENERATE_VAR | No | Specifies whether scenarios attached to variables should be (re-)generated. The default value is No. |
Examples
Generate all scenarios in the project whose ID is 1003 for the current repository.
OdiGenerateAllScen -PROJECT=1003
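Generation can also be restricted by marker; the following is a minimal sketch, with placeholder marker codes, that (re-)generates only the flagged mapping scenarios of the project:
OdiGenerateAllScen -PROJECT=1003 -GRPMARKER=GRP_DEPLOY -MARKER=READY -GENERATE_POP=yes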
OdiImportObject

Use this command to import the contents of an export file into a repository. This command reproduces the behavior of the import feature available from the user interface.
Use caution when using this tool. It may work incorrectly when importing objects that depend on objects that do not exist in the repository. It is recommended that you use this tool only for importing high-level objects (projects, models, and so on).
WARNING:
The import type and the order in which objects are imported into a repository should be carefully specified. Refer to the chapter Exporting and Importing in Developing Integration Projects with Oracle Data Integrator for more information on import.
Usage
OdiImportObject -FILE_NAME=<FileName> [-WORK_REP_NAME=<workRepositoryName>] -IMPORT_MODE=<DUPLICATION|SYNONYM_INSERT|SYNONYM_UPDATE|SYNONYM_INSERT_UPDATE> [-IMPORT_SCHEDULE=<yes|no>] [-EXPORT_KEY=<key>] [-UPGRADE_KEY=<upgradeKey>] [-IMPORT_WITHOUT_CIPHER_DATA=<yes|no>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
-FILE_NAME | Yes | Name of the XML export file to import. |
-WORK_REP_NAME | No | Name of the work repository into which the object must be imported. This work repository must be defined in the connected master repository. If this parameter is not specified, the object is imported into the current master or work repository. |
-IMPORT_MODE | Yes | Import mode for the object. The default value is DUPLICATION. |
-IMPORT_SCHEDULE | No | If the selected file is a scenario export, imports the schedules contained in the scenario export file. The default value is No. |
-EXPORT_KEY | No (Footnote 18) | Specifies a cryptographic private key used to encrypt sensitive cipher data. You must specify this key when importing the exported object in order to import the cipher data. |
-UPGRADE_KEY | No | Upgrade key to import repository objects from earlier versions of Oracle Data Integrator (pre-12c). |
-IMPORT_WITHOUT_CIPHER_DATA | No (Footnote 19) | When set to Yes, specifies that sensitive (cipher) values should be set to null in the object when it is imported. When set to No or when this parameter is omitted, you must include the -EXPORT_KEY parameter with a valid key value. |
Footnote 18
If the -EXPORT_KEY parameter is not specified, the -IMPORT_WITHOUT_CIPHER_DATA parameter must be specified, and must be set to Yes.
Footnote 19
If -IMPORT_WITHOUT_CIPHER_DATA is not specified, or if it is specified and set to No, you must specify the -EXPORT_KEY parameter with a valid key value.
Examples
Import the /temp/DW01.xml export file (a project) into the WORKREP work repository using DUPLICATION mode.
OdiImportObject -FILE_NAME=/temp/DW01.xml -WORK_REP_NAME=WORKREP -IMPORT_MODE=DUPLICATION -EXPORT_KEY=examplekey1
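Per Footnote 18, an object exported without cipher data can be imported without an export key; the following is a minimal sketch:
OdiImportObject -FILE_NAME=/temp/DW01.xml -WORK_REP_NAME=WORKREP -IMPORT_MODE=DUPLICATION -IMPORT_WITHOUT_CIPHER_DATA=yes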
OdiImportScen

Use this command to import a scenario into the current work repository from an export file.
Usage
OdiImportScen -FILE_NAME=<FileName> [-IMPORT_MODE=<DUPLICATION|SYNONYM_INSERT|SYNONYM_UPDATE|SYNONYM_INSERT_UPDATE>] [-EXPORT_KEY=<key>] [-IMPORT_SCHEDULE=<yes|no>] [-FOLDER=<parentFolderGlobalId>] [-UPGRADE_KEY=<upgradeKey>] [-IMPORT_WITHOUT_CIPHER_DATA=<yes|no>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
-FILE_NAME | Yes | Name of the export file. |
-IMPORT_MODE | No | Import mode of the scenario. The default value is DUPLICATION. |
-EXPORT_KEY | No (Footnote 20) | Specifies a cryptographic private key used to encrypt sensitive cipher data. You must specify this key when importing the exported object in order to import the cipher data. |
-IMPORT_SCHEDULE | No | Imports the schedules contained in the scenario export file. The default value is No. |
-FOLDER | No | Global ID of the parent scenario folder. |
-UPGRADE_KEY | No | Upgrade key to import repository objects from earlier versions of Oracle Data Integrator (pre-12c). |
-IMPORT_WITHOUT_CIPHER_DATA | No (Footnote 21) | When set to Yes, specifies that sensitive (cipher) values should be set to null in the object when it is imported. When set to No or when this parameter is omitted, you must include the -EXPORT_KEY parameter with a valid key value. |
Footnote 20
If the -EXPORT_KEY parameter is not specified, the -IMPORT_WITHOUT_CIPHER_DATA parameter must be specified, and must be set to Yes.
Footnote 21
If -IMPORT_WITHOUT_CIPHER_DATA is not specified, or if it is specified and set to No, you must specify the -EXPORT_KEY parameter with a valid key value.
Examples
Import the /temp/load_dwh.xml export file (a scenario) into the current work repository using DUPLICATION mode.
OdiImportScen -FILE_NAME=/temp/load_dwh.xml -IMPORT_MODE=DUPLICATION -EXPORT_KEY=examplekey1
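Schedules contained in the export file are skipped unless explicitly requested; the following is a minimal sketch that also imports them:
OdiImportScen -FILE_NAME=/temp/load_dwh.xml -IMPORT_MODE=DUPLICATION -EXPORT_KEY=examplekey1 -IMPORT_SCHEDULE=yes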
OdiInvokeRESTfulService

Use this tool to invoke any REST service from ODI. The request sent to the service can be either provided in a request file or provided directly in the tool command (<RequestBody>). The response of the RESTful service request is written to a file that can be used in Oracle Data Integrator. The tool also supports multipart requests; parameters are provided to specify multipart body part details.
The tool does not accept a REST URL directly. Instead, the tool accepts a REST data server and an operation to invoke, from which the REST URL is derived for the REST invocation. For details on REST operations and data servers and how they are defined using the ODI Studio UI, refer to the REST data server and Studio UI documentation.
The OdiInvokeRESTfulService tool has two important features:
Paginated Invocation — Certain REST services restrict the maximum amount of data (or records) that can be retrieved in a single service invocation. Such services return the response to a request in multiple pages, so a response may extend to subsequent pages. You can configure the tool to invoke such services and fetch the complete response in a single invocation of the tool; the tool internally makes repeated invocations for each page, based on pagination-related tool parameters.
Chunk Upload Support — Certain REST services impose a restriction on the amount of data that can be uploaded in a single invocation. These services support uploading data in chunks. This tool performs such a chunk upload operation in a single tool invocation.
Based on the specified parameters, the tool identifies whether the invocation is a regular invocation, a chunk upload invocation, or a paginated invocation.
Usage
Use this command to invoke a RESTful service from ODI.
OdiInvokeRESTfulService [-CONTEXT=<ODI_Context>] -LSCHEMA=<Logical_Schema> [-REQUEST_BODY_FILE=<Request_File> | -REQUEST_BODY=<RequestBody> | <RequestBody>] [-REQUEST_HDFS_LSCHEMA=<Request_HDFS_Logical_Schema>] [-CHUNK_SIZE=<Chunk_Size>] [-CHUNK_PREFIX=<Chunk_Prefix>] [-PRE_OPERATION=<Pre_Operation>] [-PRE_OPERATION_BODY=<Pre_Operation_Body>] -OPERATION=<Rest_Operation> [-POST_OPERATION=<Post_Operation>] [-POST_OPERATION_BODY=<Post_Operation_Body>] [-HEADER.<name>=<value>]* [-HEADER_IGNORE_DEFAULT=<YES|NO>] [-REQUEST_BODY_PART_NAME=<Body_Part_Name> & -REQUEST_BODY_PART_CONTENT_TYPE=<Body_Part_Content_Type> & -REQUEST_BODY_PART_VALUE=<Body_Part_Value>]* [[-REQUEST_QUERY.<name>=<value>]* | -ENCODED_REQUEST_QUERY_STRING=<Encoded_Request_Query_String>] [-REQUEST_QUERY_IGNORE_DEFAULT=<YES|NO>] [-REQUEST_TEMPLATE.<Variable_Name>=<Variable_Value>]* [-REQUEST_TEMPLATE_IGNORE_DEFAULT=<YES|NO>] [-RESPONSE_FILE=<Response_File>] [-RESPONSE_HDFS_LSCHEMA=<Response_HDFS_LSchema>] [-RESPONSE_MODE=<NEW_FILE|FILE_APPEND>] [-RESPONSE_FILE_CHARSET=<javaCharset>] [-TIMEOUT=<timeout>] [-RETRY_COUNT=<Retry_Count>] [-RETRY_INTERVAL=<Retry_Interval>] [-FAILURE_STATUS_CODES=<Failure_Status_Codes>] [-TRACE_FILE=<Trace_File>] [-TRACE_FILE_MODE=<NEW_FILE|FILE_APPEND>] [-NEXT_REQUEST_RESOLVER=<Next_Request_Resolver>] [-TOTAL_COUNT_FIELD_RESOLVER=<Total_Count_Field_Resolver>] [-RESOLVER_OVERWRITE_CLASS=<Resolver_Overwrite_Class>] [-RESPONSE_DATA_CONTAINER=<Response_Data_Container>] [<RequestBody>]
Parameters
Parameter | Mandatory | Description |
---|---|---|
|
No |
ODI Context. If not specified, the execution will happen in the context of the calling session. |
|
Yes |
Logical schema configured for REST data source. |
|
No |
The name of the file containing the request body. The request body can be directly provided (on the next line after) the tool call |
|
No |
Request body can be given in this parameter. See REQUEST_BODY_FILE and <RequestBody> parameters for more details. |
|
No |
HDFS file configuration for the request file. This is applicable only if request body is a file which is in HDFS format. This configuration is applicable for multipart body contents that are files. |
|
No |
The input file specified through parameter is The tool will also maintain the following runtime templates associated with chunk depending upon which chunk is being considered:
odi.CHUNK_PATH odi.CHUNK_NAME odi.CHUNK_SIZE odi.CHUNK_INDEXThe presence of this parameter indicates that the tool invocation is for Chunk Upload. |
|
No |
If a directory path is specified in If The presence of this parameter indicates that the tool invocation is for Chunk Upload. |
|
No |
REST operation defined in the physical schema. If specified, this operation will be the first operation to get invoked and it will be invoked only once. This operation will not consider the following tool parameters: |
|
No |
This is used to specify the request body for the invocation of If runtime template
If |
|
Yes |
REST operation defined in the physical schema. This is the main operation and it is mandatory. If specified, this operation will get invoked after the operation specified by For Chunk Upload invocation, this operation will be called in a loop for each file chunk available. If the first chunk is uploaded by For example: - The total chunks created or identified is 10, When this operation is being invoked for i th chunk, then runtime templates in this operation will be replaced with following values.
|
|
No |
REST operation defined in the physical schema. If specified, this operation will be the last operation to get invoked and it will be invoked only once. This operation will not consider the tool parameters such as |
-POST_OPERATION_BODY=<Post_Operation_Body> |
No |
This is used to specify the request body for the invocation of
If |
-HEADER.<name>=<value> |
No |
The HTTP headers to be used while invoking the REST service. Multiple headers can be passed during the tool invocation, as shown below: -HEADER.Content-Type=application/text -HEADER.Authorization=DAD9AFFA34D5== |
-HEADER_IGNORE_DEFAULT=<YES|NO> |
No |
This is applicable only when invoking the main operation specified by OPERATION. Default value is "NO". |
-REQUEST_BODY_PART_NAME=<Body_Part_Name> |
No |
This is used to specify the name of multipart content. This parameter is specified along with - |
-REQUEST_BODY_PART_CONTENT_TYPE=<Body_Part_Content_Type> |
No |
This can be used to specify the content type of the multipart content. This can be repeated for each content in the body part. A sample invocation is as below:
OdiInvokeRESTfulService "-CONTEXT=GLOBAL" "-LSCHEMA=LSchema" "-OPERATION=PostOperationWithMultipartBody""- HEADER.Content-Type=multipart/mixed" "-REQUEST_BODY_PART_NAME=empDataAsXml""- REQUEST_BODY_PART_NAME=empDataAsJson" "-REQUEST_BODY_PART_VALUE=/path/employee.xml" "- REQUEST_BODY_PART_VALUE=/path/employee.json" "-REQUEST_BODY_PART_CONTENT_TYPE=application/xml" "- REQUEST_BODY_PART_CONTENT_TYPE=application/json"Here multipart body has two body parts - an XML and a JSON |
-REQUEST_BODY_PART_VALUE=<Body Part Value> |
No |
The presence of this parameter will be considered as the presence of multipart content in the request. This parameter will be used to specify the value of body part. It can be either a file path or a value. The value will be treated as text if the corresponding - |
-REQUEST_QUERY.<name>=<value> |
No |
This parameter is used for passing query parameters, which are attached to the REST URL. Note: Query parameter values are in plain text format (that is, not URI encoded). For example:
These parameters will be ignored if |
-ENCODED_REQUEST_QUERY_STRING=<Encoded_Request_Query_String> |
No |
This specifies the alternate way of specifying request query parameters. Repeated
If this parameter is specified, all |
-REQUEST_QUERY_IGNORE_DEFAULT=<YES|NO> |
No |
This is only applicable when initializing the main operation specified by OPERATION. Default value is "NO". |
-REQUEST_TEMPLATE.<Variable_Name>=<Variable_Value> |
No |
These are regular templates provided by the user. They are used for substituting template variables (enclosed in curly braces) in the REST resource path, header parameters, and query parameters. For example:
|
-REQUEST_TEMPLATE_IGNORE_DEFAULT=<YES|NO> |
No |
This is only applicable when invoking the main operation specified by OPERATION. Default value is "NO". |
-RESPONSE_FILE=<Response_File> |
No |
This parameter specifies the name of the file to which the REST call response will be written into. If this parameter is not specified, then the response generated by REST call will be discarded, if any. |
-RESPONSE_HDFS_LSCHEMA=<Response HDFS LSchema> |
No |
This parameter denotes the HDFS file configuration for the response file. This is applicable when response file is to be created in HDFS. |
-RESPONSE_MODE=<NEW_FILE|FILE_APPEND> |
No |
This is ignored if RESPONSE_FILE is not specified. Default value is “NO”. |
-RESPONSE_FILE_CHARSET=<javaCharset> |
No |
This parameter is applicable only if the REST service is expected to give a response which is character data. When this character data is written into a response file, this encoding will be used. If the REST response is expected to be a binary file, then this parameter must not be specified. |
-TIMEOUT=<timeout> |
No |
This parameter indicates the amount of time (in milliseconds) the REST service invocation waits before considering that the server will not provide a response, at which point an error is produced. If no value is given, the waiting time is infinite and there is no time-out during invocation. |
-RETRY_COUNT=<Retry_Count> |
No |
This parameter represents the number of times the REST tool should retry accessing a REST service that is failing due to network-related issues such as time-outs. Default value is 3. |
-RETRY_INTERVAL=<Retry_Interval> |
No |
This parameter specifies the time interval (in milliseconds) after which retry should be attempted. Default value is 10000. |
-FAILURE_STATUS_CODES=<Failure Status codes> |
No |
This parameter specifies the comma separated values of the status codes which indicate failure. When REST invocation gives any of these status codes, the tool execution will be stopped and tool will raise an error. For Example — |
-TRACE_FILE=<Trace_File> |
No |
All available response status and response header information will be written into this file for debugging purpose. |
-TRACE_FILE_MODE=<NEW_FILE|FILE_APPEND> |
No |
This parameter is ignored if TRACE_FILE is not specified. Default value is "NO". |
-NEXT_REQUEST_RESOLVER=<Next_Request_Resolver> |
No |
This can contain an XPath or JSONPath expression, a numeric page size, or a response header key / Link header relation. For example:
JsonPath example: $.account.container[0].name. XPath example: /account/container[1]/name/text().
If the value is numeric (a page size), the tool uses it to increase the page offset. The starting value for the page offset is 0, and the new page offset value is set to the odi.PAGE_TOKEN runtime template.
If the value is a string, it is treated as a response header key or a Link header relation. First the response headers are checked for the header key; if the header key is not found, the value is considered a Link header relation. The Link header in the REST response is parsed, and the relation value is used to identify the next page link. Possible values for the Link header relation are next, next page, and so on. Numeric Link header relations are not supported. Once the next page URL is identified, it is set to the odi.OPERATION_URL_OVERRIDE runtime template. For example, the GitHub REST API provides next page URL information in the Link header: https://developer.github.com/guides/traversing-with-pagination/
Note: Once the runtime template odi.OPERATION_URL_OVERRIDE is set, any operation initialized afterwards uses this value to override the resource path defined in the operation. For example, if this runtime template is set after the PRE_OPERATION invocation, then OPERATION and POST_OPERATION use this value as the resource path, and the resource path defined in those operations is ignored. |
-TOTAL_COUNT_FIELD_RESOLVER=<Total_Count_Field_Resolver> |
No |
This parameter is used by Pagination type invocations. If the total count of records is present in the paginated response, you need to specify an XPath or JSONPath expression here to extract it. |
-RESOLVER_OVERWRITE_CLASS=<Resolver_Overwrite_Class> |
No |
If the built-in resolvers cannot handle your service's pagination or chunk upload behavior, you can provide your own resolver implementation using this parameter. Note: These publicly visible classes are available in the SDK library.
public interface oracle.odi.runtime.rest.INextRequestResolver {
    public boolean process(
        oracle.odi.runtime.rest.IRestToolProvider restToolProvider,
        oracle.odi.runtime.rest.IOdiToolRestRequest request,
        java.io.InputStream response, /* REST response input stream */
        javax.ws.rs.core.MultivaluedMap<String,Object> responseHeaders /* REST response headers */)
        throws Exception;
}
This implementation should handle the termination condition for Pagination type invocations: when the last page is reached, the process() method should return false. For Pagination type invocations, the REST tool keeps running until process() returns false.
You can use the setter methods on the request object to modify the behavior of the next request, but only if the next request is of the same operation. |
-RESPONSE_DATA_CONTAINER=<Response_Data_Container> |
No |
The value will be an XPath or JSONPath expression used to extract the required data from each response; the extracted data is accumulated into the final response. |
<RequestBody> |
No |
Body of the REST request, provided on the line following the tool call. If REQUEST_BODY_FILE or REQUEST_BODY is specified, this value is ignored. |
Usage Recommendations
General usage recommendations for the OdiInvokeRESTfulService tool are:
The tool identifies the invocation type based on the parameters provided. There are three types of invocations: Regular, Chunk Upload, and Pagination. If the CHUNK_SIZE or CHUNK_PREFIX parameter is present, it is a Chunk Upload invocation. If these parameters are absent and the NEXT_REQUEST_RESOLVER or RESOLVER_OVERWRITE_CLASS parameter is present, the invocation is of Pagination type. All other invocations are considered regular invocations, and it is recommended to use only the OPERATION parameter to specify the REST operation.
You may pass the parameter indicating the page size (maxResults, count, and so on) as a query parameter for optimized retrieval. If not, the REST service returns pages with its default size. In both cases, these tool parameters can be used to make repeated calls to retrieve the complete response. If you specify the page size, it should not exceed the maximum limit imposed by the REST service.
JsonPath is the JSON equivalent of XPath. For more details, see https://github.com/jayway/JsonPath. The link http://jsonpath.herokuapp.com/ can be used to evaluate JSONPath expressions. The REST tool uses the Jayway implementation.
The tool relies on the resolver expressions (provided in NEXT_REQUEST_RESOLVER, TOTAL_COUNT_FIELD_RESOLVER, or RESPONSE_DATA_CONTAINER_RESOLVER) to determine whether the response content is JSON or XML. For example, if the expected response is JSON, you should not provide XPath resolver expressions.
If an XPath or JSONPath expression is given for the NEXT_REQUEST_RESOLVER parameter, the expression should resolve to a string value (a token, an upload id, or a URL). A JSON array containing a single item is also considered a valid resolved value for a JSONPath expression.
For JSONPath, a valid resolved value may look like "ABCDEF", 12345, ["ABCDEF"], [12345], http://host:port/context/resourcepath/, and so on.
For an XPath expression, a valid resolved value may look like ABCDEF, 12345, http://host:port/context/resourcepath/, and so on.
You can use runtime templates while constructing the URL path, query, and headers. The values of these templates are determined at runtime. All runtime templates start with the odi prefix in the template name, which distinguishes runtime templates from regular user-defined templates. Runtime templates can also be used as the values of the parameters PRE_OPERATION_BODY, POST_OPERATION_BODY, and REQUEST_BODY_PART_VALUE.
The following runtime templates are recognized by this tool. The values of runtime templates are set once or multiple times by the tool during REST invocations. You can specify an initial value for a runtime template in the operation definition in a REST physical schema, just like a regular user-defined template, but initializing a runtime template using the REQUEST_TEMPLATE tool parameter is not allowed.
odi.CHUNK_PREFIX
— This template represents the value of CHUNK_PREFIX
parameter and this value is set once chunk files are created.
odi.UPLOAD_ID
— This runtime template will contain the upload session id once this is resolved from the first invocation response (using NEXT_REQUEST_RESOLVER
). This is set in Chunk Upload invocation and is set only once.
You can provide your own implementation using the RESOLVER_OVERWRITE_CLASS parameter.
odi.PAGE_TOKEN
— This runtime template will contain the page token or page offset for next page request. This is used for Pagination invocation and a new value will be set after each page response and is processed using NEXT_REQUEST_RESOLVER
.
You can provide your own implementation using RESOLVER_OVERWRITE_CLASS
parameter.
odi.OPERATION_URL_OVERRIDE
– If NEXT_REQUEST_RESOLVER
is returning a URL, then that URL will be set to this runtime template. For Chunk upload type this is set only once. For Pagination type, this is set after each page response is processed. You can provide your own implementation using RESOLVER_OVERWRITE_CLASS
parameter.
For chunk upload type invocations, once the chunk files are identified or created, the following template is set:
odi.TOTAL_CHUNK_SIZE
— The total size of all chunk files combined.
The following runtime templates are set when a particular chunk file is chosen for upload:
odi.CHUNK_PATH
— Absolute path of chunk file in the temp directory
odi.CHUNK_NAME
— Name of the chunk
odi.CHUNK_SIZE
— Size of the chunk. Usually this is the same for all chunks, but the last chunk may differ.
odi.CHUNK_INDEX
— Index of the chunk being used, starting from 0 (the first chunk has index 0).
Examples
Listed below are examples of the OdiInvokeRESTfulService tool for the Pagination and Chunk Upload functions. The following examples illustrate the Pagination function.
Twitter Followers API
Reference Link : https://dev.twitter.com/overview/api/cursoring
Let's consider the following Twitter API, which lists the ids of the followers: https://api.twitter.com/1.1/followers/ids.json?screen_name=<your-screen-name>
Let's suppose the default page size of this API is 15. The response of the first invocation will be as below:
{ "ids": [ 2552855054, 4345418177, 3803100858, 56422577, 3326965752, 3075258528, 3302261082, 297834835, 2927402418, 56053134, 78849029, 70703605, 2850513554, 161289980, 548960923 ], "next_cursor": 1434098452051477000, "next_cursor_str": "1434098452051476935", "previous_cursor": 0, "previous_cursor_str": "0" }
Define an operation "getFollowers" with the URL: https://api.twitter.com/1.1/followers/ids.json?screen_name=<your-screen-name>
In the main operation, add the query parameter cursor={odi.PAGE_TOKEN}:
PRE_OPERATION=getFollowers
OPERATION=getFollowers
REQUEST_QUERY.cursor={odi.PAGE_TOKEN} // this will be added to the main operation only
NEXT_REQUEST_RESOLVER=$.next_cursor_str
RESPONSE_DATA_CONTAINER_RESOLVER=$.ids
The JSONPath expression $.next_cursor_str will be resolved to the value "1434098452051476935". This value is used for the cursor parameter in the second REST invocation:
https://api.twitter.com/1.1/followers/ids.json?screen_name=<your-screen-name>&cursor=1434098452051476935
{ "ids": [ 548960923, 435520948, 338402626, 80845228 ], "next_cursor": 0, "next_cursor_str": "0", "previous_cursor": -1434098452051477000, "previous_cursor_str": "-1434098452051476935" }
In the response, the value of next_cursor is 0, which indicates that there are no more pages left, so the tool can stop execution after writing the response. The RESPONSE_DATA_CONTAINER_RESOLVER parameter is used to fetch the required data from the response. The final response will be as follows:
[ 2552855054, 4345418177, 3803100858, 56422577, 3326965752, 3075258528, 3302261082, 297834835, 2927402418, 56053134, 78849029, 70703605, 2850513554, 161289980, 548960923, 548960923, 435520948, 338402626, 80845228 ]
Google Drive API
This is similar to Twitter REST API pagination: a page pointer in the response is used as the value of the page token query parameter in the next request.
Let’s consider the following API URL: https://developers.google.com/apis-explorer/#s/drive/v2/drive.files.list?_h=1
Sample Response
{ "kind": "drive#fileList", "etag": "\"rCKCAyesbPCaBxGt0eDJcEBQNUI/HNdpkEyt-3gaIlW8i4TRzGJXk-w\"", "selfLink": "https://www.googleapis.com/drive/v2/files?maxResults=3", "nextPageToken": "EAIaqgELEgBSoQEKjwEKaPjz", "nextLink": "https://www.googleapis.com/drive/v2/files? maxResults=3&pageToken=EAIaqgELEgBSoQEKjwEKaPjz", "items": [ ... ] }
Define an operation “getFiles” with URL: https://developers.google.com/apis-explorer/#s/drive/v2/drive.files.list?_h=1
PRE_OPERATION=getFiles
OPERATION=getFiles
REQUEST_QUERY.pageToken={odi.PAGE_TOKEN} // this will be added to the main operation only
NEXT_REQUEST_RESOLVER=$.nextPageToken
RESPONSE_DATA_CONTAINER_RESOLVER=$.items
The JSONPath expression $.nextPageToken will be resolved to the value EAIaqgELEgBSoQEKjwEKaPjz.
Using the above parameters, the REST tool will construct the URL for the second invocation as below:
https://developers.google.com/apis-explorer/#s/drive/v2/drive.files.list?_h=1&pageToken=EAIaqgELEgBSoQEKjwEKaPjz
Since the Google Drive API also provides nextLink in the response, which is the complete URL of the next page, there is an alternative way of accumulating the paginated result.
Set NEXT_REQUEST_RESOLVER
as below:
PRE_OPERATION=getFiles
OPERATION=getFiles
REQUEST_QUERY.pageToken={odi.PAGE_TOKEN} // this will be added to the main operation only
NEXT_REQUEST_RESOLVER=$.nextLink
RESPONSE_DATA_CONTAINER_RESOLVER=$.items
Salesforce REST API (Chatter REST API)
This is also similar to Twitter REST API pagination, except that the next page pointer in the REST response is an actual URL. In the Google Drive and Twitter APIs, the next page pointer was a token, passed as the value of the query parameter <PageTokenParam> in the second and subsequent requests.
Reference Link: http://help.salesforce.com/HTViewSolution?id=000175552&language=en_US
The Salesforce Chatter API URL suffix will be similar to: /services/data/v25.0/chatter/feeds/news/me/feed-items?pageSize=25
Here the pageSize query parameter indicates the page size; providing pageSize is optional. The response will contain a field nextPageUrl, whose value is the complete REST URL to get the next page of the result.
LinkedIn Get Company Updates (Offset Based Pagination)
Pagination support in LinkedIn REST APIs is different when compared to the above mentioned REST APIs.
Reference Link : https://developer.linkedin.com/docs/rest-api
Let’s consider the following REST API — https://api.linkedin.com/v1/companies/1337/updates
Define an operation getCompanyUpdates
for the above URL.
PRE_OPERATION=getCompanyUpdates
OPERATION=getCompanyUpdates
NEXT_REQUEST_RESOLVER=10 // page size, specified in the URL below using the count parameter
RESPONSE_DATA_CONTAINER_RESOLVER=$.values
TOTAL_COUNT_FIELD_RESOLVER=$._total
REQUEST_QUERY.start={odi.PAGE_TOKEN} // will be used by the main operation only
The first REST invocation uses the URL below, provided the user adds the start query parameter to the query string using either REQUEST_QUERY or ENCODED_REQUEST_QUERY_STRING. Since the first invocation is the PRE_OPERATION, it does not consider the REQUEST_QUERY parameter specified in the tool: https://api.linkedin.com/v1/companies/1337/updates?start=0&count=10&format=json
Response for first invocation
{ "_count": 10, "_start": 0, "_total": 26, "values": [ … ] }
As each response is processed, the tool increments the value of the odi.PAGE_TOKEN runtime template by the numeric value specified in NEXT_REQUEST_RESOLVER, starting from 0, so the new value for start is 0+10=10. This is less than the total record count of 26 (extracted using the JSONPath expression specified in TOTAL_COUNT_FIELD_RESOLVER), so there is a next page.
The second REST invocation URL will be — https://api.linkedin.com/v1/companies/1337/updates?start=10&count=10&format=json
Response for second invocation
{ "_count": 10, "_start": 10, "_total": 28, "values": [ … ] }
The new value for start is 10+10=20. This is less than the total record count of 28 (earlier it was 26; two more records were added on the REST service end), so there is a next page.
The third REST invocation URL will be — https://api.linkedin.com/v1/companies/1337/updates?start=20&count=10&format=json
Response for third invocation
{ "_count": 10, "_start": 20, "_total": 24, "values": [ … ] }
The new value for start is 20+10=30. This is greater than the total record count of 24 (it was 28 in the last request, but changed to 24 because four records were deleted on the REST service end in the meantime), so this is the last page.
Total records value will be extracted from each response and will be used to determine the last page.
Note:
If you do not specify the page size using the count query parameter, the REST service returns pages with its default page size. In that case, you must provide this default page size as the value of the NEXT_REQUEST_RESOLVER parameter, because the tool cannot identify the default page size of a service.
The tool handles this pagination using an internal resolver. For this resolver to be used, you must provide values for NEXT_REQUEST_RESOLVER and TOTAL_COUNT_FIELD_RESOLVER, and the value given for NEXT_REQUEST_RESOLVER must be numeric (it indicates the page size).
This pagination can also be solved by providing your own resolver through the RESOLVER_OVERWRITE_CLASS parameter. REST tool parameters:
RESOLVER_OVERWRITE_CLASS=myCom.LinkedInPaginationResolver

public class OffsetBasedPaginationResolver implements INextRequestResolver {

    private int mStartIndex = 0; // Default value of the start index

    public OffsetBasedPaginationResolver() {
    }

    public OffsetBasedPaginationResolver(int startIndex) {
        mStartIndex = startIndex;
    }

    @Override
    public boolean process(IRestToolProvider restToolProvider,
            IOdiToolRestRequest request,
            InputStream responseData,
            MultivaluedMap<String,Object> responseHeaders) throws Exception {
        int pageSize = Integer.parseInt(restToolProvider.getToolParameterValue(IRestToolProvider.PARAM_NEXT_REQUEST_RESOLVER).toString());
        String totalCountResolverExpression = restToolProvider.getToolParameterValue(IRestToolProvider.PARAM_TOTAL_COUNT_FIELD_RESOLVER).toString();
        long totalRecords = InvokeRESTfulServiceSupport.getPathLongValue(responseData, totalCountResolverExpression);
        mStartIndex += pageSize;
        if (mStartIndex <= totalRecords) {
            restToolProvider.getRuntimeTemplates().put(IRestToolProvider.RUNTIME_TEMPLATE_PAGE_TOKEN, mStartIndex);
            return true;
        }
        return false;
    }
}
Oracle Storage Cloud Service API – List Containers
Pagination support in Oracle Storage Cloud Service REST APIs differs from the above mentioned REST APIs in how the next page token is extracted from the response.
Reference Link : http://docs.oracle.com/cloud/latest/storagecs_common/SSAPI/op-v1-%7Baccount%7D-get.html#request
Let's consider the following REST API. Here the maximum limit is MAX_LIMIT=10000 (this is both the default and the maximum value). Let's assume the default page size, or MAX_LIMIT, is 5.
First invocation URL is — https://<your_domain>.storage.oraclecloud.com/v1/<your_account>?format=json
Define an operation listContainers
with above URL:
PRE_OPERATION=listContainers
OPERATION=listContainers
REQUEST_QUERY.marker={odi.PAGE_TOKEN}
NEXT_REQUEST_RESOLVER=$[-1:].name // fetches the name field from the last record; to fetch the name field from the first record, use the JSONPath expression $[:1]
Sample response for first invocation
[{"name":"container-10","count":0,"bytes":0,"accountId":{"id":15935},"deleteTimestamp":0.0,"containerId":{"id":223537}}, {"name":"container1","count":0,"bytes":0,"accountId":{"id":15935},"deleteTimestamp":0.0,"containerId":{"id":223427}}, {"name":"container10","count":0,"bytes":0,"accountId":{"id":15935},"deleteTimestamp":1.45984151022644E9,"containerId":{"id":223433}}, {"name":"container2","count":0,"bytes":0,"accountId":{"id":15935},"deleteTimestamp":0.0,"containerId":{"id":223379}}, {"name":"container3","count":0,"bytes":0,"accountId":{"id":15935},"deleteTimestamp":0.0,"containerId":{"id":223380}}]
Next page resolver will be resolved to a JSON array.
[“container3”]
The REST tool will extract the value container3 from this JSON array.
New REST URL for second page will be — https://<your_domain>.storage.oraclecloud.com/v1/<your_account>?marker=container3
Resolved value for JSONPath expression $..name
on the response data
[ "container-10", "container1", "container10", "container2", "container3"]
Now let’s consider the scenario where response format is XML — https://<your_domain>.storage.oraclecloud.com/v1/<your_account>?format=xml
Define an operation listContainersXml
with above URL.
PRE_OPERATION=listContainersXml
OPERATION=listContainersXml
REQUEST_QUERY.marker={odi.PAGE_TOKEN}
NEXT_REQUEST_RESOLVER=/account/container[last()]/name/text() // fetches the name field from the last record; to fetch the name field from the first record, use the XPath expression /account/container[1]/name/text()
RESPONSE_DATA_CONTAINER_RESOLVER=/account // this will be the root element in the accumulated response
Sample response of first invocation
<?xml version="1.0" encoding="UTF-8"?> <account><container><accountId><id>15935</id></accountId><bytes>0</bytes><containerId><id>223537</id></containerId><count>0</count><deleteTimestamp>0.0</deleteTimestamp><name>container-10</name></container> <container><accountId><id>15935</id></accountId><bytes>0</bytes><containerId><id>223427</id></containerId><count>0</count><deleteTimestamp>0.0</deleteTimestamp><name>container1</name></container> <container><accountId><id>15935</id></accountId><bytes>0</bytes><containerId><id>223433</id></containerId><count>0</count><deleteTimestamp>1.45984154E9</deleteTimestamp><name>container10</name></container> <container><accountId><id>15935</id></accountId><bytes>0</bytes><containerId><id>223379</id></containerId><count>0</count><deleteTimestamp>0.0</deleteTimestamp><name>container2</name></container> <container><accountId><id>15935</id></accountId><bytes>0</bytes><containerId><id>223380</id></containerId><count>0</count><deleteTimestamp>0.0</deleteTimestamp><name>container3</name></container></account>
The next page resolver will be resolved to the value container3, which is used as the next value of the marker query parameter.
Oracle Storage Cloud Service API – Get Object Content (Download object)
This API supports retrieval of objects stored in the containers. Objects can be retrieved in chunks using the Range header. This API does not have a maximum limit for the number of bytes that can be downloaded, but using the pagination support in the REST tool, you can download an object in chunks.
If you want to download an object named mydata
which is stored in container container1
, the first invocation URL is: https://<your_domain>.storage.oraclecloud.com/v1/<your_account>/container1/mydata
You need to set the initial Range header using tool parameter HEADER.
HEADER.Range=bytes=1-99
RESOLVER_OVERWRITE_CLASS=com.StorageCSPaginationResolver
You need to provide an implementation that parses the Content-Range response header to get the current range and the total byte size. A sample value is "bytes 1-99/2677": 1-99 is the range and 2677 is the total number of bytes available. Your process() implementation in StorageCSPaginationResolver should compute the next range string, for example "bytes=100-199", which is passed as the new Range header value in the second REST call. You should also handle the termination condition in your implementation: when a new range is calculated, if its lower bound is greater than or equal to the total byte count, no data remains, and process() should return false (the return type of process() is boolean).
GitHub API
This API is an example for Link header based pagination support.
First REST invocation will happen with the URL — https://api.github.com/search/code?q=addClass+user:mozilla&page=1
NEXT_REQUEST_RESOLVER=next // a Link header relation
RESPONSE_DATA_CONTAINER_RESOLVER=$.items
The response will contain a Link header similar to the following:
Link: <https://api.github.com/search/code?q=addClass+user%3Amozilla&page=2>; rel="next", <https://api.github.com/search/code?q=addClass+user%3Amozilla&page=34>; rel="last"
The REST tool will parse this link header and extract the link that corresponds to the relation "next".
The URL for next invocation will be — https://api.github.com/search/code?q=addClass+user%3Amozilla&page=2
This continues until the Link header in the response no longer contains a relation named "next".
This can also be achieved by your own implementation of the resolver class, as described below:
NEXT_REQUEST_RESOLVER=next
RESOLVER_OVERWRITE_CLASS=com.GitHubPaginationResolver
RESPONSE_DATA_CONTAINER_RESOLVER=$.items
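A minimal sketch of such a resolver, again patterned on the OffsetBasedPaginationResolver example above, is shown below. It parses the Link response header and publishes the rel="next" URL; the string key used for the odi.OPERATION_URL_OVERRIDE runtime template is an assumption, so check the SDK javadoc for the exact constant.

import java.io.InputStream;
import javax.ws.rs.core.MultivaluedMap;
import oracle.odi.runtime.rest.*; // INextRequestResolver, IRestToolProvider, IOdiToolRestRequest

public class GitHubPaginationResolver implements INextRequestResolver {

    @Override
    public boolean process(IRestToolProvider restToolProvider,
            IOdiToolRestRequest request,
            InputStream responseData,
            MultivaluedMap<String, Object> responseHeaders) throws Exception {
        Object linkHeader = responseHeaders.getFirst("Link");
        if (linkHeader == null) {
            return false; // no Link header: nothing more to fetch
        }
        // A Link header looks like: <https://...&page=2>; rel="next", <https://...&page=34>; rel="last"
        for (String part : linkHeader.toString().split(",")) {
            String[] segments = part.split(";");
            if (segments.length == 2 && segments[1].trim().equals("rel=\"next\"")) {
                String nextUrl = segments[0].trim();
                nextUrl = nextUrl.substring(1, nextUrl.length() - 1); // strip the surrounding < and >
                // Assumption: the template key matches the runtime template name.
                restToolProvider.getRuntimeTemplates().put("odi.OPERATION_URL_OVERRIDE", nextUrl);
                return true; // a next page exists: issue another request
            }
        }
        return false; // no rel="next" relation: the last page has been reached
    }
}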
Twitter timeline API
Note:
Pagination support in the Twitter Tweets API is the same as in the Oracle Storage Cloud Service API above. Consider the REST service initial URL: https://api.twitter.com/1.1/search/tweets.json?q=<screen_name>
Define an operation "getTweets "using above URL.
PRE_OPERATION=getTweets
OPERATION=getTweets
REQUEST_QUERY.since_id={odi.PAGE_TOKEN}
NEXT_REQUEST_RESOLVER=$[:1].id_str // this will give the id_str from the first record
RESPONSE_DATA_CONTAINER_RESOLVER=$.statuses
Second invocation URL will be — https://api.twitter.com/1.1/search/tweets.json?q=<screen_name>&since_id=<resolved_next_page_value>
Twitter Media Upload
Reference link: https://dev.twitter.com/rest/reference/post/media/upload-init
It has three types of operations. They are:
POST media/upload (INIT) – This request will start an upload session
Parameters:
command=INIT
total_bytes= total file size
POST media/upload (APPEND) - multipart/form-data format
Parameters:
command=APPEND
media_id= the media_id returned from the INIT command
media= the raw binary file content being uploaded, <=5MB // media is the multipart body part field name
segment_index= an ordered index of file chunks; it must be between 0-999 inclusive. The first segment has index 0, the second segment has index 1, and so on.
POST media/upload (FINALIZE)
Parameters:
command=FINALIZE
media_id= the media_id returned from the INIT command
For example, assume you have to upload 1 GB (1073741824 bytes) of media.
You need to create three REST operations in the physical schema. Define the operations below:
Initialize(POST) = https://upload.twitter.com/1.1/media/upload.json?command=INIT&total_bytes=1073741824
Upload (POST)=https://upload.twitter.com/1.1/media/upload.json?command=APPEND&media_id={odi.UPLOAD_ID}&segment_index={odi.CHUNK_INDEX}
Finalize(POST)=https://upload.twitter.com/1.1/media/upload.json?command=FINALIZE&media_id={odi.UPLOAD_ID}
The response of first invocation can be:
{ "media_id": 710511363345354753, "media_id_string": "710511363345354753", "size": 11065, "expires_after_secs": 86400, }
CONTEXT=<Context>
LSCHEMA=<Logical Schema>
INPUT_FILE=<input file path>
CHUNK_SIZE=5000000 // nearly 5 MB; this will break the input file into 215 chunks
REQUEST_BODY_PART_NAME=media
REQUEST_BODY_PART_VALUE={odi.CHUNK_PATH}
REQUEST_BODY_PART_CONTENT_TYPE=application/json
PRE_OPERATION=Initialize
NEXT_REQUEST_RESOLVER=$.media_id
OPERATION=Upload // this operation uploads all chunks
POST_OPERATION=Finalize
TRACE_FILE=<Trace File Name>
Google Drive Upload Files API
Reference link : https://developers.google.com/drive/v3/web/manage-uploads#resumable
It has two types of operations. They are:
POST files?uploadType=resumable (initial request)
Use the following HTTP headers with the initial request:
X-Upload-Content-Type.
Set to the media MIME type of the upload data to be transferred in subsequent requests.
X-Upload-Content-Length.
Set to the number of bytes of upload data to be transferred in subsequent requests. If the length is unknown at the time of this request, you can omit this header.
If providing metadata: Content-Type.
Set according to the metadata's data type.
Content-Length.
Set to the number of bytes provided in the body of this initial request. Not required if you are using chunk transfer encoding.
The response will contain a response header location which contains the session_uri
.
PUT session_uri (uploading chunks)
The Content-Length header must be set when a chunk is being uploaded. Suppose you need to upload 1 GB (1073741824 bytes) of media. You need to create two REST operations in the physical schema.
Define below operations:
InitialRequest (POST)= https://www.googleapis.com/upload/drive/v3/files?uploadType=resumable
Set the necessary headers such as X-Upload-Content-Type, X-Upload-Content-Length
and so on, while defining the operation in the physical schema.
Upload (PUT)=https://www.googleapis.com/upload/drive/v3/files?uploadType=resumable (This URL will be ignored; the upload URL obtained from the first operation's response will be used.) Set the Content-Type header, and set Content-Length={odi.CHUNK_SIZE}.
The response of first operation can be:
HTTP/1.1 200 OK Location: https://www.googleapis.com/upload/drive/v3/files?uploadType=resumable&upload_id=xa298sd_sdlkj2 Content-Length: 0
CONTEXT=<Context>
LSCHEMA=<Logical Schema>
INPUT_FILE=<input file path>
CHUNK_SIZE=5000000 // nearly 5 MB; this will break the input file into 215 chunks
PRE_OPERATION=InitialRequest
NEXT_REQUEST_RESOLVER=Location // Location is expected to be a response header
OPERATION=Upload
TRACE_FILE=<Trace File Name>
COMMVAULT API (Upload a File in Chunks)
Reference link: https://documentation.commvault.com/commvault/v10/article?p=features/rest_api/operations/post_contentstore_share_chunk_upload.htm
It has two types of operations:
POST upload?uploadType=chunkedFile (initial request)
Set all necessary headers in the operation. The response will be an element DM2ContentIndexing_UploadFileResp
which contains the upload id. This will be the session id for upload requests.
POST upload?uploadType=chunkedFile
For example, if you have to upload 1 GB (1073741824 bytes) of media, you need to create three REST operations in the physical schema.
Define the below operations:
InitUpload (POST)= SearchSvc/CVWebService.svc/contentstore/share/{shareId}/file/action/upload?uploadType=chunkedFile
Set the necessary headers such as FileName, FileSize, ParentFolderPath
and so on while defining the operation in the physical schema.
Upload (POST)= SearchSvc/CVWebService.svc/contentstore/share/{shareId}/file/action/upload?uploadType=chunkedFile&requestId={odi.UPLOAD_ID}
Set all necessary headers for the upload. Ignore FileEOF
header. This operation uploads all chunks except the last one.
EndUpload (POST)= SearchSvc/CVWebService.svc/contentstore/share/{shareId}/file/action/upload?uploadType=chunkedFile&requestId={odi.UPLOAD_ID}
Set header, FileEOF=1
The response of the first operation may look as shown below. The field requestId
contains the upload id.
<DM2ContentIndexing_UploadFileResp requestId="13213022088234198160108125214183230586134182" chunkOffset="780830" errorCode="409" />
CONTEXT=<Context>
LSCHEMA=<Logical Schema>
INPUT_FILE=<input file path>
CHUNK_SIZE=5000000 // nearly 5 MB; this will break the input file into 215 chunks
PRE_OPERATION=InitUpload
PRE_OPERATION_BODY={odi.CHUNK_PATH} // sets the first chunk as the body of this operation
NEXT_REQUEST_RESOLVER=string(/DM2ContentIndexing_UploadFileResp/@requestId) // XPath expression to get the requestId
OPERATION=Upload
POST_OPERATION=EndUpload
POST_OPERATION_BODY={odi.CHUNK_PATH} // sets the last chunk as the body of this operation
TRACE_FILE=<Trace File Name>
Oracle Storage Cloud Service
Reference link: https://docs.oracle.com/cloud/latest/storagecs_common/CSSTO/GUID-CA3E7F7B-4B33-4C18-8CEB-652813D9ADFB.htm
It has two types of operations:
PUT accountURL/containerName/{odi.CHUNK_NAME} /
PUT accountURL/containerName/manifestFile
For example, if you need to upload 1 GB (1073741824 bytes) of media, you need to create two REST operations in the physical schema.
Define the operations below:
Upload (POST)=<accountURL>/{containerName}/{odi.CHUNK_NAME}
UploadManifest (POST)=<accountURL>/{containerName}/manifestFile
This operation will be used to upload the 0 byte manifest object.
Set the following headers for this operation:
X-Object-Manifest={containerName}/{odi.CHUNK_PREFIX}
Content-Length=0
CONTEXT=<Context>
LSCHEMA=<Logical Schema>
INPUT_FILE=<input file path>
CHUNK_SIZE=5000000 // nearly 5 MB; this will break the input file into 215 chunks
OPERATION=Upload
POST_OPERATION=UploadManifest
TRACE_FILE=<Trace File Name>
Note:
This tool replaces the OdiExecuteWebService tool.
Use this command to invoke a web service over HTTP/HTTPS and write the response to an XML file.
This tool invokes a specific operation on a port of a web service whose description file (WSDL) URL is provided.
If the LOGICAL_SCHEMA
parameter is specified, this tool will use the configuration from topology objects. The syntax for the existing mode will be supported for backward compatibility.
If this operation requires a web service request, it is provided either in a request file, or directly written out in the tool call (<XML Request>
). This request file can have two different formats (XML
, which corresponds to the XML body only, or SOAP
, which corresponds to the full-formed SOAP envelope including a SOAP header and body) specified in the -RESPONSE_FILE_FORMAT
parameter. The response of the web service request is written to an XML file that can be processed afterwards in Oracle Data Integrator. If the web service operation is one-way and does not return any response, no response file is generated.
Note:
This tool cannot be executed in a command line with startcmd
.
Usage
Syntax for new topology
OdiInvokeWebService -LOGICAL_SCHEMA=<WS Logical Schema> -OPERATION=<operation> -CONTEXT=<ODI Context> (Optional) [<XML Request>][-REQUEST_FILE=<xml_request_file>] [-RESPONSE_MODE=<NO_FILE|NEW_FILE|FILE_APPEND>] [-RESPONSE_FILE=<xml_response_file>] [-RESPONSE_XML_ENCODING=<charset>] [-RESPONSE_FILE_CHARSET=<charset>] [-RESPONSE_FILE_FORMAT=<XML|SOAP>] [-TIMEOUT=<timeout>]
Syntax for existing mode
OdiInvokeWebService -URL=<url> -PORT=<port> -OPERATION=<operation> [<XML Request>] [-REQUEST_FILE=<xml_request_file>] [-RESPONSE_MODE=<NO_FILE|NEW_FILE|FILE_APPEND>] [-RESPONSE_FILE=<xml_response_file>] [-RESPONSE_XML_ENCODING=<charset>] [-RESPONSE_FILE_CHARSET=<charset>] [-RESPONSE_FILE_FORMAT=<XML|SOAP>] [-HTTP_USER=<user>] [-HTTP_PASS=<password>] [-TIMEOUT=<timeout>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
-LOGICAL_SCHEMA=<WS Logical Schema> |
No |
Logical schema configured for the SOAP web service data server (optional parameter). If LOGICAL_SCHEMA is specified, OdiInvokeWebService uses the URL, PORT, HTTP_USER, and HTTP_PASS configured on the mapped SOAP WS physical schema and/or SOAP WS data server. |
-CONTEXT=<ODI Context> |
No |
Context in which the logical schema will be resolved. If no context is specified, the execution context is used (optional parameter). |
-URL=<url> |
No |
URL of the Web Service Description File (WSDL) describing the web service. |
-PORT=<port> |
No |
Name of the WSDL port type to invoke. |
-OPERATION=<operation> |
Yes |
Name of the web service operation to invoke. |
<XML Request> |
No |
Request message in SOAP (Simple Object Access Protocol) format. This message should be provided on the line immediately following the OdiInvokeWebService call. The request can alternately be passed through a file whose location is provided with the |
-REQUEST_FILE=<xml_request_file> |
No |
Location of the XML file containing the request message in SOAP format. The request can alternately be directly written out in the tool call ( |
-RESPONSE_MODE=<NO_FILE|NEW_FILE|FILE_APPEND> |
No |
Generation mode for the response file. This parameter takes the following values:
NO_FILE: No response file is generated.
NEW_FILE: A new response file is generated, overwriting any existing file.
FILE_APPEND: The response is appended to the existing file.
|
-RESPONSE_FILE=<xml_response_file> |
Depends |
The name of the result file to write. Mandatory if |
-RESPONSE_FILE_CHARSET=<charset> |
Depends |
Response file character encoding. See the following table. Mandatory if |
-RESPONSE_XML_ENCODING=<charset> |
Depends |
Character encoding that will be indicated in the XML declaration header of the response file. See the following table. Mandatory if |
-RESPONSE_FILE_FORMAT=<XML|SOAP> |
No |
Format of the request and response file:
XML: The file contains the XML body only.
SOAP: The file contains the full SOAP envelope, including a SOAP header and body.
|
-HTTP_USER=<user> |
No |
User account authenticating on the HTTP server. |
-HTTP_PASS=<password> |
No |
Password of the HTTP user. Note: When using an ODI variable as the password, the variable content must be encrypted using the encode script. |
-TIMEOUT=<timeout> |
No |
The web service request waits for a reply for this amount of time before considering that the server will not provide a response and an error is produced. The default value is 15 seconds. |
The following table lists the most common XML/Java character encoding schemes. For a more complete list, see:
http://java.sun.com/j2se/1.4.2/docs/guide/intl/encoding.doc.html
XML Charset | Java Charset |
---|---|
US-ASCII |
ASCII |
UTF-8 |
UTF8 |
UTF-16 |
UTF-16 |
ISO-8859-1 |
ISO8859_1 |
Examples
The following web service call returns the capital city for a given country (the ISO country code is sent in the request). Note that the request and response format, as well as the port and operations available, are defined in the WSDL passed in the URL parameter.
OdiInvokeWebService -URL=http://www.oorsprong.org/websamples.countryinfo/CountryInfoService.wso?WSDL -PORT_TYPE=CountryInfoServiceSoapType -OPERATION=CapitalCity -RESPONSE_MODE=NEW_FILE -RESPONSE_XML_ENCODING=ISO-8859-1 "-RESPONSE_FILE=/temp/result.xml" -RESPONSE_FILE_CHARSET=ISO8859_1 -RESPONSE_FILE_FORMAT=XML
<CapitalCityRequest>
<sCountryISOCode>US</sCountryISOCode>
</CapitalCityRequest>
The generated /temp/result.xml
file contains the following:
<CapitalCityResponse>
<m:CapitalCityResponse>
<m:CapitalCityResult>Washington</m:CapitalCityResult>
</m:CapitalCityResponse>
</CapitalCityResponse>
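The same call can be expressed with the new topology syntax, given a SOAP WS logical schema that carries the service URL and credentials; the logical schema name WS_COUNTRYINFO below is illustrative.

OdiInvokeWebService -LOGICAL_SCHEMA=WS_COUNTRYINFO -CONTEXT=GLOBAL -OPERATION=CapitalCity -RESPONSE_MODE=NEW_FILE -RESPONSE_XML_ENCODING=ISO-8859-1 "-RESPONSE_FILE=/temp/result.xml" -RESPONSE_FILE_CHARSET=ISO8859_1 -RESPONSE_FILE_FORMAT=XML
<CapitalCityRequest>
<sCountryISOCode>US</sCountryISOCode>
</CapitalCityRequest>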
Packages
Oracle Data Integrator provides a special graphical interface for calling OdiInvokeWebService in packages. See the chapter Using Web Services in Developing Integration Projects with Oracle Data Integrator for more information.
Use this command to stop a standalone agent.
Java EE Agents deployed in an application server cannot be stopped using this tool and must be stopped using the application server utilities.
Usage
OdiKillAgent (-PORT=<TCP/IP Port>|-NAME=<physical_agent_name>) [-IMMEDIATE=<yes|no>] [-MAX_WAIT=<timeout>]
Parameter
Parameters | Mandatory | Description |
---|---|---|
-PORT=<TCP/IP Port> |
No |
If this parameter is specified, the agent running on the local machine with the specified port is stopped. |
-NAME=<physical_agent_name> |
Yes |
If this parameter is specified, the physical agent whose name is provided is stopped. This agent may be a local or remote agent. It must be declared in the master repository. |
-IMMEDIATE=<yes|no> |
No |
If this parameter is set to Yes, the agent is stopped without waiting for its running sessions to complete. If this parameter is set to No, the agent is stopped after its running sessions reach completion or after the |
-MAX_WAIT=<timeout> |
No |
This parameter can be used when |
Examples
Stop the ODI_AGT_001
physical agent immediately.
OdiKillAgent -NAME=ODI_AGT_001 -IMMEDIATE=yes
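To stop the same agent gracefully instead, waiting up to two minutes for running sessions to complete (the timeout value is illustrative):

OdiKillAgent -NAME=ODI_AGT_001 -IMMEDIATE=no -MAX_WAIT=120000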
Use this command to start and stop Oracle GoldenGate processes.
The -NB_PROCESS
parameter specifies the number of processes on which to perform the operation and applies only to Oracle GoldenGate Delivery processes.
If -NB_PROCESS
is not specified, the name of the physical process is derived from the logical process. For example, if logical schema R1_LS
maps to physical process R1
, an Oracle GoldenGate process named R1
is started or stopped.
If -NB_PROCESS
is specified with a positive value, sequence numbers are appended to the process and all processes are started or stopped with the new name. For example, if the value is set to 3
, and logical schema R2_LS
maps to physical process R2
, processes R21
, R22
and R23
are started or stopped.
If Start Journal is used to start the CDC (Changed Data Capture) process with Oracle GoldenGate JKMs (Journalizing Knowledge Modules), Oracle Data Integrator generates the Oracle GoldenGate Delivery process with the additional sequence number in the process name. For example, if Delivery process RP
is used for the Start Journal action, Start Journal generates an Oracle GoldenGate Delivery process named RP1
. To stop and start the process using the OdiManageOggProcess tool, set -NB_PROCESS
to 1
. The maximum value of -NB_PROCESS
is the value of the -NB_APPLY_PROCESS
parameter of the JKM within the model.
Usage
OdiManageOggProcess -OPERATION=<start|stop> -PROCESS_LSCHEMA=<OGG logical schema> [-NB_PROCESS=<number of processes>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
-OPERATION=<start|stop> |
Yes |
Operation to perform on the process. |
-PROCESS_LSCHEMA=<OGG logical schema> |
Yes |
Logical schema of the process. |
-NB_PROCESS=<number of processes> |
No |
Number of processes on which to perform the operation. |
Examples
Start Oracle GoldenGate process R1
, which maps to logical schema R1_LS
.
OdiManageOggProcess "-OPERATION=START" "-PROCESS_LSCHEMA=R1_LS
Use this command to create a directory structure.
If the parent directory does not exist, this command recursively creates the parent directories.
Usage
OdiMkDir -DIR=<directory>
Parameters
Parameters | Mandatory | Description |
---|---|---|
-DIR=<directory> |
Yes |
Directory (or folder) to create. |
|
No |
Indicates if the target is HDFS |
Examples
Create the directory odi
in C:\temp
. If C:\temp
does not exist, it is created.
OdiMkDir "-DIR=C:\temp\odi"
Use this command to integrate with the Oracle GoldenGate based CDC mechanism by performing specific runtime tasks that interact with the GoldenGate processes.
Usage
OdiOggCommand -OPERATION="ADDEXTRACT" -LSCHEMA="%EXTRACT_LSCHEMA%" OdiOggCommand -OPERATION="ADDREPLICAT" -LSCHEMA="%REPLICAT_LSCHEMA%" OdiOggCommand -OPERATION="DROPPROCESS" -LSCHEMA="%PROCESS_LSCHEMA%" OdiOggCommand -OPERATION="STARTPROCESS" –LSCHEMA="%PROCESS_LSCHEMA%" OdiOggCommand -OPERATION="STOPPROCESS" –LSCHEMA="%PROCESS_LSCHEMA%" OdiOggCommand -OPERATION="SAVEPARAM" –LSCHEMA="%PROCESS_LSCHEMA%" -FILEPATH="%TMP/PRMFILE%" OdiOggCommand -OPERATION="ADDPUMP" -LSCHEMA="%EXTRACT_LSCHEMA%" -NAME="%PUMPNAME%" OdiOggCommand -OPERATION="ADDCHECKPOINTTABLE" -LSCHEMA="%REPLICAT_LSCHEMA%" -TABLE="TABLE_NAME" OdiOggCommand -OPERATION="DEFGEN” -LSCHEMA="%EXTRACT_LSCHEMA%" –TGT_LSCHEMA="%REPLICAT_LSCHEMA%" OdiOggCommand -OPERATION="ADDTRANDATA" –LSCHEMA="%EXTRACT_LSCHEMA%" –TABLE_NAME="%TABLENAME%" –COLLIST="%[col1,col2]%" OdiOggCommand -OPERATION="DBLOGIN" –LSCHEMA="%PROCESS_LSCHEMA%" -MODEL_LSCHEMA_NAME="%EXTR_MODEL_DB_LSCHEMA%" OdiOggCommand -OPERATION="DBLOGIN" –LSCHEMA="%PROCESS_LSCHEMA%" –PSCHEMA_NAME="%REPLICAT_TGT_PSCHEMA%"
Operations
Operation | Description | Required Parameter | Supported Custom Parameters | Remarks |
---|---|---|---|---|
ADDEXTRACT |
Adds the Extract process to GoldenGate through JAgent. |
|
|
Retrieves the Extract and JAgent host details. |
ADDREPLICAT |
Adds the Replicat process to GoldenGate through JAgent. |
|
N/A |
Retrieves the Replicat and JAgent host details. |
DROPPROCESS |
Deletes the process associated with the logical schema. |
|
N/A |
Retrieves the host details based on the type of the process and logical schema. |
STARTPROCESS |
Starts the process associated with the logical schema. |
|
N/A |
Retrieves the host details based on the type of the process and logical schema. |
STOPPROCESS |
Stops the process associated with the logical schema. |
|
N/A |
Retrieves the host details based on the type of the process and logical schema. |
SAVEPARAM |
Uploads the param file. |
|
|
Saves the param file associated to the process in the JAgent host associated with the logical schema. |
ADDPUMP |
Adds a pump process in the JAgent host associated with the Extract process. |
|
|
The name of the pump is considered to be REPLICAT_NAME#P to associate a pump process to the Replicat process. |
ADDCHECKPOINTTABLE |
Adds the checkpoint table. |
|
|
The table name is obtained from the JKM option. |
DEFGEN |
Loads and runs defgen. |
|
|
The defgen run on the Extract source will be saved to the target Replicat host. |
ADDTRANDATA |
Enables tran data in Extract source. |
|
|
This has to be run for the Extract JAgent host. The details of the tables and columns have to be provided |
DBLOGIN |
Database login to enable GoldenGate operations. |
|
|
The user name and password required to log in to the database can be retrieved from journalized model logical schema and current context. |
DBLOGIN |
Database login to enable GoldenGate operations. |
|
|
The user name and password required to log in to the database can be retrieved from model physical schema assigned to Replicat process and current context. |
Examples
Add the Extract process to GoldenGate.
OdiOggCommand -OPERATION="ADDEXTRACT" -LSCHEMA="<%=odiRef.getOggProcessLschemaName("EXTRACT")%>"
Log in to the database by retrieving the user name and password from the physical schema assigned to the Replicat process.
OdiOggCommand -OPERATION="DBLOGIN" –LSCHEMA="<%=odiRef.getOggProcessLschemaName("REPLICAT")%>" -PSCHEMA_NAME="<%=odiRef.getOggProcessInfo("<%=odiRef.getOggProcessLschemaName("REPLICAT")%>"),"DB_PSCHEMA")"
Add a pump process in the JAgent host associated with the Extract process.
OdiOggCommand -OPERATION="ADDPUMP" -LSCHEMA=""<%=odiRef.getOggProcessLschemaName("EXTRACT")%>"" -NAME="<%=odiRef.getProcessInfo("<%=odiRef.getOggProcessLschemaName("REPLICAT")%>"),"NAME")"
Use this command to invoke an operating system command shell to carry out a command, and redirect the output result to files.
The following operating systems are supported:
Windows operating systems, using cmd
POSIX-compliant operating systems, using sh
The following operating system is not supported:
Mac OS
Usage
OdiOSCommand [-OUT_FILE=<stdout_file>] [-ERR_FILE=<stderr_file>] [-FILE_APPEND=<yes|no>] [-WORKING_DIR=<workingdir>] [-SYNCHRONOUS=<yes|no>] [CR/LF <command> | -COMMAND=<command>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
-COMMAND=<command> |
Yes |
Command to execute. For a multiline command, pass the whole command as raw text after the OdiOSCommand line, without the -COMMAND parameter. |
-OUT_FILE=<stdout_file> |
No |
Absolute name of the file to redirect standard output to. |
-ERR_FILE=<stderr_file> |
No |
Absolute name of the file to redirect standard error to. |
-FILE_APPEND=<yes|no> |
No |
Whether to append to the output files, rather than overwriting them. The default value is Yes. |
-WORKING_DIR=<workingdir> |
No |
Directory in which the command is executed. |
-SYNCHRONOUS=<yes|no> |
No |
If set to Yes (default), the session waits for the command to terminate. If set to No, the session continues immediately with error code 0. The default is synchronous mode. |
|
No |
Use to capture some of the content that is written to the output stream and display in the Task Execution details in Operator. If set to ON_ERROR, the content will be captured only if the task fails. If set to ALL or NONE, either all or none of the output stream will be captured. Use NSTART and NEND to specify the number of lines to be captured (from the start and end). |
|
No |
Use to capture some of the content that is written to the error stream and display in the Task Error Message in Operator. If set to ON_ERROR, the content will be captured only if the task fails. If set to ALL or NONE, either all or none of the output stream will be captured. Use NSTART and NEND to specify the number of lines to be captured (from the start and end). |
Examples
Execute the file c:\work\load.bat
(on a Windows machine) and append the output streams to files.
OdiOSCommand "-OUT_FILE=c:\work\load-out.txt" "-ERR_FILE=c:\work\load-err.txt" "-FILE_APPEND=YES" "-WORKING_DIR=c:\work" c:\work\load.bat
Use this command to write or append content to a text file.
Usage
OdiOutFile -FILE=<file_name> [-APPEND] [-CHARSET_ENCODING=<encoding>] [-XROW_SEP=<hexadecimal_line_break>] [CR/LF <text> | -TEXT=<text>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
-FILE=<file_name> |
Yes |
Target file. The file location is always relative to the data schema directory of its logical schema. |
-APPEND |
No |
Indicates whether |
-CHARSET_ENCODING=<encoding> |
No |
Target file encoding. The default value is
|
-XROW_SEP=<hexadecimal_line_break> |
No |
Hexadecimal code of the character used as a line separator (line break). The default value is |
-TEXT=<text> |
No |
Text to write in the file. This text can be typed on the line following the OdiOutFile command (a carriage return |
|
No |
Indicates if the output file is created in HDFS |
|
No |
Indicates if the file is located on a data server resolved based on the Logical Schema value. |
Examples
Generate the file /var/tmp/my_file.txt
on the UNIX system of the agent that executed it.
OdiOutFile -FILE=/var/tmp/my_file.txt Welcome to Oracle Data Integrator This file has been overwritten by <%=odiRef.getSession("SESS_NAME")%>
Add the entry PLUTON
into the file hosts of the Windows system of the agent that executed it.
OdiOutFile -FILE=C:\winnt\system32\drivers\etc\hosts -APPEND 195.10.10.6 PLUTON pluton
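As another illustration, the encoding and line-separator parameters can be combined, for example to write a UTF-8 file with UNIX-style line breaks (hexadecimal 0A); the file path is illustrative.

OdiOutFile -FILE=/var/tmp/unix_file.txt -CHARSET_ENCODING=UTF8 -XROW_SEP=0A
First line
Second line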
Use this command to perform a test on a given agent. If the agent is not started, this command raises an error.
Usage
OdiPingAgent -AGENT_NAME=<physical_agent_name>
Parameters
Parameters | Mandatory | Description |
---|---|---|
-AGENT_NAME=<physical_agent_name> |
Yes |
Name of the physical agent to test. |
Examples
Test the physical agent AGENT_SOLARIS_DEV
.
OdiPingAgent -AGENT_NAME=AGENT_SOLARIS_DEV
Use this command to purge the execution logs.
The OdiPurgeLog tool purges all session logs and/or Load Plan runs that match the filter criteria.
The -PURGE_TYPE
parameter defines the objects to purge:
Select SESSION
to purge all session logs matching the criteria. Child sessions and grandchild sessions are purged if the parent session matches the criteria. Note that sessions launched by a Load Plan execution, including the child sessions, are not purged.
Select LOAD_PLAN_RUN
to purge all load plan logs matching the criteria. Note that all sessions launched from the Load Plan run are purged even if the sessions attached to the Load Plan runs themselves do not match the criteria.
Select ALL
to purge both session logs and Load Plan runs matching the criteria.
The -COUNT
parameter defines the number of sessions and/or Load Plan runs (after filter) to preserve in the log. The -ARCHIVE
parameter enables automatic archiving of the purged sessions and/or Load Plan runs.
Note:
Load Plans and sessions in running, waiting, or queued status are not purged.
Usage
OdiPurgeLog [-PURGE_TYPE=<SESSION|LOAD_PLAN_RUN|ALL>] [-COUNT=<session_number>] [-FROMDATE=<from_date>] [-TODATE=<to_date>] [-CONTEXT_CODE=<context_code>] [-USER_NAME=<user_name>] [-AGENT_NAME=<agent_name>] [-PURGE_REPORTS=<Yes|No>] [-STATUS=<D|E|M>] [-NAME=<session_or_load_plan_name>] [-ARCHIVE=<Yes|No>] [-EXPORT_KEY=<key>] [-TODIR=<directory>] [-ZIPFILE_NAME=<zipfile_name>] [-XML_CHARSET=<charset>] [-JAVA_CHARSET=<charset>] [-REMOVE_TEMPORARY_OBJECTS=<yes|no>] [-ARCHIVE_WITHOUT_CIPHER_DATA=<yes|no>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
-PURGE_TYPE=<SESSION|LOAD_PLAN_RUN|ALL> |
No |
Purges only session logs, Load Plan logs, or both. The default is session. |
-COUNT=<session_number> |
No |
Retains the most recent count number of sessions and/or Load Plan runs that match the specified filter criteria and purges the rest. If this parameter is not specified or equals 0, purges all sessions and/or Load Plan runs that match the filter criteria. |
-FROMDATE=<from_date> |
No |
Starting date for the purge, using the format yyyy/MM/dd hh:mm:ss. If |
-TODATE=<to_date> |
No |
Ending date for the purge, using the format yyyy/MM/dd hh:mm:ss. If |
-CONTEXT_CODE=<context_code> |
No |
Purges only sessions and/or Load Plan runs executed in If |
-USER_NAME=<user_name> |
No |
Purges only sessions and/or Load Plan runs launched by |
-AGENT_NAME=<agent_name> |
No |
Purges only sessions and/or Load Plan runs executed by |
-PURGE_REPORTS=<Yes|No> |
No |
If set to 1, scenario reports (appearing under the execution node of each scenario) are also purged. |
-STATUS=<D|E|M> |
No |
Purges only the sessions and/or Load Plan runs with the specified state: D (Done), E (Error), or M (Warning).
If this parameter is not specified, sessions and/or Load Plan runs in all of these states are purged. |
-NAME=<session_or_load_plan_name> |
No |
Session name or Load Plan name. |
-ARCHIVE=<Yes|No> |
No |
If set to Yes, exports the sessions and/or Load Plan runs before they are purged. |
-EXPORT_KEY=<key> |
NoFoot 22 |
Specifies a cryptographic private key used to encrypt sensitive cipher data. You must specify this key again when importing the exported object in order to import the cipher data. |
-ARCHIVE_WITHOUT_CIPHER_DATA=<yes|no> |
NoFoot 23 |
When set to Yes, specifies that sensitive (cipher) values should be set to null in the object when it is archived. When set to No or when this parameter is omitted, you must include the |
-TODIR=<directory> |
No |
Target directory for the export. This parameter is required if |
-ZIPFILE_NAME=<zipfile_name> |
No |
Name of the compressed file. This parameter is required if -ARCHIVE is set to Yes. |
-XML_CHARSET=<charset> |
No |
XML encoding of the export files. The default value is
|
-JAVA_CHARSET=<charset> |
No |
Export file encoding. The default value is
|
-REMOVE_TEMPORARY_OBJECTS=<yes|no> |
No |
If set to Yes (default), cleanup tasks are performed before sessions are purged so that any temporary objects are removed. |
Footnote 22
If the -EXPORT_KEY parameter is not specified, the -ARCHIVE_WITHOUT_CIPHER_DATA parameter must be specified, and must be set to Yes.
Footnote 23
If -ARCHIVE_WITHOUT_CIPHER_DATA is not specified, or if it is specified and set to No, you must specify the -EXPORT_KEY parameter with a valid key value.
Examples
Purge all sessions executed between 2001/03/25 00:00:00 and 2001/08/31 21:59:00.
OdiPurgeLog "-FROMDATE=2001/03/25 00:00:00" "-TODATE=2001/08/31 21:59:00"
Purge all Load Plan runs that were executed in the GLOBAL
context by the Internal
agent and that are in Error status.
OdiPurgeLog "-PURGE_TYPE=LOAD_PLAN_RUN" "-CONTEXT_CODE=GLOBAL" "-AGENT_NAME=Internal" "-STATUS=E"
Use this command to read emails and attachments from a POP or IMAP account.
This command connects to the mail server -MAILHOST
using the connection parameters specified by -USER
and -PASS
. The execution agent reads messages from the mailbox until -MAX_MSG
messages are received or the maximum waiting time specified by -TIMEOUT
is reached. The extracted messages must match the filters such as those specified by the parameters -SUBJECT
and -SENDER
. When a message satisfies these criteria, its content and its attachments are extracted in a directory specified by the parameter -FOLDER
. If the parameter -KEEP
is set to No, the retrieved message is deleted from the mailbox.
Usage
OdiReadMail -MAILHOST=<mail_host> -USER=<mail_user> -PASS=<mail_user_password> -FOLDER=<folder_path> [-PROTOCOL=<pop3|imap>] [-FOLDER_OPT=<none|sender|subject>] [-KEEP=<no|yes>] [-EXTRACT_MSG=<yes|no>] [-EXTRACT_ATT=<yes|no>] [-MSG_PRF=<my_prefix>] [-ATT_PRF=<my_prefix>] [-USE_UCASE=<no|yes>] [-NOMAIL_ERROR=<no|yes>] [-TIMEOUT=<timeout>] [-POLLINT=<pollint>] [-MAX_MSG=<max_msg>] [-SUBJECT=<subject_filter>] [-SENDER=<sender_filter>] [-TO=<to_filter>] [-CC=<cc_filter>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
-MAILHOST=<mail_host> |
Yes |
IP address of the POP or IMAP mail server. |
-USER=<mail_user> |
Yes |
Valid mail server account. |
-PASS=<mail_user_password> |
Yes |
Password of the mail server account. |
-FOLDER=<folder_path> |
Yes |
Full path of the storage folder for attachments and messages. |
-PROTOCOL=<pop3|imap> |
No |
Type of mail accessed (POP3 or IMAP). The default is POP3. |
-FOLDER_OPT=<none|sender|subject> |
No |
Allows the creation of a subdirectory in the directory
For the |
-KEEP=<no|yes> |
No |
If set to Yes, keeps the messages that match the filters in the mailbox after reading them. If set to No (default), deletes the messages that match the filters of the mailbox after reading them. |
-EXTRACT_MSG=<yes|no> |
No |
If set to Yes (default), extracts the body of the message into a file. If set to No, does not extract the body of the message into a file. |
-EXTRACT_ATT=<yes|no> |
No |
If set to Yes (default), extracts the attachments into files. If set to No, does not extract attachments. |
-MSG_PRF=<my_prefix> |
No |
Prefix of the file that contains the body of the message. The default is MSG. |
-ATT_PRF=<my_prefix> |
No |
Prefix of the files that contain the attachments. The original file names are kept. |
-USE_UCASE=<no|yes> |
No |
If set to Yes, forces the file names to uppercase. If set to No (default), keeps the original letter case. |
|
No |
If set to Yes, generates an error when no mail matches the specified criteria. If set to No (default), does not generate an error when no mail corresponds to the specified criteria. |
|
No |
Maximum waiting time in milliseconds. If this waiting time is reached, the command ends. The default value is 0, which means an infinite waiting time (as long as needed for the maximum number of messages specified with |
|
No |
Searching interval in milliseconds to scan for new messages. The default value is 1000 (1 second). |
|
No |
Maximum number of messages to extract. If this number is reached, the command ends. The default value is 1. |
|
No |
Parameter used to filter the messages according to their subjects. |
|
No |
Parameter used to filter messages according to their sender. |
|
No |
Parameter used to filter messages according to their addresses. This option can be repeated to create multiple filters. |
|
No |
Parameter used to filter messages according to their addresses in copy. This option can be repeated to create multiple filters. |
Examples
Automatically receive support emails, with attachments extracted into the folder C:\support on the agent's system. Wait for all messages with a maximum waiting time of 10 seconds.
OdiReadMail -MAILHOST=mail.mymail.com -USER=myaccount -PASS=mypass -KEEP=no -FOLDER=c:\support -TIMEOUT=10000 -MAX_MSG=0 -SENDER=support@mycompany.com -EXTRACT_MSG=yes -MSG_PRF=TXT -EXTRACT_ATT=yes
Wait indefinitely for 10 messages and check for new messages every minute.
OdiReadMail -MAILHOST=mail.mymail.com -USER=myaccount -PASS=mypass -KEEP=no -FOLDER=c:\support -TIMEOUT=0 -MAX_MSG=10 -POLLINT=60000 -SENDER=support@mycompany.com -EXTRACT_MSG=yes -MSG_PRF=TXT -EXTRACT_ATT=yes
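As a further sketch, the -FOLDER_OPT parameter can sort the extracted files into one subdirectory per sender; the server and account values below are placeholders.
OdiReadMail -MAILHOST=mail.mymail.com -USER=myaccount -PASS=mypass -FOLDER=c:\support -FOLDER_OPT=sender -MAX_MSG=50 -EXTRACT_MSG=yes -EXTRACT_ATT=yes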
Use this command to refresh for a given journalizing subscriber the number of rows to consume for the given table list or CDC set. This refresh is performed on a logical schema and a given context, and may be limited to events recorded up to a maximum journalizing date (-MAX_JRN_DATE).
Note:
This command is suitable for journalized tables in simple or consistent mode and cannot be executed in a command line with startcmd
.
Usage
OdiRefreshJournalCount -LSCHEMA=<logical_schema> -SUBSCRIBER_NAME=<subscriber_name> (-TABLE_NAME=<table_name> | -CDC_SET_NAME=<cdc set name>) [-CONTEXT=<context>] [-MAX_JRN_DATE=<to_date>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
|
Yes |
Logical schema containing the journalized tables. |
|
Yes for working with simple CDC |
Journalized table name, mask, or list to check. This parameter accepts three formats:
Note that this option works only for tables in a model journalized in simple mode. This parameter cannot be used with |
|
Yes for working with consistent set CDC |
Name of the CDC set to check. Note that this option works only for tables in a model journalized in consistent mode. This parameter cannot be used with |
|
Yes |
Name of the subscriber for which the count is refreshed. |
|
No |
Context in which the logical schema will be resolved. If no context is specified, the execution context is used. |
|
No |
Date (and time) until which the journalizing events are taken into account. |
Examples
Refresh for the CUSTOMERS
table in the SALES_APPLICATION
schema the count of modifications recorded for the SALES_SYNC
subscriber. This datastore is journalized in simple mode.
OdiRefreshJournalCount -LSCHEMA=SALES_APPLICATION -TABLE_NAME=CUSTOMERS -SUBSCRIBER_NAME=SALES_SYNC
Refresh for all tables from the SALES
CDC set in the SALES_APPLICATION
schema the count of modifications recorded for the SALES_SYNC
subscriber. These datastores are journalized with consistent set CDC.
OdiRefreshJournalCount -LSCHEMA=SALES_APPLICATION -SUBSCRIBER_NAME=SALES_SYNC -CDC_SET_NAME=SALES
Use this command to reinitialize an Oracle Data Integrator sequence.
Usage
OdiReinitializeSeq -SEQ_NAME=<sequence_name> -CONTEXT=<context> -STD_POS=<position>
Parameters
Parameters | Mandatory | Description |
---|---|---|
|
Yes |
Name of the sequence to reinitialize. It must be prefixed with |
|
Yes |
Context in which the sequence must be reinitialized. |
|
Yes |
Position to which the sequence must be reinitialized. |
Examples
Reset the global sequence SEQ_I
to 0 for the GLOBAL
context.
OdiReinitializeSeq -SEQ_NAME=GLOBAL.SEQ_I -CONTEXT=GLOBAL -STD_POS=0
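A sketch for a project sequence, assuming a hypothetical project code MY_PROJECT and sequence SEQ_ORDERS; project sequences are prefixed with the project code instead of GLOBAL.
OdiReinitializeSeq -SEQ_NAME=MY_PROJECT.SEQ_ORDERS -CONTEXT=GLOBAL -STD_POS=100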
Use this command to remove temporary objects that could remain between executions. This is performed by executing the cleanup tasks for the sessions identified by the parameters specified in the tool parameters.
Usage
OdiRemoveTemporaryObjects [-COUNT=<session_number>] [-FROMDATE=<from_date>] [-TODATE=<to_date>] [-CONTEXT_CODE=<context_code>] [-AGENT_NAME=<agent_name>] [-USER_NAME=<user_name>] [-NAME=<session_name>] [-ERRORS_ALLOWED=<number_of_errors_allowed>]
Parameter
Parameters | Mandatory | Description |
---|---|---|
|
No |
Number of sessions to skip cleanup for. The most recent number of sessions (< |
|
No |
Start date for the cleanup, using the format yyyy/MM/dd hh:mm:ss. All sessions started after this date are cleaned up. If |
|
No |
End date for the cleanup, using the format yyyy/MM/dd hh:mm:ss. All sessions started before this date are cleaned up. If |
|
No |
Cleans up only those sessions executed in this context ( |
|
No |
Cleans up only those sessions executed by this agent ( |
|
No |
Cleans up only those sessions launched by this user ( |
|
No |
Session name. |
|
No |
Number of errors allowed before the step ends with OK. If set to 0, the step ends with OK regardless of the number of errors encountered during the cleanup phase. |
Examples
Remove the temporary objects by performing the cleanup tasks of all sessions executed between 2013/03/25 00:00:00 and 2013/08/31 21:59:00.
OdiRemoveTemporaryObjects "-FROMDATE=2013/03/25 00:00:00" "-TODATE=2013/08/31 21:59:00"
Remove the temporary objects by performing the cleanup tasks of all sessions executed in the GLOBAL
context by the Internal
agent.
OdiRemoveTemporaryObjects "-CONTEXT_CODE=GLOBAL" "-AGENT_NAME=Internal"
Use this command to retrieve log information from executions in an Oozie execution agent.
Usage
OdiRetrieveHadoopLog [-SESSION_LIST=<session-ids>] [-POLLINT=<poll>] [-TIMEOUT=<timeout>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
|
No |
A comma-separated list of session IDs to retrieve. If blank, all currently running Oozie sessions are retrieved. |
|
No |
The interval between successive retrievals of the log data. It can be expressed in secs (s), mins (m), hours (h), days (d), or years (y). If zero, the log data is retrieved once and the tool ends. |
|
No |
The maximum period of time that the tool will execute for. It can be expressed in secs (s), mins (m), hours (h), days (d), or years (y). If zero, the log is polled and retrieved according to the poll interval, and the tool ends when no sessions are candidates for retrieval. |
Examples
Perform a one-time retrieval of the Hadoop log for the current session if it is being executed in an Oozie execution engine.
OdiRetrieveHadoopLog -SESSION_LIST=<?=odiRef.getSession("SESS_NO")?>
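A polling variant is sketched below, with placeholder session IDs; per the parameter descriptions above, the interval and timeout accept suffixed units such as s, m, and h. Here the logs of two sessions are polled every 30 seconds for at most one hour.
OdiRetrieveHadoopLog -SESSION_LIST=123001,123002 -POLLINT=30s -TIMEOUT=1h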
Use this command to retrieve the journalized events for a given journalizing subscriber and a given table list or CDC set. The retrieval is performed specifically for the technology containing the tables. This retrieval is performed on a logical schema and a given context.
Note:
This tool works for tables journalized using simple or consistent set modes and cannot be executed in a command line with startcmd
.
Usage
OdiRetrieveJournalData -LSCHEMA=<logical_schema> -SUBSCRIBER_NAME=<subscriber_name> (-TABLE_NAME=<table_name> | -CDC_SET_NAME=<cdc_set_name>) [-CONTEXT=<context>] [-MAX_JRN_DATE=<to_date>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
|
Yes |
Logical schema containing the journalized tables. |
|
No |
Journalized table name, mask, or list to check. This parameter accepts three formats:
Note that this option works only for tables in a model journalized in simple mode. This parameter cannot be used with |
|
No |
Name of the CDC set to update. Note that this option works only for tables in a model journalized in consistent mode. This parameter cannot be used with |
|
Yes |
Name of the subscriber for which the data is retrieved. |
|
No |
Context in which the logical schema will be resolved. If no context is specified, the execution context is used. |
|
No |
Date (and time) until which the journalizing events are taken into account. |
Examples
Retrieve for the CUSTOMERS
table in the SALES_APPLICATION
schema the journalizing events for the SALES_SYNC
subscriber.
OdiRetrieveJournalData -LSCHEMA=SALES_APPLICATION -TABLE_NAME=CUSTOMERS -SUBSCRIBER_NAME=SALES_SYNC
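Retrieve for all tables from the SALES CDC set in the SALES_APPLICATION schema the journalizing events for the SALES_SYNC subscriber, mirroring the consistent set CDC example of OdiRefreshJournalCount.
OdiRetrieveJournalData -LSCHEMA=SALES_APPLICATION -SUBSCRIBER_NAME=SALES_SYNC -CDC_SET_NAME=SALES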
Use this command to reverse-engineer metadata for the given model in the reverse tables using the JDBC driver capabilities. This command is typically preceded by OdiReverseResetTable and followed by OdiReverseSetMetaData.
Note:
This command uses the same technique as the standard reverse-engineering, and depends on the capabilities of the JDBC driver used.
The use of this command is restricted to DEVELOPMENT type Repositories because the metadata is not available on EXECUTION type Repositories.
Usage
OdiReverseGetMetaData -MODEL=<model_id>
Parameters
Parameters | Mandatory | Description |
---|---|---|
|
Yes |
Model to reverse-engineer. |
Examples
Reverse the RKM's current model.
OdiReverseGetMetaData -MODEL=<%=odiRef.getModel("ID")%>
Use this command to define how to handle shortcuts when they are reverse-engineered in a model.
Usage
OdiReverseManageShortcut "-MODEL=<model_id>" "-MODE=MATERIALIZING_MODE"
Parameters
Parameters | Mandatory | Description |
---|---|---|
|
Yes |
Global identifier of the model to be reversed. |
|
Yes |
This parameter is supported only when a package or scenario is run in ODI Studio. This parameter accepts the following values:
|
Examples
Reverse model 44fa5543-a378-4442-ac64-3dabab65ef98 in ALWAYS_MATERIALIZE
mode.
OdiReverseManageShortcut -MODEL=44fa5543-a378-4442-ac64-3dabab65ef98 -MODE=ALWAYS_MATERIALIZE
Use this command to reset the content of reverse tables for a given model. This command is typically used at the beginning of a customized reverse-engineering process.
Usage
OdiReverseResetTable -MODEL=<model_id>
Parameters
Parameters | Mandatory | Description |
---|---|---|
|
Yes |
Global identifier of the model to be reversed. |
Examples
OdiReverseResetTable -MODEL=44fa5543-a378-4442-ac64-3dabab65ef98
Use this command to integrate metadata from the reverse tables into the Repository for a given data model.
Usage
OdiReverseSetMetaData -MODEL=<model_id> [-USE_TABLE_NAME_FOR_UPDATE=<true|false>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
|
Yes |
Global identifier of the model to be reversed. |
|
No |
|
Example
Reverse model 125880
, using the TABLE_NAME
as an update key on the target tables.
OdiReverseSetMetaData -MODEL=44fa5543-a378-4442-ac64-3dabab65ef98 -USE_TABLE_NAME_FOR_UPDATE=true
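As noted above, OdiReverseGetMetaData is typically preceded by OdiReverseResetTable and followed by OdiReverseSetMetaData. A minimal sketch of this customized reverse-engineering chain, reusing the model global ID from the examples above:
OdiReverseResetTable -MODEL=44fa5543-a378-4442-ac64-3dabab65ef98
OdiReverseGetMetaData -MODEL=44fa5543-a378-4442-ac64-3dabab65ef98
OdiReverseSetMetaData -MODEL=44fa5543-a378-4442-ac64-3dabab65ef98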
Use this command to rollback a Patch Deployment Archive (DA) from an ODI repository.
Usage
OdiRollbackDeploymentArchive -ROLLBACK_FILE_NAME=<rollback_file_name> [-APPLY_WITHOUT_CIPHER_DATA=<yes|no>] [-EXPORT_KEY=<Export_Key>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
|
Yes |
Complete file name of the rollback deployment archive. |
|
NoFoot 24 |
If set to Yes, any cipher data present in the deployment archive will be made null. If set to No, the export key will be used to migrate the cipher data. The default value is No. |
|
No |
Specifies a cryptographic private key used to migrate cipher data in the deployment archive objects. |
Footnote 24
If the APPLY_WITHOUT_CIPHER_DATA
parameter is set to No, the EXPORT_KEY
parameter must be specified.
Examples
Rollback the last applied patch deployment archive with export key.
OdiRollbackDeploymentArchive -ROLLBACK_FILE_NAME=rollback_file_name -APPLY_WITHOUT_CIPHER_DATA=no -EXPORT_KEY=Export_Key
Use this command to generate SAP Intermediate Documents (IDocs) from XML source files and transfer these IDocs using ALE (Application Link Enabling) to a remote tRFC server (SAP R/3 server).
Note:
The OdiSAPALEClient tool supports SAP Java Connector 2.x. To use SAP Java Connector 3.x, use the OdiSAPALEClient3 tool.
Usage
OdiSAPALEClient -USER=<sap_logon> -ENCODED_PASSWORD=<password> -GATEWAYHOST=<gateway_host> -SYSTEMNR=<system_number> -MESSAGESERVERHOST=<message_server> -R3NAME=<system_name> -APPLICATIONSERVERSGROUP=<group_name> [-DIR=<directory>] [-FILE=<file>] [-CASESENS=<yes|no>] [-MOVEDIR=<target_directory>] [-DELETE=<yes|no>] [-POOL_KEY=<pool_key>] [-LANGUAGE=<language>] [-CLIENT=<client>] [-MAX_CONNECTIONS=<n>] [-TRACE=<no|yes>]
Usage for OdiSAPALEClient3
OdiSAPALEClient3 -USER=<sap_logon> -ENCODED_PASSWORD=<password> -GATEWAYHOST=<gateway_host> -SYSTEMNR=<system_number> -MESSAGESERVERHOST=<message_server> -R3NAME=<system_name> -APPLICATIONSERVERSGROUP=<group_name> [-DIR=<directory>] [-FILE=<file>] [-CASESENS=<yes|no>] [-MOVEDIR=<target_directory>] [-DELETE=<yes|no>] [-POOL_KEY=<pool_key>] [-LANGUAGE=<language>] [-CLIENT=<client>] [-MAX_CONNECTIONS=<n>] [-TRACE=<no|yes>]
Parameter
Parameters | Mandatory | Description |
---|---|---|
|
Yes |
SAP logon. This user may be a system user. |
|
Deprecated |
SAP logon password. This parameter is deprecated. Use |
|
Yes |
SAP logon password, encrypted. The OS command |
|
No |
Gateway host, mandatory if |
|
No |
SAP system number, mandatory if |
|
No |
Message server host name, mandatory if |
|
No |
Name of the SAP system (r3name), mandatory if |
|
No |
Application servers group name, mandatory if |
|
No |
XML source file directory. This parameter is taken into account if |
|
No |
Name of the source XML file. If this parameter is omitted, all files in |
|
No |
Indicates if the source file names are case-sensitive. The default value is No. |
|
No |
If this parameter is specified, the source files are moved to this directory after being processed. |
|
No |
Deletes the source files after their processing. The default value is Yes. |
|
No |
Name of the connection pool. The default value is |
|
No |
Language code used for error messages. The default value is |
|
No |
Client identifier. The default value is |
|
No |
Maximum number of connections in the pool. The default value is |
|
No |
The generated IDoc files are archived in the source file directory. If the source files are moved ( |
Examples
Process all files in the /sap
directory and send them as IDocs to the SAP server. The original XML and generated files are stored in the /log
directory after processing.
OdiSAPALEClient -USER=ODI -ENCODED_PASSWORD=xxx -SYSTEMNR=002 -GATEWAYHOST=GW001 -DIR=/sap -MOVEDIR=/log -TRACE=yes
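A sketch of the alternative, load-balanced connection mode, in which the message server parameters replace -GATEWAYHOST and -SYSTEMNR; the message server host, system name, and server group values are hypothetical.
OdiSAPALEClient -USER=ODI -ENCODED_PASSWORD=xxx -MESSAGESERVERHOST=MSG001 -R3NAME=PRD -APPLICATIONSERVERSGROUP=PUBLIC -DIR=/sap -MOVEDIR=/log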
Use this command to start a tRFC listener to receive SAP IDocs transferred using ALE (Application Link Enabling). This listener transforms incoming IDocs into XML files in a given directory.
Note:
The OdiSAPALEServer tool supports SAP Java Connector 2.x. To use SAP Java Connector 3.x, use the OdiSAPALEServer3 tool.
Usage
OdiSAPALEServer -USER=<sap_logon> -ENCODED_PASSWORD=<password> -GATEWAYHOST=<gateway_host> -SYSTEMNR=<system_number> -GATEWAYNAME=<gateway_name> -PROGRAMID=<program_id> -DIR=<target_directory> [-TIMEOUT=<n>] [-POOL_KEY=<pool_key>] [-LANGUAGE=<Language>] [-CLIENT=<client>] [-MAX_CONNECTIONS=<n>] [-INTERREQUESTTIMEOUT=<n>] [-MAXREQUEST=<n>] [-TRACE=<no|yes>]
Usage of OdiSAPALEServer3
OdiSAPALEServer3 -USER=<sap_logon> -ENCODED_PASSWORD=<password> -GATEWAYHOST=<gateway_host> -SYSTEMNR=<system_number> -GATEWAYNAME=<gateway_name> -PROGRAMID=<program_id> -DIR=<target_directory> [-TIMEOUT=<n>] [-POOL_KEY=<pool_key>] [-LANGUAGE=<Language>] [-CLIENT=<client>] [-MAX_CONNECTIONS=<n>] [-INTERREQUESTTIMEOUT=<n>] [-MAXREQUEST=<n>] [-TRACE=<no|yes>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
|
Yes |
SAP logon. This user may be a system user. |
|
Yes |
SAP logon password, encrypted. The system command |
|
Yes |
Gateway host. |
|
Yes |
SAP system number. |
|
Yes |
Gateway name. |
|
Yes |
The program ID. External name used by the tRFC server. |
|
Yes |
Directory in which the target XML files are stored. These files are named |
|
Yes |
Name of the connection pool. The default value is |
- |
Yes |
Language code used for error messages. The default value is |
|
Yes |
SAP client identifier. The default value is |
|
No |
Life span in milliseconds for the server. At the end of this period, the server stops automatically. If this timeout is set to 0, the server life span is infinite. The default value is 0. |
|
Yes |
Maximum number of connections allowed for the pool of connections. The default value is 3. |
|
No |
If no IDOC is received during an interval of |
|
No |
Maximum number of requests after which the listener stops. If this parameter is set to 0, the server expects an infinite number of requests. The default value is 0. Note: If |
|
No |
Activate the debug trace. The default value is No. |
- |
No |
Must match the RFC destination in SAP. Verify that the Unicode setting in SAP transaction SM59 matches this parameter. Note: Applies to OdiSAPALEServer3 only. |
Examples
Wait for 2 IDoc files and generate the target XML files in the /tmp
directory.
OdiSAPALEServer -POOL_KEY=ODI -MAX_CONNECTIONS=3 -CLIENT=001 -USER=ODI -ENCODED_PASSWORD=xxx -LANGUAGE=EN -GATEWAYHOST=SAP001 -SYSTEMNR=002 -GATEWAYNAME=GW001 -PROGRAMID=ODI01 -DIR=/tmp -MAXREQUEST=2
Use this command to download a file from an SSH server.
Usage
OdiScpGet -HOST=<ssh server host name> -USER=<ssh user> [-PASSWORD=<ssh user password>] -REMOTE_DIR=<remote dir on ssh host> [-REMOTE_FILE=<file name under the REMOTE_DIR>] -LOCAL_DIR=<local dir> [-LOCAL_FILE=<file name under the LOCAL_DIR>] [-TIMEOUT=<time in seconds>] [-IDENTITY_FILE=<full path to the private key file of the user>] [-KNOWNHOSTS_FILE=<full path to known hosts file>] [-COMPRESSION=<yes|no>] [-STRICT_HOSTKEY_CHECKING=<yes|no>] [-PROXY_HOST=<proxy server host name>] [-PROXY_PORT=<proxy server port>] [-PROXY_TYPE=<HTTP|SOCKS5>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
|
Yes |
Host name of the SSH server. |
|
Yes |
User on the SSH server. |
|
No |
The password of the SSH user or the passphrase of the password-protected identity file. If the |
|
Yes |
Directory path on the remote SSH host. |
|
No |
File name under the directory specified in the If this argument is missing, the file is copied with the |
|
Yes |
Directory path on the local machine. |
|
No |
File name under the directory specified in the To filter the files to be copied, use Examples:
|
|
No |
Private key file of the local user. If this argument is specified, public key authentication is performed. The |
|
No |
Full path to the known hosts file on the local machine. The known hosts file contains the host keys of all remote machines that the user trusts. If this argument is missing, the |
|
No |
If set to Yes, data compression is used. The default value is No. |
|
No |
If set to Yes (default), strict host key checking is performed and authentication fails if the remote SSH host key is not present in the known hosts file specified in |
|
No |
Host name of the proxy server to be used for the connection. |
|
No |
Port number of the proxy server. |
|
No |
Type of proxy server you are connecting to, HTTP or SOCKS5. |
|
No |
Time in seconds after which the socket connection times out. |
Examples
Copy the remote directory /test_copy555
on the SSH server recursively to the local directory C:\temp\test_copy
.
OdiScpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp\test_copy -REMOTE_DIR=/test_copy555
Copy all files matching the Sales*.txt
pattern under the remote directory /
on the SSH server to the local directory C:\temp\
.
OdiScpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp -REMOTE_FILE=Sales*.txt -REMOTE_DIR=/
Copy the Sales1.txt
file under the remote directory /
on the SSH server to the local directory C:\temp\
as a Sample1.txt
file.
OdiScpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -REMOTE_DIR=/ -REMOTE_FILE=Sales1.txt -LOCAL_DIR=C:\temp -LOCAL_FILE=Sample1.txt
Copy the Sales1.txt
file under the remote directory /
on the SSH server to the local directory C:\temp\
as a Sample1.txt
file. Public key authentication is performed by providing the path to the identity file and the path to the known hosts file.
OdiScpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -REMOTE_DIR=/ -REMOTE_FILE=Sales1.txt -LOCAL_DIR=C:\temp -LOCAL_FILE=Sample1.txt -IDENTITY_FILE=C:\Documents and Settings\username\.ssh\id_dsa -KNOWNHOSTS_FILE=C:\Documents and Settings\username\.ssh\known_hosts
Copy the Sales1.txt
file under the remote directory /
on the SSH server to the local directory C:\temp\
as a Sample1.txt
file. Public key authentication is performed by providing the path to the identity file. All hosts are trusted by passing the No value to the -STRICT_HOSTKEY_CHECKING
parameter.
OdiScpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -REMOTE_DIR=/ -REMOTE_FILE=Sales1.txt -LOCAL_DIR=C:\temp -LOCAL_FILE=Sample1.txt -IDENTITY_FILE=C:\Documents and Settings\username\.ssh\id_dsa -STRICT_HOSTKEY_CHECKING=NO
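When the SSH server is reachable only through a proxy, the proxy parameters can be added; a sketch with hypothetical proxy values.
OdiScpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -REMOTE_DIR=/ -REMOTE_FILE=Sales1.txt -LOCAL_DIR=C:\temp -PROXY_HOST=proxy.mycompany.com -PROXY_PORT=8080 -PROXY_TYPE=HTTP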
Use this command to upload a file to an SSH server.
Usage
OdiScpPut -HOST=<SSH server host name> -USER=<SSH user> [-PASSWORD=<SSH user password>] -LOCAL_DIR=<local dir> [-LOCAL_FILE=<file name under the LOCAL_DIR>] -REMOTE_DIR=<remote dir on ssh host> [-REMOTE_FILE=<file name under the REMOTE_DIR>] [-TIMEOUT=<time in seconds>] [-IDENTITY_FILE=<full path to the private key file of the user>] [-KNOWNHOSTS_FILE=<full path to known hosts file>] [-COMPRESSION=<yes|no>] [-STRICT_HOSTKEY_CHECKING=<yes|no>] [-PROXY_HOST=<proxy server host name>] [-PROXY_PORT=<proxy server port>] [-PROXY_TYPE=<HTTP|SOCKS5>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
|
Yes |
Host name of the SSH server. |
|
Yes |
User on the SSH server. |
|
No |
Password of the SSH user or the passphrase of the password-protected identity file. If the |
|
Yes |
Directory path on the remote SSH host. |
|
No |
File name under the directory specified in the |
|
Yes |
Directory path on the local machine. |
|
No |
File name under the directory specified in the To filter the files to be copied, use Examples:
|
|
No |
Private key file of the local user. If this argument is specified, public key authentication is performed. The |
|
No |
Full path to the known hosts file on the local machine. The known hosts file contains the host keys of all remote machines the user trusts. If this argument is missing, the |
|
No |
If set to Yes, data compression is used. The default value is No. |
|
No |
If set to Yes (default), strict host key checking is performed and authentication fails if the remote SSH host key is not present in the known hosts file specified in |
|
No |
Host name of the proxy server to be used for the connection. |
|
No |
Port number of the proxy server. |
|
No |
Type of proxy server you are connecting to, HTTP or SOCKS5. |
|
No |
Time in seconds after which the socket connection times out. |
Examples
Copy the local directory C:\temp\test_copy
recursively to the remote directory /test_copy555
on the SSH server.
OdiScpPut -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp\test_copy -REMOTE_DIR=/test_copy555
Copy all files matching the Sales*.txt
pattern under the local directory C:\temp\
to the remote directory /
on the SSH server.
OdiScpPut -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp -LOCAL_FILE=Sales*.txt -REMOTE_DIR=/
Copy the Sales1.txt
file under the local directory C:\temp\
to the remote directory /
on the SSH server as a Sample1.txt
file.
OdiScpPut -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp -LOCAL_FILE=Sales1.txt -REMOTE_DIR=/ -REMOTE_FILE=Sample1.txt
Copy the Sales1.txt
file under the local directory C:\temp\
to the remote directory /
on the SSH server as a Sample1.txt
file. Public key authentication is performed by providing the path to the identity file and the path to the known hosts file.
OdiScpPut -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp -LOCAL_FILE=Sales1.txt -REMOTE_DIR=/ -REMOTE_FILE=Sample1.txt -IDENTITY_FILE=C:\Documents and Settings\username\.ssh\id_dsa -KNOWNHOSTS_FILE=C:\Documents and Settings\username\.ssh\known_hosts
Copy the Sales1.txt
file under the local directory C:\temp\
to the remote directory /
on the SSH server as a Sample1.txt
file. Public key authentication is performed by providing the path to the identity file. All hosts are trusted by passing the No value to the -STRICT_HOSTKEY_CHECKING
parameter.
OdiScpPut -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp -LOCAL_FILE=Sales1.txt -REMOTE_DIR=/ -REMOTE_FILE=Sample1.txt -IDENTITY_FILE=C:\Documents and Settings\username\.ssh\id_dsa -STRICT_HOSTKEY_CHECKING=NO
Use this command to send an email to an SMTP server.
Usage
OdiSendMail -MAILHOST=<mail_host> -FROM=<from_user> -TO=<address_list> [-CC=<address_list>] [-BCC=<address_list>] [-SUBJECT=<subject>] [-ATTACH=<file_path>]* [-PORT=<PortNumber>] [-PROTOCOL=<MailProtocol>] [-AUTH=<Yes|No>] [-AUTHMECHANISM=<MailAuthMechanism>] [-USER=<Username>] [-PASS=<Password>] [-MSGBODY=<message_body> | CR/LF<message_body>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
|
Yes |
IP address of the SMTP server. |
|
Yes |
Address of the sender of the message. Example: To send the external name of the sender, the following notation can be used:
|
|
Yes |
List of email addresses of the recipients, separated by commas. Example:
|
|
No |
List of email addresses of the CC-ed recipients, separated by commas. Example:
|
|
No |
List of email addresses of the BCC-ed recipients, separated by commas. Example:
|
|
No |
Subject of the message. |
|
No |
Path of the file to attach to the message, relative to the execution agent. To attach several files, repeat the -ATTACH parameter. Example: Attach the files
|
or |
No |
Message body (text). This text can be typed on the line following the OdiSendMail command (a carriage return |
- |
No |
The Port number of the mail server. Default is
|
- |
No |
E-mail protocol. It can be SMTP or POP3. Default is SMTP. |
- |
No |
Indicates whether authentication is used. The values are YES or NO. Default is NO. |
- |
No |
The authentication mechanism supported by the mail server. The values are PLAIN, LOGIN or DIGEST-MD5. |
- |
No |
User for authentication. Only if authentication is used. |
- |
No |
Password for authentication. Only if authentication is used. |
Examples
OdiSendMail -MAILHOST=mail.mymail.com "-FROM=Application Oracle Data Integrator<odi@mymail.com>" -TO=admin@mymail.com "-SUBJECT=Execution OK" -ATTACH=C:\log\job.log -ATTACH=C:\log\job.bad
Hello Administrator !
Your process finished successfully. Attached are your files.
Have a nice day!
Oracle Data Integrator.
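A sketch of a variant using SMTP authentication on a submission port; the server, account, and addresses are hypothetical placeholders, and -PASS follows the same password rules described above.
OdiSendMail -MAILHOST=smtp.mycompany.com -PORT=587 -AUTH=YES -AUTHMECHANISM=LOGIN -USER=odi_mailer -PASS=<password> -FROM=odi@mycompany.com -TO=admin@mycompany.com "-SUBJECT=Load complete" "-MSGBODY=The nightly load completed successfully."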
Use this command to connect to an SSH server with an enabled SFTP subsystem and perform standard FTP commands on the remote system. Trace from the script is recorded against the Execution Details of the task representing the OdiSftp step in Operator Navigator.
Usage
OdiSftp -HOST=<ssh server host name> -USER=<ssh user> [-PASSWORD=<ssh user password>] -LOCAL_DIR=<local dir> -REMOTE_DIR=<remote dir on ssh host> [-TIMEOUT=<time in seconds>] [-IDENTITY_FILE=<full path to private key file of user>] [-KNOWNHOSTS_FILE=<full path to known hosts file on local machine>] [-COMPRESSION=<yes|no>] [-STRICT_HOSTKEY_CHECKING=<yes|no>] [-PROXY_HOST=<proxy server host name>] [-PROXY_PORT=<proxy server port>] [-PROXY_TYPE=<HTTP|SOCKS5>] [-STOP_ON_FTP_ERROR=<yes|no>] -COMMAND=<command>
Parameters
Parameters | Mandatory | Description |
---|---|---|
|
Yes |
Host name of the SSH server. |
|
Yes |
User on the SSH server. |
|
No |
Password of the SSH user. |
|
Yes |
Directory path on the local machine. |
|
Yes |
Directory path on the remote SSH host. |
|
No |
Time in seconds after which the socket connection times out. |
|
No |
Private key file of the local user. If specified, public key authentication is performed. The |
|
No |
Full path to the known hosts file on the local machine. The known hosts file contains host keys for all remote machines trusted by the user. If this argument is missing, the |
|
No |
If set to Yes, data compression is used. The default value is No. |
|
No |
If set to Yes (default), strict host key checking is performed and authentication fails if the remote SSH host key is not present in the known hosts file specified in |
|
No |
Host name of the proxy server to be used for the connection. |
|
No |
Port number of the proxy server. |
|
No |
Type of proxy server you are connecting to, HTTP or SOCKS5. |
|
No |
If set to Yes (default), the step stops with an Error status if an error occurs rather than running to completion. |
|
Yes |
Raw FTP command to execute. For a multiline command, pass the whole command as raw text after the OdiSftp line without the Supported commands:
|
Examples
Execute a script on a remote host that changes directory into a directory, deletes a file from the directory, changes directory into the parent directory, and removes the directory.
OdiSftp -HOST=machine.oracle.com -USER=odiftpuser -PASSWORD=<password> -LOCAL_DIR=/tmp -REMOTE_DIR=/tmp -STOP_ON_FTP_ERROR=No
CWD /tmp/ftpToolDir1
DELE ftpToolFile
CDUP
RMD ftpToolDir1
Use this command to download a file from an SSH server with an enabled SFTP subsystem.
Usage
OdiSftpGet -HOST=<ssh server host name> -USER=<ssh user> [-PASSWORD=<ssh user password>] -REMOTE_DIR=<remote dir on ssh host> [-REMOTE_FILE=<file name under REMOTE_DIR>] -LOCAL_DIR=<local dir> [-LOCAL_FILE=<file name under LOCAL_DIR>] [-TIMEOUT=<time in seconds>] [-IDENTITY_FILE=<full path to private key file of user>] [-KNOWNHOSTS_FILE=<full path to known hosts file on local machine>] [-COMPRESSION=<yes|no>] [-STRICT_HOSTKEY_CHECKING=<yes|no>] [-PROXY_HOST=<proxy server host name>] [-PROXY_PORT=<proxy server port>] [-PROXY_TYPE=<HTTP|SOCKS5>]
Note:
If a local or remote file name must contain the % character, pass %25 instead; %25 resolves automatically to %. For example, for a file named temp%result, pass -REMOTE_FILE=temp%25result or -LOCAL_FILE=temp%25result.
Parameters
Parameters | Mandatory | Description |
---|---|---|
|
Yes |
Host name of the SSH server. You can add the port number to the host name by prefixing it with a colon ( If no port is specified, port 22 is used by default. |
|
Yes |
User on the SSH server. |
|
No |
Password of the SSH user. |
|
Yes |
Directory path on the remote SSH host. |
|
No |
File name under the directory specified in the |
|
Yes |
Directory path on the local machine. |
|
No |
File name under the directory specified in the To filter the files to be copied, use Examples:
|
|
No |
Private key file of the local user. If this argument is specified, public key authentication is performed. The |
|
No |
The full path to the known hosts file on the local machine. The known hosts file contains the host keys of all remote machines the user trusts. If this argument is missing, the |
|
No |
If set to Yes, data compression is used. The default value is No. |
|
No |
If set to Yes (default), strict host key checking is performed and authentication fails if the remote SSH host key is not present in the known hosts file specified in |
|
No |
Host name of the proxy server to be used for the connection. |
|
No |
Port number of the proxy server. |
|
No |
Type of proxy server you are connecting to, HTTP or SOCKS5. |
|
No |
Time in seconds after which the socket connection times out. |
Examples
Copy the remote directory /test_copy555
on the SSH server recursively to the local directory C:\temp\test_copy
.
OdiSftpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp\test_copy -REMOTE_DIR=/test_copy555
Copy all files matching the Sales*.txt
pattern under the remote directory /
on the SSH server to the local directory C:\temp\
.
OdiSftpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp -REMOTE_FILE=Sales*.txt -REMOTE_DIR=/
Copy the Sales1.txt
file under the remote directory / on the SSH server to the local directory C:\temp\
as a Sample1.txt
file.
OdiSftpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -REMOTE_DIR=/ -REMOTE_FILE=Sales1.txt -LOCAL_DIR=C:\temp -LOCAL_FILE=Sample1.txt
Copy the Sales1.txt
file under the remote directory /
on the SSH server to the local directory C:\temp\
as a Sample1.txt
file. Public key authentication is performed by providing the path to the identity file and the path to the known hosts file.
OdiSftpGet -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -REMOTE_DIR=/ -REMOTE_FILE=Sales1.txt -LOCAL_DIR=C:\temp -LOCAL_FILE=Sample1.txt -IDENTITY_FILE=C:\Documents and Settings\username\.ssh\id_dsa -KNOWNHOSTS_FILE=C:\Documents and Settings\username\.ssh\known_hosts
Copy the Sales1.txt
file under the remote directory /
on the SSH server to the local directory C:\temp\
as a Sample1.txt
file. Public key authentication is performed by providing the path to the identity file. All hosts are trusted by passing the No value to the -STRICT_HOSTKEY_CHECKING
parameter.
OdiSftpGet -HOST=dev3 -USER=test_ftp -PASSWORD=<password> -REMOTE_DIR=/ -REMOTE_FILE=Sales1.txt -LOCAL_DIR=C:\temp -LOCAL_FILE=Sample1.txt -IDENTITY_FILE=C:\Documents and Settings\username\.ssh\id_dsa -STRICT_HOSTKEY_CHECKING=NO
Use this command to upload a file to an SSH server with the SFTP subsystem enabled.
Usage
OdiSftpPut -HOST=<ssh server host name> -USER=<ssh user> [-PASSWORD=<ssh user password>] -LOCAL_DIR=<local dir> [-LOCAL_FILE=<file name under LOCAL_DIR>] -REMOTE_DIR=<remote dir on ssh host> [-REMOTE_FILE=<file name under REMOTE_DIR>] [-TIMEOUT=<time in seconds>] [-IDENTITY_FILE=<full path to private key file of user>] [-KNOWNHOSTS_FILE=<full path to known hosts file on local machine>] [-COMPRESSION=<yes|no>] [-STRICT_HOSTKEY_CHECKING=<yes|no>] [-PROXY_HOST=<proxy server host name>] [-PROXY_PORT=<proxy server port>] [-PROXY_TYPE=<HTTP|SOCKS5>]
Note:
If a local or remote file name must contain the % character, pass %25 instead; %25 resolves automatically to %. For example, for a file named temp%result, pass -REMOTE_FILE=temp%25result or -LOCAL_FILE=temp%25result.
Parameter
Parameters | Mandatory | Description |
---|---|---|
|
Yes |
Host name of the SSH server. You can add the port number to the host name by prefixing it with a colon ( If no port is specified, port 22 is used by default. |
|
Yes |
User on the SSH server. |
|
No |
Password of the SSH user or the passphrase of the password-protected identity file. If the |
|
Yes |
Directory path on the remote SSH host. |
|
No |
File name under the directory specified in the |
|
Yes |
Directory path on the local machine. |
|
No |
File name under the directory specified in the To filter the files to be copied, use Examples:
|
|
No |
Private key file of the local user. If this argument is specified, public key authentication is performed. The |
|
No |
Full path to the known hosts file on the local machine. The known hosts file contains the host keys of all remote machines the user trusts. If this argument is missing, the |
|
No |
If set to Yes, data compression is used. The default value is No. |
|
No |
If set to Yes (default), strict host key checking is performed and authentication fails if the remote SSH host key is not present in the known hosts file specified in |
|
No |
Host name of the proxy server to be used for the connection. |
|
No |
Port number of the proxy server. |
|
No |
Type of proxy server you are connecting to, HTTP or SOCKS5. |
|
No |
Time in seconds after which the socket connection times out. |
Examples
Copy the local directory C:\temp\test_copy
recursively to the remote directory /test_copy555
on the SSH server.
OdiSftpPut -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp\test_copy -REMOTE_DIR=/test_copy555
Copy all files matching the Sales*.txt
pattern under the local directory C:\temp\
to the remote directory /
on the SSH server.
OdiSftpPut -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp -LOCAL_FILE=Sales*.txt -REMOTE_DIR=/
Copy the Sales1.txt
file under the local directory C:\temp\
to the remote directory /
on the SSH server as a Sample1.txt
file.
OdiSftpPut -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp -LOCAL_FILE=Sales1.txt -REMOTE_DIR=/ -REMOTE_FILE=Sample1.txt
Copy the Sales1.txt
file under the local directory C:\temp\
to the remote directory /
on the SSH server as a Sample1.txt
file. Public key authentication is performed by providing the path to the identity file and the path to the known hosts file.
OdiSftpPut -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp -LOCAL_FILE=Sales1.txt -REMOTE_DIR=/ -REMOTE_FILE=Sample1.txt -IDENTITY_FILE=C:\Documents and Settings\username\.ssh\id_dsa -KNOWNHOSTS_FILE=C:\Documents and Settings\username\.ssh\known_hosts
Copy the Sales1.txt
file under the local directory C:\temp\
to the remote directory /
on the SSH server as a Sample1.txt
file. Public key authentication is performed by providing the path to the identity file. All hosts are trusted by passing the No value to the -STRICT_HOSTKEY_CHECKING
parameter.
OdiSftpPut -HOST=machine.oracle.com -USER=test_ftp -PASSWORD=<password> -LOCAL_DIR=C:\temp -LOCAL_FILE=Sales1.txt -REMOTE_DIR=/ -REMOTE_FILE=Sample1.txt -IDENTITY_FILE=C:\Documents and Settings\username\.ssh\id_dsa -STRICT_HOSTKEY_CHECKING=NO
Use this command to wait for <delay>
milliseconds.
Usage
OdiSleep -DELAY=<delay>
Parameters
Parameters | Mandatory | Description |
---|---|---|
|
Yes |
Number of milliseconds to wait. |
Examples
OdiSleep -DELAY=5000
Use this command to write the result of a SQL query to a file.
This command executes the SQL query <sql_query> on the data server whose connection parameters are provided by <driver>, <url>, <user>, and <encoded_pass>. The resulting result set is written to <file_name>.
Usage
OdiSqlUnload -FILE=<file_name> -DRIVER=<driver> -URL=<url> -USER=<user> -PASS=<password> [-FILE_FORMAT=<file_format>] [-FIELD_SEP=<field_sep> | -XFIELD_SEP=<field_sep>] [-ROW_SEP=<row_sep> | -XROW_SEP=<row_sep>] [-DATE_FORMAT=<date_format>] [-CHARSET_ENCODING=<encoding>] [-XML_CHARSET_ENCODING=<encoding>] [-FETCH_SIZE=<array_fetch_size>] ( CR/LF <sql_query> | -QUERY=<sql_query> | -QUERY_FILE=<sql_query_file> )
Parameters
Parameters | Mandatory | Description |
---|---|---|
|
Yes |
Full path to the output file, relative to the execution agent. |
|
Yes |
Name of the JDBC driver used to connect to the data server. |
|
Yes |
JDBC URL to the data server. |
|
Yes |
Login of the user on the data server that will be used to run the SQL query. |
|
Yes |
Encrypted password for the login to the data server. This password can be encrypted with the system command Note that |
|
No |
Specifies the file format with one of the following three values:
If If
|
|
No |
Field separator character in ASCII format if |
|
No |
Field separator character in hexadecimal format if |
|
No |
Record separator character in ASCII format. The default
|
|
No |
Record separator character in hexadecimal format. Example: |
|
No |
Output format used for date datatypes. This date format is specified using the Java date and time format patterns. For a list of these patterns, see: |
|
No |
Target file encoding. The default value is
|
|
No |
Encoding specified in the XML file, in the tag
|
|
No |
Number of rows (records read) requested by Oracle Data Integrator in each communication with the data server. |
|
Yes |
SQL query to execute on the data server. The query must be a |
Examples
Generate the file C:\temp\clients.csv
separated by ;
containing the result of the query on the Customers
table.
OdiSqlUnload -FILE=C:\temp\clients.csv -DRIVER=sun.jdbc.odbc.JdbcOdbcDriver -URL=jdbc:odbc:NORTHWIND_ODBC -USER=sa -PASS=NFNEKKNGGJHAHBHDHEHJDBGBGFDGGH -FIELD_SEP=; "-DATE_FORMAT=dd/MM/yyyy hh:mm:ss" select cust_id, cust_name, cust_creation_date from Northwind.dbo.Customers
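The same unload can be written with the query passed inline using -QUERY instead of on the following line; a sketch reusing the connection values above.
OdiSqlUnload -FILE=C:\temp\clients.csv -DRIVER=sun.jdbc.odbc.JdbcOdbcDriver -URL=jdbc:odbc:NORTHWIND_ODBC -USER=sa -PASS=NFNEKKNGGJHAHBHDHEHJDBGBGFDGGH -FIELD_SEP=; "-QUERY=select cust_id, cust_name, cust_creation_date from Northwind.dbo.Customers"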
Use this command to start a Load Plan.
The -SYNC
parameter starts a load plan in synchronous or asynchronous mode. In synchronous mode, the tool ends with the same status as the completed load plan run.
Usage
OdiStartLoadPlan -LOAD_PLAN_NAME=<load_plan_name> [-LOG_LEVEL=<log_level>] [-CONTEXT=<context_code>] [-AGENT_URL=<agent_url>] [-AGENT_CODE=<logical_agent_code>] [-ODI_USER=<ODI User>] [-ODI_PASS=<ODI Password>] [-KEYWORDS=<Keywords>] [-<PROJECT_CODE>.<VARIABLE>=<var_value> ...] [-SYNC=<yes|no>] [-POLLINT=<msec>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
|
Yes |
Name of the load plan to start. |
|
No |
Level of logging information to retain. All sessions with a defined log level lower than or equal to this value are kept in the session log when the session completes. However, if object execution ends abnormally, all tasks are kept, regardless of this setting. Note that log level 6 has the same behavior as log level 5, but with the addition of variable and sequence tracking. See Tracking Variables and Sequences in Developing Integration Projects with Oracle Data Integrator for more information. |
|
Yes |
Code of the execution context. If this parameter is omitted, the load plan starts in the execution context of the calling session, if any. |
|
No |
URL of the remote agent that starts the load plan. |
|
No |
Code of the logical agent responsible for starting this load plan. If this parameter and |
|
No |
Oracle Data Integrator user to be used to start the load plan. The privileges of this user are used. If this parameter is omitted, the load plan is started with the privileges of the user launching the parent session. |
|
No |
Password of the Oracle Data Integrator user. This password must be encoded. This parameter is required if |
|
No |
Comma-separated list of keywords attached to this load plan. These keywords make load plan execution identification easier. |
|
No |
List of project or global variables whose value is set as the default for the execution of the load plan. Project variables should be named |
|
No |
Specifies whether the load plan should be executed synchronously or asynchronously. If set to Yes (synchronous mode), the load plan is started and runs to completion with a status of Done or Error before control is returned. If set to No (asynchronous mode), the load plan is started and control is returned before the load plan runs to completion. The default value is No. |
|
No |
The time in milliseconds to wait between polling the load plan run status for completion state. The |
Examples
Start load plan LOAD_DWH
in the GLOBAL
context on the same agent.
OdiStartLoadPlan -LOAD_PLAN_NAME=LOAD_DWH -CONTEXT=GLOBAL
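A sketch of the same start in synchronous mode: with -SYNC=yes the tool runs to completion and ends with the load plan's final status, here polling the run status every 2 seconds.
OdiStartLoadPlan -LOAD_PLAN_NAME=LOAD_DWH -CONTEXT=GLOBAL -SYNC=yes -POLLINT=2000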
Use this command to execute Oracle Warehouse Builder (OWB) objects from within Oracle Data Integrator and to retrieve the execution audit data into Oracle Data Integrator.
This command uses an Oracle Warehouse Builder runtime repository data server that can be created in Topology Navigator. This data server must connect as an Oracle Warehouse Builder user who can access an Oracle Warehouse Builder workspace. The physical schemas under this data server represent the Oracle Warehouse Builder workspaces that this user can access. For information about the Oracle Data Integrator topology, see Setting Up a Topology in Administering Oracle Data Integrator
Usage
OdiStartOwbJob -WORKSPACE=<logical_owb_repository> -LOCATION=<owb_location> -OBJECT_NAME=<owb_object> -OBJECT_TYPE=<owb_object_type> [-EXEC_PARAMS=<exec_params>] [-CONTEXT=<context_code>] [-LOG_LEVEL=<log_level>] [-SYNC_MODE=<1|2>] [-POLLINT=<n>] [-SESSION_NAME=<session_name>] [-KEYWORDS=<keywords>] [<OWB parameters>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
|
Yes |
Logical schema of the OWB Runtime Repository technology. This resolves to a physical schema that represents the Oracle Warehouse Builder workspace that contains the Oracle Warehouse Builder object to be executed. The Oracle Warehouse Builder workspace was chosen when you added a Physical Schema under the OWB Runtime Repository DataServer in Topology Navigator. The context for this mapping can also be specified using the |
|
Yes |
Name of the Oracle Warehouse Builder location that contains the Oracle Warehouse Builder object to be executed. This location must exist in the physical workspace that resolves from |
|
Yes |
Name of the Oracle Warehouse Builder object. This object must exist in |
|
Yes |
Type of Oracle Warehouse Builder object, for example:
|
|
No |
Custom and/or system parameters for the Oracle Warehouse Builder execution. |
|
No |
Execution context of the Oracle Warehouse Builder object. This is the context in which the logical workspace will be resolved. Studio editors use this value or the Default Context. Execution uses this value or the Parent Session context. |
|
No |
Log level (0-5). The default value is 5, which means that maximum details are captured in the log. |
|
No |
Synchronization mode of the Oracle Warehouse Builder job:
|
|
No |
The period of time in milliseconds to wait between each transfer of Oracle Warehouse Builder audit data to Oracle Data Integrator log tables. The default value is 0, which means that audit data is transferred at the end of the execution. |
|
No |
Name of the Oracle Warehouse Builder session as it appears in the log. |
|
No |
Comma-separated list of keywords attached to the session. |
|
No |
List of values for the Oracle Warehouse Builder parameters relevant to the object. This list is of the form |
Examples
Execute the Oracle Warehouse Builder process flow LOAD_USERS
that has been deployed to the Oracle Workflow DEV_OWF
.
OdiStartOwbJob -WORKSPACE=OWB_WS1 -CONTEXT=QA -LOCATION=DEV_OWF -OBJECT_NAME=LOAD_USERS -OBJECT_TYPE=PROCESSFLOW
Execute the Oracle Warehouse Builder PL/SQL map STAGE_USERS
that has been deployed to the database location DEV_STAGE
. Poll and transfer the Oracle Warehouse Builder audit data every 5 seconds. Pass the input parameter AGE_LIMIT
whose value is obtained from an Oracle Data Integrator variable, and specify an Oracle Warehouse Builder system parameter relevant to a PL/SQL map.
OdiStartOwbJob -WORKSPACE=OWB_WS1 -CONTEXT=QA -LOCATION=DEV_STAGE -OBJECT_NAME=STAGE_USERS -OBJECT_TYPE=PLSQLMAP -POLLINT=5000 -OWB_SYSTEM.MAX_NO_OF_ERRORS=25 -AGE_LIMIT=#VAR_MINAGE
Use this command to start a scenario.
The optional parameter -AGENT_CODE is used to dedicate this scenario to an agent other than the current agent.
The parameter -SYNC_MODE
starts a scenario in synchronous or asynchronous mode.
Note:
The scenario that is started must be present in the repository in which the command is launched. If you go to production with a scenario, make sure to also include all scenarios called by your scenario through this command. Solutions can help you group scenarios for this purpose.
Usage
OdiStartScen -SCEN_NAME=<scenario> -SCEN_VERSION=<version> [-CONTEXT=<context>] [-ODI_USER=<odi user> -ODI_PASS=<odi password>] [-SESSION_NAME=<session_name>] [-LOG_LEVEL=<log_level>] [-AGENT_CODE=<logical_agent_name>] [-SYNC_MODE=<1|2>] [-KEYWORDS=<keywords>] [-<VARIABLE>=<value>]*
Parameters
Parameters | Mandatory | Description |
---|---|---|
|
Yes |
Name of the scenario to start. |
|
Yes |
Version of the scenario to start. If the version specified is -1, the last version of the scenario is executed. |
|
No |
Code of the execution context. If this parameter is omitted, the scenario is executed in the execution context of the calling session. |
|
No |
Oracle Data Integrator user to be used to run the scenario. The privileges of this user are used. If this parameter is omitted, the scenario is executed with privileges of the user launching the parent session. |
|
No |
Password of the Oracle Data Integrator user. This password should be encoded. This parameter is required if the user is specified. |
|
No |
Name of the session that will appear in the execution log. |
|
No |
Trace level (0 .. 5) to keep in the execution log. The default value is 5. |
|
No |
Name of the logical agent responsible for executing this scenario. If this parameter is omitted, the current agent executes this scenario. |
|
No |
Synchronization mode of the scenario:
|
|
No |
Comma-separated list of keywords attached to this session. These keywords make session identification easier. |
|
No |
List of variables whose value is set for the execution of the scenario. This list is of the form |
Examples
Start the scenario LOAD_DWH
in version 2
in the production context (synchronous mode).
OdiStartScen -SCEN_NAME=LOAD_DWH -SCEN_VERSION=2 -CONTEXT=CTX_PRODUCTION
Start the scenario LOAD_DWH
in version 2
in the current context in asynchronous mode on the agent UNIX Agent
while passing the values of the variables START_DATE
(local) and COMPANY_CODE
(global).
OdiStartScen -SCEN_NAME=LOAD_DWH -SCEN_VERSION=2 -SYNC_MODE=2 "-AGENT_CODE=UNIX Agent" -MY_PROJECT.START_DATE=10-APR-2002 -GLOBAL.COMPANY_CODE=SP4356
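Because -SCEN_VERSION=-1 executes the latest version of a scenario, a start can be written without pinning a version; a sketch that also names the session and tags it with keywords (placeholder values).
OdiStartScen -SCEN_NAME=LOAD_DWH -SCEN_VERSION=-1 "-SESSION_NAME=Nightly DWH load" -KEYWORDS=NIGHTLY,DWH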
Use this command to extract an archive file to a directory.
Usage
OdiUnZip -FILE=<file> -TODIR=<target_directory> [-OVERWRITE=<yes|no>] [-ENCODING=<file_name_encoding>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
|
Yes |
Full path to the ZIP file to extract. |
|
Yes |
Destination directory or folder. |
|
No |
Indicates if the files that already exist in the target directory must be overwritten. The default value is No. |
|
No |
Character encoding used for file names inside the archive file. For a list of possible values, see:
Defaults to the platform's default character encoding. |
Examples
Extract the file archive_001.zip
from directory C:\archive\
into directory C:\TEMP
.
OdiUnZip "-FILE=C:\archive\archive_001.zip" -TODIR=C:\TEMP\
Use this command to unlock the ODI repository.
Note the following:
VCS Administrator privileges are required to run this command.
This tool can be run only from the command line.
Usage
OdiUnlockOdiRepository
Parameters
None
Use this command to force an agent to recalculate its schedule of tasks.
Usage
OdiUpdateAgentSchedule -AGENT_NAME=<physical_agent_name>
Parameters
Parameters | Mandatory | Description |
---|---|---|
|
Yes |
Name of the physical agent to update. |
Examples
Cause the physical agent agt_s1
to update its schedule.
OdiUpdateAgentSchedule -AGENT_NAME=agt_s1
Use this command to wait for the child sessions (started using the OdiStartScen tool) of the current session to complete.
This command checks every <polling_interval>
to determine if the sessions launched from <parent_sess_number>
are finished. If all child sessions (possibly filtered by their name and keywords) are finished (status of Done, Warning, or Error), this command terminates.
Usage
OdiWaitForChildSession [-PARENT_SESS_NO=<parent_sess_number>] [-POLL_INT=<polling_interval>] [-SESSION_NAME_FILTER=<session_name_filter>] [-SESSION_KEYWORDS=<session_keywords>] [-MAX_CHILD_ERROR=ALL|<error_number>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
|
No |
ID of the parent session. If this parameter is not specified, the current session ID is used. |
|
No |
Interval in seconds between each sequence of termination tests for the child sessions. The default value is 1. |
|
No |
Only child sessions whose names match this filter are tested. This filter can be a SQL LIKE-formatted pattern. |
|
No |
Only child sessions for which ALL keywords have a match in this comma-separated list are tested. Each element of the list can be a SQL LIKE-formatted pattern. |
|
No |
This parameter enables OdiWaitForChildSession to terminate in error if a number of child sessions have terminated in error:
If this parameter is equal to 0, negative, or not specified, OdiWaitForChildSession never terminates in an error status, regardless of the number of failing child sessions. |
Examples
Wait and poll every 5 seconds for all child sessions of the current session with a name filter of LOAD%
and keywords MANDATORY
and CRITICAL
to finish.
OdiWaitForChildSession -PARENT_SESS_NO=<%=odiRef.getSession("SESS_NO")%> -POLL_INT=5 -SESSION_NAME_FILTER=LOAD% -SESSION_KEYWORDS=MANDATORY,CRITICAL
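A sketch that polls every 10 seconds and terminates in error only if two or more child sessions fail, per the -MAX_CHILD_ERROR behavior described above.
OdiWaitForChildSession -POLL_INT=10 -MAX_CHILD_ERROR=2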
Use this command to wait for a number of rows in a table or set of tables. This can also be applied to a number of objects containing data, such as views.
The OdiWaitForData command tests that a table, or a set of tables, has been populated with a number of records. This test is repeated at regular intervals (-POLLINT
) until one of the following conditions is met: the desired number of rows for one of the tables has been detected (-UNIT_ROWCOUNT
), the desired, cumulated number of rows for all of the tables has been detected (-GLOBAL_ROWCOUNT
), or a timeout (-TIMEOUT
) has been reached.
Filters may be applied to the set of counted rows. They are specified by an explicit SQL where clause (-SQLFILTER) and/or the -RESUME_KEY_xxx parameters, which determine a field-value-operator clause. These two methods are cumulative (AND).
The row count may be considered either in absolute terms (with respect to the total number of rows in the table) or in differential terms (the difference between a stored reference value and the current row count value).
When dealing with multiple tables:
The -SQLFILTER
and -RESUME_KEY_xxx
parameters apply to ALL tables concerned.
The -UNIT_ROWCOUNT
parameter determines the row count to be expected for each table. The -GLOBAL_ROWCOUNT
parameter determines the SUM of the row count number cumulated over the set of tables. When only one table is concerned, the -UNIT_ROWCOUNT
and -GLOBAL_ROWCOUNT
parameters are equivalent.
Usage
OdiWaitForData -LSCHEMA=<logical_schema> -TABLE_NAME=<table_name> [-OBJECT_TYPE=<list of object types>] [-CONTEXT=<context>] [-RESUME_KEY_VARIABLE=<resumeKeyVariable> -RESUME_KEY_COL=<resumeKeyCol> [-RESUME_KEY_OPERATOR=<resumeKeyOperator>]|-SQLFILTER=<SQLFilter>] [-TIMEOUT=<timeout>] [-POLLINT=<pollInt>] [-GLOBAL_ROWCOUNT=<globalRowCount>] [-UNIT_ROWCOUNT=<unitRowCount>] [-TIMEOUT_WITH_ROWS_OK=<yes|no>] [-INCREMENT_DETECTION=<no|yes> [-INCREMENT_MODE=<M|P|I>] [-INCREMENT_SEQUENCE_NAME=<incrementSequenceName>]]
Parameters
Parameters | Mandatory | Description |
---|---|---|
|
Yes |
Logical schema containing the tables. |
|
Yes |
Table name, mask, or list of table names to check. This parameter accepts three formats:
|
|
No |
Type of objects to check. By default, only tables are checked. To take into account other objects, specify a comma-separated list of object types. Supported object types are:
|
|
No |
Context in which the logical schema will be resolved. If no context is specified, the execution context is used. |
|
No |
Explicit SQL filter to be applied to the table(s). This statement must be valid for the technology containing the checked tables. Note that this statement must not include the |
|
No |
The
|
|
No |
Maximum period of time in milliseconds over which data is polled. If this value is equal to 0, the timeout is infinite. The default value is 0. |
|
No |
The period of time in milliseconds to wait between data polls. The default value is 1000. |
|
No |
Number of rows expected in a polled table to terminate the command. The default value is 1. |
|
No |
Total number of rows expected cumulatively, over the set of tables, to terminate the command. If not specified, the default value 1 is used. |
|
No |
Defines the mode in which the command considers row count: either in absolute terms (with respect to the total number of rows in the table) or in differential terms (the difference between a stored reference value and the current row count value).
The default value is No. |
|
No |
This parameter specifies the persistence mode of the reference value between successive OdiWaitForData calls. Possible values are:
The default value is Note that using the Persistent or Initial modes is not supported when a mask or list of tables is polled. |
|
No |
This parameter specifies the name of an automatically allocated storage space used for reference value persistence. This increment sequence is stored in the Repository. If this name is not specified, it takes the name of the table. Note that this Increment Sequence is not an Oracle Data Integrator Sequence and cannot be used as such outside a call to OdiWaitForData. |
|
No |
If set to Yes and at least one row has been detected when the timeout occurs before the expected number of rows has been inserted, the tool exits with a return code of 0. Otherwise, it signals an error. The default value is Yes. |
Examples
Wait for the DE1P1
table in the ORA_WAITFORDATA
schema to contain 200 records matching the filter.
OdiWaitForData -LSCHEMA=ORA_WAITFORDATA -TABLE_NAME=DE1P1 -GLOBAL_ROWCOUNT=200 "-SQLFILTER=DATMAJ > to_date('#MAX_DE1_DATMAJ_ORACLE_CHAR', 'DD/MM/YYYY HH24:MI:SS')"
Wait for a maximum of 4 hours for new data to appear in either the `CITY_SRC` or the `CITY_TRG` table in the logical schema `SQLSRV_SALES`.
OdiWaitForData -LSCHEMA=SQLSRV_SALES -TABLE_NAME=CITY% -TIMEOUT=14400000 -INCREMENT_DETECTION=yes
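A hypothetical multi-table example (the `DE1%` table mask and the reuse of the `ORA_WAITFORDATA` schema are assumptions for illustration): wait for each table matching the mask to receive at least 10 rows, and for at least 100 rows cumulated over all matching tables, polling every 2 seconds for at most one hour.

OdiWaitForData -LSCHEMA=ORA_WAITFORDATA -TABLE_NAME=DE1% -UNIT_ROWCOUNT=10 -GLOBAL_ROWCOUNT=100 -POLLINT=2000 -TIMEOUT=3600000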
Use this command to wait for load plan runs to complete.
Usage
OdiWaitForLoadPlans [-PARENT_SESS_NO=<parent_sess_guid>] [-LP_NAME_FILTER=<load_plan_name_filter>] [-LP_KEYWORDS=<load_plan_keywords>] [-MAX_LP_ERROR=ALL|<number_of_lp_errors>] [-POLLINT=<polling_interval_msec>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
`-PARENT_SESS_NO` | No | Global ID of the parent session that started the load plan. If this parameter is not specified, the global ID of the current session is used. |
`-LP_NAME_FILTER` | No | Only load plan runs whose name matches this filter are tested for completion status. This filter can be a SQL LIKE-formatted pattern. |
`-LP_KEYWORDS` | No | Only load plan runs whose keywords contain all entries in this comma-separated list are tested for completion status. Each element in the list can be a SQL LIKE-formatted pattern. |
`-MAX_LP_ERROR` | No | OdiWaitForLoadPlans terminates in error when a number of load plan runs are in Error status: if set to a number, the command terminates in error as soon as at least that many selected load plan runs are in Error status; if set to `ALL`, it terminates in error only when all selected load plan runs are in Error status. If this parameter is not specified or its value is less than 1, OdiWaitForLoadPlans never terminates in error, regardless of the number of load plan runs in Error status. |
`-POLLINT` | No | The time in milliseconds to wait between polling load plan runs status for completion state. The default value is 1000 (1 second). The value must be greater than 0. |
Examples
Wait and poll every 5 seconds for all load plan runs started by the current session with a name filter of `POPULATE%` and keywords `MANDATORY` and `CRITICAL` to finish in a Done or Error status. If 2 or more load plan runs are in Error status when execution is complete for all selected load plan runs, OdiWaitForLoadPlans ends in error.
OdiWaitForLoadPlans -PARENT_SESS_NO=<%=odiRef.getSession("SESS_GUID")%> -LP_NAME_FILTER=POPULATE% -LP_KEYWORDS=MANDATORY,CRITICAL -POLLINT=5000 -MAX_LP_ERROR=2
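A minimal hypothetical variant (the `LOAD_DWH%` name filter is an assumption): wait for all matching load plan runs started by the current session, and terminate in error only if all of them finish in Error status.

OdiWaitForLoadPlans -LP_NAME_FILTER=LOAD_DWH% -MAX_LP_ERROR=ALL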
Use this command to wait for a number of modifications to occur on a journalized table or a list of journalized tables.
The OdiWaitForLogData command determines whether rows have been modified on a table or a group of tables. These changes are detected using the Oracle Data Integrator changed data capture (CDC) in simple mode (using the `-TABLE_NAME` parameter) or in consistent mode (using the `-CDC_SET_NAME` parameter). The test is repeated every `-POLLINT` milliseconds until one of the following conditions is met: the desired number of row modifications for one of the tables has been detected (`-UNIT_ROWCOUNT`), the desired cumulative number of row modifications for all of the tables has been detected (`-GLOBAL_ROWCOUNT`), or a timeout (`-TIMEOUT`) has been reached.
Note:
This command takes into account all journalized operations (inserts, updates, and deletes).
This command applies only to journalized tables, in either simple or consistent mode.
Usage
OdiWaitForLogData -LSCHEMA=<logical_schema> -SUBSCRIBER_NAME=<subscriber_name> (-TABLE_NAME=<table_name> | -CDC_SET_NAME=<cdcSetName>) [-CONTEXT=<context>] [-TIMEOUT=<timeout>] [-POLLINT=<pollInt>] [-GLOBAL_ROWCOUNT=<globalRowCount>] [-UNIT_ROWCOUNT=<unitRowCount>] [-OPTIMIZED_WAIT=<yes|no|AUTO>] [-TIMEOUT_WITH_ROWS_OK=<yes|no>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
`-CONTEXT` | No | Context in which the logical schema will be resolved. If no context is specified, the execution context is used. |
`-GLOBAL_ROWCOUNT` | No | Total number of changes expected in the tables or the CDC set to end the command. The default value is 1. |
`-LSCHEMA` | Yes | Logical schema containing the journalized tables. |
`-OPTIMIZED_WAIT` | No | Method used to access the journals. Possible values are yes, no, and AUTO. The default value is `AUTO`. |
`-POLLINT` | No | Period of time in milliseconds to wait between polls. The default value is 2000. |
`-SUBSCRIBER_NAME` | Yes | Name of the subscriber used to get the journalizing information. |
`-TABLE_NAME` | Yes | Journalized table name, mask, or list to check. This parameter accepts three formats: a table name, a table name mask (using the `%` wildcard), or a comma-separated list of table names or masks. Note that this option works only for tables in a model journalized in simple mode. This parameter cannot be used with `-CDC_SET_NAME`. |
`-CDC_SET_NAME` | Yes | Name of the CDC set to check. This CDC set name is the fully qualified model code. It can be obtained in the current context using a substitution method API call. Note that this option works only for tables in a model journalized in consistent mode. This parameter cannot be used with `-TABLE_NAME`. |
`-TIMEOUT` | No | Maximum period of time in milliseconds over which changes are polled. If this value is equal to 0, the timeout is infinite. The default value is 0. |
`-TIMEOUT_WITH_ROWS_OK` | No | If this parameter is set to Yes and at least one row was detected, the command exits with a return code of 0 even when the timeout occurs before the predefined number of rows has been polled. Otherwise, it signals an error. The default value is Yes. |
`-UNIT_ROWCOUNT` | No | Number of changes expected in one of the polled tables to end the command. The default value is 1. Note that `-UNIT_ROWCOUNT` is not taken into account when a CDC set is polled with `-CDC_SET_NAME`. |
Examples
Wait for the `CUSTOMERS` table in the `SALES_APPLICATION` schema to have 200 row modifications recorded for the `SALES_SYNC` subscriber.
OdiWaitForLogData -LSCHEMA=SALES_APPLICATION -TABLE_NAME=CUSTOMERS -GLOBAL_ROWCOUNT=200 -SUBSCRIBER_NAME=SALES_SYNC
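A hypothetical consistent-mode variant (the CDC set name `SALES` is an assumption; in practice the CDC set name is the fully qualified model code, typically obtained through a substitution method API call): wait at most 10 minutes for 100 cumulated changes on the CDC set.

OdiWaitForLogData -LSCHEMA=SALES_APPLICATION -CDC_SET_NAME=SALES -SUBSCRIBER_NAME=SALES_SYNC -GLOBAL_ROWCOUNT=100 -TIMEOUT=600000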
Use this command to wait for a table to be created and populated with a predefined number of rows.
The OdiWaitForTable command regularly tests whether the specified table has been created and populated with a number of records. The test is repeated every `-POLLINT` milliseconds until the table exists and contains the desired number of rows (`-GLOBAL_ROWCOUNT`), or until a timeout (`-TIMEOUT`) is reached.
Usage
OdiWaitForTable [-CONTEXT=<context>] -LSCHEMA=<logical_schema> -TABLE_NAME=<table_name> [-TIMEOUT=<timeout>] [-POLLINT=<pollInt>] [-GLOBAL_ROWCOUNT=<globalRowCount>] [-TIMEOUT_WITH_ROWS_OK=<yes|no>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
`-CONTEXT` | No | Context in which the logical schema will be resolved. If no context is specified, the execution context is used. |
`-GLOBAL_ROWCOUNT` | No | Total number of rows expected in the table to terminate the command. The default value is 1. If not specified, the command finishes when a new row is inserted into the table. |
`-LSCHEMA` | Yes | Logical schema in which the table is searched for. |
`-POLLINT` | No | Period of time in milliseconds to wait between each test. The default value is 1000. |
`-TABLE_NAME` | Yes | Name of the table to search for. |
`-TIMEOUT` | No | Maximum time in milliseconds the table is searched for. If this value is equal to 0, the timeout is infinite. The default value is 0. |
`-TIMEOUT_WITH_ROWS_OK` | No | If this parameter is set to Yes and at least one row was detected, the command exits with a return code of 0 even when the timeout occurs before the expected number of records is detected. Otherwise, it signals an error. The default value is Yes. |
Examples
Wait for the `DE1P1` table in the `ORA_WAITFORDATA` schema to exist, and to contain at least 1 record.
OdiWaitForTable -LSCHEMA=ORA_WAITFORDATA -TABLE_NAME=DE1P1 -GLOBAL_ROWCOUNT=1
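A hypothetical variant (the `SALES_STG` table name is an assumption): wait at most 5 minutes for the table to exist and contain at least 1000 rows, signaling an error if the timeout expires first, even if some rows were detected.

OdiWaitForTable -LSCHEMA=ORA_WAITFORDATA -TABLE_NAME=SALES_STG -GLOBAL_ROWCOUNT=1000 -TIMEOUT=300000 -TIMEOUT_WITH_ROWS_OK=no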
Use this command to concatenate elements from multiple XML files into a single file.
This tool extracts all instances of a given element from a set of source XML files and concatenates them into one target XML file. The tool parses and generates well-formed XML. It does not modify or generate a DTD for the generated files. A reference to an existing DTD can be specified in the `-HEADER` parameter or preserved from the original files using `-KEEP_XML_PROLOGUE`.
Note:
XML namespaces are not supported by this tool. Provide the local part of the element name (without the namespace or prefix value) in the `-XML_ELEMENT` parameter.
Usage
OdiXMLConcat -FILE=<file_filter> -TOFILE=<target_file> -XML_ELEMENT=<element_name> [-CHARSET_ENCODING=<encoding>] [-IF_FILE_EXISTS=<overwrite|skip|error>] [-KEEP_XML_PROLOGUE=<all|xml|doctype|none>] [-HEADER=<header>] [-FOOTER=<footer>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
`-FILE` | Yes | Filter for the source XML files. This filter uses standard file wildcards (`*` and `?`). |
`-TOFILE` | Yes | Target file into which the elements are concatenated. |
`-XML_ELEMENT` | Yes | Local name of the XML element to extract (without enclosing `<` and `>` characters or namespace prefix). Note that this element detection is not recursive: if a given instance of the element contains other instances of the same element, the nested instances are not detected separately. |
`-CHARSET_ENCODING` | No | Target files encoding. The default value is `ISO-8859-1`. |
`-IF_FILE_EXISTS` | No | Defines the behavior when the target file exists: `overwrite` replaces the existing file, `skip` does nothing for this file, and `error` raises an error. |
`-KEEP_XML_PROLOGUE` | No | Copies the source file XML prologue into the target file. Depending on this parameter's value, the following parts of the XML prologue are preserved: `all` (the entire prologue), `xml` (the XML declaration), `doctype` (the document type declaration), or `none` (nothing). Note: If all or part of the prologue is not preserved, it should be specified in the `-HEADER` parameter. |
`-HEADER` | No | String that is appended after the prologue (if any) in each target file. You can use this parameter to create a customized XML prologue or root element. |
`-FOOTER` | No | String that is appended at the end of each target file. You can use this parameter to close a root element added in the header. |
Examples
Concatenate the content of the IDOC elements in the files `ord1.xml`, `ord2.xml`, and so on in the `ord_i` subfolder into the file `MDSLS.TXT.XML`, with the root element `<WMMBID02>` added to the target.
OdiXMLConcat "-FILE=./ord_i/ord*.xml" "-TOFILE=./MDSLS.TXT.XML" -XML_ELEMENT=IDOC "-CHARSET_ENCODING=UTF-8" -IF_FILE_EXISTS=overwrite -KEEP_XML_PROLOGUE=xml "-HEADER=<WMMBID02>" "-FOOTER=</WMMBID02>" OdiXMLConcat "-FILE=./o?d_*/ord*.xml" "-TOFILE=./MDSLS.TXT.XML" -XML_ELEMENT=IDOC "-CHARSET_ENCODING=UTF-8" -IF_FILE_EXISTS=overwrite -KEEP_XML_PROLOGUE=none "-HEADER=<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<WMMBID02>" "-FOOTER=</WMMBID02>"
Concatenate the EDI elements of the files `ord1.xml`, `ord2.xml`, and so on in the `ord_i` subfolder into the file `MDSLS2.XML`. This file will have the new root element `EDI_BATCH` above all `<EDI>` elements.
OdiXMLConcat "-FILE=./o?d_?/ord*.xml" "-TOFILE=./MDSLS2.XML" -XML_ELEMENT=EDI "-CHARSET_ENCODING=UTF-8" -IF_FILE_EXISTS=overwrite -KEEP_XML_PROLOGUE=xml "-HEADER= <EDI_BATCH>" "-FOOTER=</EDI_BATCH>"
Use this command to split one XML file into several smaller XML files.

This tool extracts all instances of a given element stored in a source XML file and splits them across several target XML files. This tool parses and generates well-formed XML. It does not modify or generate a DTD for the generated files. A reference to an existing DTD can be specified in the `-HEADER` parameter or preserved from the original files using `-KEEP_XML_PROLOGUE`.
Note:
XML namespaces are not supported by this tool. Provide the local part of the element name (without the namespace or prefix value) in the `-XML_ELEMENT` parameter.
Usage
OdiXMLSplit -FILE=<file> -TOFILE=<file_pattern> -XML_ELEMENT=<element_name> [-CHARSET_ENCODING=<encoding>] [-IF_FILE_EXISTS=<overwrite|skip|error>] [-KEEP_XML_PROLOGUE=<all|xml|doctype|none>] [-HEADER=<header>] [-FOOTER=<footer>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
`-FILE` | Yes | Source XML file to split. |
`-TOFILE` | Yes | File pattern for the target files. Each file is named after a pattern containing a mask representing a generated number sequence or the value of an attribute of the XML element used to perform the split: `*` in the pattern is replaced by an incrementing number (for example, `ord*.xml` produces `ord1.xml`, `ord2.xml`, and so on), and `[<attribute>]` is replaced by the value of that attribute of the split element (for example, `ord[BEGIN].xml`). Note that the pattern can be used for creating different files within a directory or files in different directories. |
`-XML_ELEMENT` | Yes | Local name of the XML element to extract (without enclosing `<` and `>` characters or namespace prefix). Note that this element detection is not recursive: if a given instance of the element contains other instances of the same element, the nested instances are not detected separately. |
`-CHARSET_ENCODING` | No | Target files encoding. The default value is `ISO-8859-1`. |
`-IF_FILE_EXISTS` | No | Defines the behavior when the target file exists: `overwrite` replaces the existing file, `skip` does nothing for this file, and `error` raises an error. |
`-KEEP_XML_PROLOGUE` | No | Copies the source file XML prologue into the target file. Depending on this parameter's value, the following parts of the XML prologue are preserved: `all` (the entire prologue), `xml` (the XML declaration), `doctype` (the document type declaration), or `none` (nothing). Note: If all or part of the prologue is not preserved, it should be specified in the `-HEADER` parameter. |
`-HEADER` | No | String that is appended after the prologue (if any) in each target file. You can use this parameter to create a customized XML prologue or root element. |
`-FOOTER` | No | String that is appended at the end of each target file. You can use this parameter to close a root element added in the header. |
Examples
Split the file `MDSLS.TXT.XML` into several files. The files `ord1.xml`, `ord2.xml`, and so on are created and contain each instance of the IDOC element contained in the source file.
OdiXMLSplit "-FILE=./MDSLS.TXT.XML" "-TOFILE=./ord_i/ord*.xml" -XML_ELEMENT=IDOC "-CHARSET_ENCODING=UTF-8" -IF_FILE_EXISTS=overwrite -KEEP_XML_PROLOGUE=xml "-HEADER= <WMMBID02>" "-FOOTER= </WMMBID02>"
Split the file `MDSLS.TXT.XML` the same way as in the previous example, except name the files using the value of the `BEGIN` attribute of the IDOC element that is being split. The XML prologue is not preserved in this example but entirely generated in the header.
OdiXMLSplit "-FILE= ./MDSLS.TXT.XML" "-TOFILE=./ord_i/ord[BEGIN].xml" -XML_ELEMENT=IDOC "-CHARSET_ENCODING=UTF-8" -IF_FILE_EXISTS=overwrite -KEEP_XML PROLOGUE=none "-HEADER= <?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<WMMBID02>" "-FOOTER=</WMMBID02>"
Use this command to create a ZIP file from a directory or several files.
Usage
OdiZip -DIR=<directory> -FILE=<file> -TOFILE=<target_file> [-OVERWRITE=<yes|no>] [-RECURSE=<yes|no>] [-CASESENS=<yes|no>] [-ENCODING=<file_name_encoding>]
Parameters
Parameters | Mandatory | Description |
---|---|---|
`-DIR` | Yes if `-FILE` is omitted | Base directory (or folder) that will be the future root in the ZIP file to generate. If only `-DIR` is specified, all files under this directory are archived. |
`-FILE` | Yes if `-DIR` is omitted | Path, from the base directory, of the file(s) to archive. Use `*` to specify generic (wildcard) characters. |
`-TOFILE` | Yes | Target ZIP file. |
`-OVERWRITE` | No | Indicates whether the target ZIP file must be overwritten (Yes) or simply updated if it already exists (No). By default, the ZIP file is updated if it already exists. |
`-RECURSE` | No | Indicates whether the archiving is recursive in the case of a directory that contains other directories. The value No indicates that only the files contained directly in the directory to copy (without the subfolders) are archived. |
`-CASESENS` | No | Indicates whether the file search is case sensitive. By default, Oracle Data Integrator searches for files in uppercase (set to No). |
`-ENCODING` | No | Character encoding to use for file names inside the archive file. This defaults to the platform's default character encoding. For the list of supported encodings, see the supported encodings list in the Java documentation. |
Examples
Create an archive of the directory `C:\Program Files\odi`.
OdiZip "-DIR=C:\Program Files\odi" -FILE=*.* -TOFILE=C:\TEMP\odi_archive.zip
Create an archive of the directory `C:\Program Files\odi` while preserving the `odi` directory in the archive.
OdiZip "-DIR=C:\Program Files" -FILE=odi\*.* -TOFILE=C:\TEMP\odi_archive.zip